40829313
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Pocket%20Plus
Samsung Galaxy Pocket Plus
The Samsung Galaxy Pocket Plus is an Android smartphone manufactured by Samsung that was released in January 2013 as the successor to the original Samsung Galaxy Pocket. The handset is still budget-oriented, with a relatively small 2.8-inch LCD. Its specifications are similar to those of the original Samsung Galaxy Pocket, with only minor upgrades such as the operating system, Android 4.0 Ice Cream Sandwich (ICS), and the internal storage. The Pocket Plus is powered by an 850 MHz processor and offers connectivity options including 3G, Wi-Fi and Bluetooth 4.0. Internally, it comes with an upgraded 4 GB of storage, which can be further expanded by up to 32 GB using a microSD card, and a 1200 mAh Li-ion battery.

Features
The Samsung Galaxy Pocket Plus comes with a 2.8-inch QVGA display. The device includes a 1200 mAh Li-ion battery and offers connectivity options including EDGE, HSDPA, Wi-Fi (b/g/n) and Bluetooth. It also features GPS, a 2 MP rear camera, a single SIM slot and the Social Hub app, which unifies every account registered on the phone in a single app. The phone runs Samsung's TouchWiz Nature UX skin on top of Android 4.0. The Samsung Galaxy Pocket Plus is still marketed as "Pocket Friendly" because it can be slipped into pockets easily. It has been highly preferred by people with visual impairments, mainly because its Android 4.0.4 ICS operating system works well with Explore by Touch, unlike other versions of the Android operating system.

See also
Samsung Galaxy Pocket Neo, successor released in June 2013
Samsung Galaxy Y
Samsung Galaxy Mini
Samsung i5500 (Galaxy 5)

Notes

External links

Samsung mobile phones
Samsung Galaxy
Smartphones
Android (operating system) devices
Mobile phones introduced in 2013
8260771
https://en.wikipedia.org/wiki/Alpine%20%28email%20client%29
Alpine (email client)
Alpine is a free software email client developed at the University of Washington. Alpine is a rewrite of the Pine Message System that adds support for Unicode and other features. Alpine is meant to be suitable for both inexperienced email users and the most demanding of power users. Alpine can be learned by exploration and the use of context-sensitive help. The user interface can be customized.

Features
Alpine shares many features common to console applications, such as a rich set of shortcut keys and the use of the keyboard instead of a mouse for all navigation and operations. In fact, all operations in Alpine have corresponding shortcut keys. Unlike other console applications targeting developers and experienced users, which often require users to edit a configuration file, Alpine lets users change most configuration options within the software. This makes Alpine one of the easiest console-based email clients to learn. Alpine supports the IMAP, POP, SMTP, NNTP and LDAP protocols natively. Although it does not support composing HTML email, it can display emails that only have HTML content as text. Alpine can read and write folders in several formats, including Maildir, mbox, the mh format used by the mh message handling system, mbx, and MIX. Alpine includes its own editor, Pico (short for PIne COmposer), which provides commands for basic editing of files, such as search and replace, spell checking, and justification of text, besides cut and paste and intuitive navigation commands. However, any editor can be used to compose messages in Alpine by setting the Editor configuration variable. Besides being able to set up an alternate editor, users can configure more than a hundred variables and options to their liking, including configurations for sending and receiving e-mail from different services, through an Incoming Folders collection and the use of personalities (called roles in Alpine); a user can therefore share the same address book between different accounts. Alpine can also sort individual folders by several criteria, including threading, original sender, recipient, and size. Alpine also allows users to configure colors, filters, scores, and the character set of the display, among other things. Configuration and address books can be saved locally or on a remote IMAP server, so that they are accessible from anywhere. Alpine also handles encrypted and signed messages using the S/MIME standard. Although Alpine was designed to be accessible to beginners, it can easily be set up for more advanced users. All screens in Alpine include built-in help, which can be quickly accessed with the CTRL-G command.
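For illustration only, here is a minimal sketch of how such settings might appear in the personal configuration file (~/.pinerc, a format Alpine inherits from Pine); the host names below are placeholders, and the exact option names should be verified against the built-in help of the installed version:

# Hypothetical ~/.pinerc excerpt; lines starting with # are comments.
personal-name=Jane Doe
smtp-server=smtp.example.org
inbox-path={imap.example.org/ssl/user=jane}inbox
# Compose messages with an external editor instead of the built-in Pico.
editor=/usr/bin/vim
feature-list=enable-alternate-editor-cmd,enable-alternate-editor-implicitly

Because, as noted above, configuration can also be stored on a remote IMAP server, the same settings can follow a user between machines.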
History

University of Washington
Alpine 1.0 was publicly released on December 20, 2007. On 4 August 2008, the UW Alpine team announced that after one more release, incorporating Web Alpine 2.0, they would "shift [their] effort from direct development into more of a consultation and coordination role to help integrate contributions from the community." This was taken to mean that UW no longer maintained Alpine and had left development to others.

re-alpine fork
In June 2009, a project named re-alpine was created on SourceForge. This was used as an upstream for patches from maintainers. In August 2013, the re-alpine project officially announced the December 21, 2012, release of Re-alpine 2.03, their last official release.

Current
Since January 2013, Eduardo Chappa, an active software developer formerly from the University of Washington, has released newer versions of Alpine from his site. His announcement was made public on the Usenet newsgroup comp.mail.pine. Most major Unix-like systems currently use this as the primary upstream site. On March 17, 2017, Chappa announced the release of version 2.21. Version 2.22 was released on January 19, 2020. Version 2.23 was released on June 19, 2020. Version 2.24 was released on October 10, 2020. The latest stable version, 2.25, was released on September 18, 2021, while the most recent development version, 2.25.1, was released on December 3, 2021.

Name
"Alpine" officially stands for Alternatively Licensed Program for Internet News and Email. UW has also referred to it as "Apache Licensed Pine".

License
Alpine is licensed under the Apache License (version 2, November 29, 2006) and saw its first public alpha release on December 20, 2007. This milestone was a new approach, since the alpha tests of Pine had always been non-public.

See also
Comparison of email clients
Text-based email client
UW IMAP

References

External links

Software that uses ncurses
2007 software
Free email software
Email client software for Linux
MacOS email clients
Windows email clients
Unix Internet software
Portable software
Console applications
Free software programmed in C
39683877
https://en.wikipedia.org/wiki/Universal%20electronic%20card
Universal electronic card
Universal electronic card (UEC) was an electronic identity card issued to Russian citizens from January 2013 to December 2016. The UEC allowed citizens to order, pay for and receive government services remotely, and it replaced a number of documents, including the medical insurance policy and the pension insurance certificate, bringing together different services on a single card, including an electronic purse, a debit card, an electronic signature, travel tickets and other possibilities. The project was abandoned on 1 January 2017, and the cards' electronic signatures were revoked in April 2017.

History
In July 2010, Federal Law No. 210-FZ of July 27, 2010 "On the organization of public and municipal services" was adopted, defining the universal electronic card, the objectives of its creation and its areas of application. In February 2011, Russian President Dmitry Medvedev chaired a meeting of the Commission for Modernization and Technological Development of Russia dedicated to the implementation of one of the commission's most important projects: the introduction of the universal electronic card as a means of delivering public and municipal services to the population. It became known that Russian companies would not have a domestic microprocessor ready in time for 2012. The Federal Security Service, Russia's internal security service, considered the use of a foreign-made microprocessor in the card dangerous, but a political decision was made to use a foreign-made microprocessor temporarily in 2012. In October 2011, Russian President Dmitry Medvedev signed a presidential decree on the placement of the national emblem on the universal electronic card. It also became clear that there were difficulties in implementing the electronic system of interdepartmental cooperation, whose launch was postponed to mid-2012; without that system in operation, issuing the card would be meaningless. The start of issuance of the universal electronic card was therefore rescheduled for January 1, 2013. Also in October 2011, it was decided to use only the Russian UEC payment system PRO100, excluding the participation of the international payment systems Visa and MasterCard; Sberbank promised to "neutralize lobbyists of the international payment systems in the bodies of the Russian government". Beginning in February 2012, nine pilot regions of Russia (Moscow, St. Petersburg, Astrakhan Oblast, the Republic of Bashkortostan, Volgograd Oblast, Novosibirsk Oblast, Penza Oblast, Krasnodar Krai and Tatarstan) started issuing cards to participants of the project in order to verify that it worked in practice. On 16 February 2012, a guide titled "How to use UEC" was published, which described a duress-PIN mechanism that existed only in the developers' unfulfilled plans: "If citizens are forced to enter a PIN code, false PINs are provided, the entry of which will buy time and allow the security structures to provide prompt assistance to the citizen". In September 2012, it was reported that the technologies used and tested in the UEC would be used in an electronic passport scheduled for release a few years later. In November 2012, the official UEC website launched a monitor of the readiness of Russian regions for the launch of the project. Readiness included the ability to issue the UEC in the region, the provision of secure communication channels, access to regional public services via the UEC, the establishment of call centers, and so on. The monitoring data was updated once a week.
In December 2012, an order required that all multi-function centers providing state and municipal services be equipped with card readers for universal electronic cards. In January 2013, the certification of microprocessors for universal electronic cards from the Russian company Mikron was completed. Some universal electronic cards were also issued with a foreign-made microprocessor, supplied by JSC "Atlas of cards"; this was done so that competition between the two companies would reduce the production cost of the UEC. The card began to be issued to Russian citizens upon application, which could be submitted at points for receiving applications and issuing cards opened across Russia. The president of UEC, Alex Popov, said that the card uses a CryptoPro electronic signature. In February 2013, it was announced that some regions of the Russian Federation did not have enough money to ensure the production and issuance of cards. Sberbank decided to become a shareholder in the authorized regional organizations, enabling it to finance their work and to create unified regional processing and billing centers, which would take on the expenditure side of issuing all UECs and, on the revenue side, would handle the accrual of utility payments across the regions and act as transport operators. In spring 2013, the issuance of regional social cards stopped; citizens would instead be given universal electronic cards. In late spring 2013, it was planned to open a portal for citizens' identification and payment for public services. Citizens would be able to register on the portal with the help of the UEC, using a card reader. In December 2016, Federal Law No. 471-FZ of December 28, 2016 formalized the cancellation of the project.

Details
The citizen's identification data is printed on the reverse side of the card: surname, given name(s), sex, date of birth, holder's signature, photograph, card number and expiration date, and the identifiers of the mandatory pension insurance system (SNILS) and the compulsory health insurance system (the OMC insurance policy). The chip stores the identification data indicated on the reverse side of the card, reinforced by an electronic signature, along with transport application data on purchased tickets and remaining trips. Other details about citizens are still stored in the databases of public institutions; the UEC only helps to find the matching records about the holder when an application is made to the appropriate agency, over the secure communication channels of the system of inter-agency electronic interaction.

Security and safety
The developers of the UEC claimed that using the data without the holder's consent is not possible. Using the card requires a PIN, which the citizen must, for security, keep away from the card and secret from everyone. In a personal online account, the citizen can set up additional security measures and limit the use of services. If the card is lost, it can be blocked through the personal account or by calling the hotline; this information is automatically registered with the government services that provide e-services, so taking advantage of a lost card becomes impossible. At points accepting applications, documents are not left on the desk but are immediately placed in a safe, and the premises are under video surveillance. After an application is received, the information is sent over secure communication channels to Moscow.
After a card is produced, the special-communications delivery service sends the PIN codes in a separate envelope and sends the universal electronic cards separately; the two are brought together only at the points of issue. Information on the card can be read only by terminals certified for the UEC. Card blanks for the UEC are made at two plants admitted to the unified payment-service system "Universal Electronic Card": the state enterprise Goznak and ZAO "First Printing House". Unlike bank cards, universal electronic cards are made of laser-sensitive polycarbonate, which has a high resistance to wear. There are other security measures that are applied to the card. Personal information is applied by laser engraving at the regional centers for universal electronic card personalization. Because the laser engraving process places the photographic image and the text not on the card surface but in an inner film layer, it is impossible to replace the photo or text without damaging the document. The UEC card carries a microprocessor certified by the Russian Federal Security Service in accordance with information security requirements. The software installed at the stage of attaching the electronic signature is certified by the Federal Security Service and satisfies the most stringent information security requirements. The cryptographic algorithms selected for the UEC, and the plans to study their security, were found to correspond to the highest standards of information security at the Fifteenth International Conference "RusKripto'2013" on 30 March 2013.

References

Government of Russia
Russia
Law enforcement in Russia
22549791
https://en.wikipedia.org/wiki/Techskills
Techskills
TechSkills is a nationally recognized career training school, specializing in skills and certification training for careers in Information Technology, Healthcare Services and Business. It is a private, for-profit provider of career training for adult learners. The company operates 18 campuses in 13 states in the United States.

History
TechSkills was founded in 1996 as AmeriTrain by John Poland. Its first campus opened in Brookfield, Wisconsin in 1996. In 1999, the company was renamed TechSkills. Five new campuses were opened in 1999, with 12 campuses added in 2000, nine campuses in 2001, and one campus in 2002. TechSkills' corporate headquarters was relocated to Austin, Texas in 2001. Beginning in 2010, TechSkills LLC campuses began to see a dramatic decline, and the company began closing multiple campuses and selling others to TechSkills of California (Las Vegas). More campus closures were expected throughout 2011 and 2012. More than 40,000 students have graduated from its programs.

Campus locations

Accreditation
The following TechSkills campuses are nationally accredited by the Accrediting Council for Continuing Education & Training (ACCET): Brookfield, Charlotte, Cincinnati, Columbus, Kansas City, Las Vegas, Sacramento, and San Jose.

Courses of study
TechSkills offers career education programs in Information Technology, Allied Health Services, Accounting & Bookkeeping, and Project Management. TechSkills students may pursue industry certification through Information Technology vendors including Microsoft, Cisco, Oracle, CompTIA, Prosoft, and (ISC)2, and through Health Services associations including the National Healthcareer Association and the Pharmacy Technician Certification Board. Information Technology graduates may enter careers in IT engineering, network design, software development, database administration, IT security, web design or computer hardware. Health Services graduates may enter careers in medical coding, medical billing, medical transcription, medical office administration or pharmacy.

Learning model
TechSkills uses blended learning to deliver its education programs, and emphasizes flexibility and preparation for employment.

References

External links
TechSkills

Vocational education
32821005
https://en.wikipedia.org/wiki/Timeline%20of%20the%202011%20Libyan%20Civil%20War%20and%20military%20intervention%20%2819%20March%20%E2%80%93%20May%29
Timeline of the 2011 Libyan Civil War and military intervention (19 March – May)
The events regarding the military intervention on 19 March can be tracked in the related articles:
Timeline of the 2011 Libyan Civil War before military intervention
Timeline of the 2011 Libyan Civil War and military intervention (June–15 August)
Timeline of the 2011 Libyan Civil War and military intervention (16 August – 23 October)

The Libyan Civil War began on 15 February 2011 as a civil protest and later evolved into a widespread uprising. However, by 19 March, Libyan forces under Colonel Muammar Gaddafi were on the brink of a decisive victory over rebels in Libya's east. That day, leading NATO members acted on United Nations Security Council Resolution 1973, which authorized member states "to take all necessary measures... to protect civilians and civilian populated areas under threat of attack in the Libyan Arab Jamahiriya, including Benghazi, while excluding an occupation force". Following the implementation of a Libyan no-fly zone, the Battle of Ajdabiya and the Second Battle of Benghazi, the rebels recovered all territory lost in the Gaddafi loyalist offensive. However, because the bulk of their forces were armed civilians lacking both leadership and communication, they quickly fell into disarray and were forced into retreat. With NATO attacks increasing in both severity and frequency, numerous defections among Gaddafi's military and political elite, and steadily increasing support from the international community, the opposition was able to make limited gains towards the end of May. After being besieged since February, opposition forces pushed loyalist forces out of Misrata and into its outskirts, beginning a battle for control of the area.

Start of coalition intervention (19–25 March)

19 March
By unanimous vote, Denmark's Parliament authorized direct military action by its air force to help enforce UN Security Council Resolution 1973, marking the only time so far in the state's history that a military commitment was supported by full parliamentary unity. Despite the Libyan government's assertion that it was respecting the UN-mandated ceasefire, pro-Gaddafi forces entered the suburbs of Benghazi and shelled the city with mortars and artillery from a distance. Pro-Gaddafi tanks inside Benghazi were reported by a BBC News journalist. Also on 19 March, a MiG-23BN was shot down over Benghazi by ground fire. A rebel spokesman later confirmed that the plane belonged to the Free Libyan Air Force and had been engaged in error by rebels. Fox News initially reported that the plane was "bombing their (rebels) eastern stronghold". The Libyan government said the rebels had violated the UN "no fly" resolution by using a helicopter and a fighter jet to bomb Libyan armed forces. Libyan state television said that Libyan Interior Minister Abdul Fatah Younis, who had defected to the opposition a month earlier, had returned to the government. The head of the Libyan National Council, Mustafa Abdul Jalil, denied this report. On a live phone call with Al Jazeera, Younis confirmed that he was still in Benghazi and on the rebels' side, and that the videos of him on state television were from a meeting with Gaddafi months earlier. At 4 pm (Libyan time), French fighter jets began entering Libyan airspace on reconnaissance missions in support of the Libyan no-fly zone. Mohammed Nabbous, founder of Libya Alhurra TV, died in the Second Battle of Benghazi.
At 6:45 pm local time, a French fighter jet fired the first shots, hitting government military armored vehicles. Al Jazeera reported the next day that the air raid had destroyed a convoy of fourteen tanks and several trucks loaded with RPGs and ammunition; fourteen pro-Gaddafi soldiers were reported dead after the raid. According to Al Jazeera, some citizens of Benghazi were leaving the city and moving northwest to other anti-Gaddafi-held towns. Later reports from Al Jazeera, citing the Pentagon, said that American warships had launched cruise missiles at Libyan airfields as part of a multi-phase operation called Odyssey Dawn. After that, the US planned to focus on the skirmishes on the ground. The Pentagon said that the United Arab Emirates and Qatar would also be involved in military operations in Libya, but would announce their involvement themselves. Operation Odyssey Dawn fell under the operational command of AFRICOM, under General Carter Ham. Tactical execution was being run out of the USS Mount Whitney, with Admiral Sam Locklear commanding. Off the coast of Libya, there were eleven vessels from Italy, eleven from the US (including three submarines, each with 112 missiles on board), one from the UK, one from France and one from Canada. Later that day, Gaddafi's Secretary General of the People's Congress, Mohammad al-Zawi, held a press conference in which he repeatedly claimed that the ceasefire was in effect, contradicting reports from Al Jazeera, BBC News and other Western news sources. Libyan state television later claimed that Gaddafi forces had shot down a French military plane. The African Union Ad Hoc High-Level Committee on Libya met.

20 March
At about 2:30 am local time, anti-aircraft gunfire was heard for about seven minutes echoing throughout Tripoli; it was unknown what targets were hit. Later, Libyan state television reported that airstrikes had killed 48 people and wounded 150 in civilian areas. Citing an armed-forces statement, it said that Tripoli, as well as Benghazi, Misrata and Zuwarah, had been hit. Libyan armed men at the port of Tripoli captured an Italian ship with eleven crew members – eight from Italy, two from India and one from Ukraine. The ship, the ASSO 22 – belonging to the Augusta Off Shore Society – was transporting staff of Eni, an Italian multinational oil-and-gas company, to Libya. It had arrived in Tripoli in the afternoon of 19 March; after the staff departed, the crew was held up by the armed personnel of the port. Al Jazeera reported that on 20 March an air strike had destroyed fourteen tanks, twenty armored troop carriers and several trucks filled with ammunition and rocket-propelled grenades, killing fourteen pro-Gaddafi soldiers in the process. Three of Gaddafi's tanks had just reached the centre of rebel-held Misrata, a spokesman for the armed opposition told Al Jazeera. The same source reported that pro-Gaddafi boats did not allow help to reach Misrata. Casualties were reported to be too high to count. The People's Republic of China, Russia and the Arab League condemned the violence from the US and France, with Russia claiming that the UN Security Council resolution had been "hastily adopted". At about 5:10 pm local time, a health official in Tripoli raised the death toll from forty-eight to sixty-four. More anti-aircraft fire was heard in Tripoli at about 8:30 am local time. At 9:00 pm local time, the government once again called for a ceasefire.
However, when Al Jazeera reporters later questioned the spokesperson for the military, he replied that no one in the public would be informed of this by the media. At 10:26 pm local time, a loud explosion was reported near Gaddafi's compound, Bab al-Azizia, in Tripoli. CNN confirmed that a building at the heart of the Azizia compound had been destroyed.

21 March
Al Manara Media reported that Khamis Gaddafi, the leader of the Khamis Brigade and Gaddafi's son, had died from injuries sustained when Muhammad Mokhtar Osman allegedly crashed his plane into Bab al-Azizia a week earlier. However, this was not confirmed by any other independent media, and the crashing of the plane itself had not been previously reported or confirmed by any independent media except Al Manara. Swiss journalist Gaëtan Vannay, reporting from Zintan in Tripolitania, said the eastern environs of Zintan had seen fighting between rebels and attacking government forces for the past two days. There was heavy shelling of the city by pro-Gaddafi forces. Following a meeting with UN Secretary-General Ban Ki-moon, Arab League Secretary-General Amr Moussa stated his support for the no-fly zone being implemented over Libya. BBC News reported that the Libyan Permanent Mission to the United Nations was now flying the former Libyan flag, and an image of this flag appeared on the mission's official website, which had also removed an image of Gaddafi from its index page. Four pro-Gaddafi tanks on the road outside Ajdabiya fired on any vehicle that went over a rise approaching the city. Russian Prime Minister Vladimir Putin stated that the execution of the no-fly zone was comparable to the medieval crusades. Russian President Dmitry Medvedev described Putin's comments as "unacceptable".

22 March
Late on 21 March, Al Arabiya quoted a Libyan government official who said the administration of Misrata had been retaken by the pro-Gaddafi government; however, rebels claimed that they still controlled the city. A caller from Misrata to CNN's Anderson Cooper 360° reported that there was still resistance. Haaretz reported that the commander of the international coalition for Libya stated that civilians were being targeted by pro-Gaddafi forces in Misrata. A Misratan resident told The World Today that pro-Gaddafi forces in Misrata had been ordering people to march to the city square and demonstrate their support for Gaddafi's government, and that when an anti-Gaddafi demonstration broke out, the pro-Gaddafi forces fired into the crowd of 4,000 using heavy artillery and snipers, killing twenty-seven people. According to the results of a ComRes/ITN poll released on 21 March, one in three Britons agreed with the decision to take military action in Libya. The survey found that 43% disagreed with the action and 22% were unsure. However, a separate survey by YouGov showed the opposite result, with a majority of Britons questioned supporting the campaign. According to a Guardian reporter embedded with the rebels, coalition forces began bombing Gaddafi forces around Ajdabiya. A USAF F-15E warplane crashed in Libya; its pilots suffered minor injuries. One of the pilots was recovered by a US V-22 Osprey; the other was found by anti-Gaddafi forces and given shelter. The crash was blamed on mechanical failure.
In response to Russia's call for an immediate ceasefire, US Defense Secretary Robert Gates said that "some Russians were swallowing Muammar Gaddafi's lies about civilian casualties in Libya" and that the coalition forces were doing their best to avoid civilian casualties. Hussein el Warfali, commander of the pro-Gaddafi Eagle Brigade in Tripoli, was reported killed. Plans by coalition forces to hit Gaddafi's compound with Storm Shadow cruise missiles were aborted after journalists were taken there to be shown the damage from the original attack. Following an interview with US Secretary of State Hillary Clinton by ABC News, it was reported that "People close to Muammar Gaddafi are reaching out to allies around the world exploring their options". Clinton also stated that she was aware of reports that one of Gaddafi's sons had been killed in non-coalition air strikes, but that the "evidence is not sufficient" for her to confirm this.

23 March
Gaddafi's forces launched a heavy bombardment on Zintan. Loyalist forces in Misrata captured the main hospital and positioned snipers on its roof. The opposition requested that a hospital ship be sent to Misrata. Coalition forces launched air strikes on Gaddafi's forces within the city of Misrata itself. Coalition jets also launched air strikes against Gaddafi forces at Ajdabiya's eastern gate. People fleeing the city stated that only the center of the city remained in rebel hands while the outskirts were under government control. A British military officer claimed that Libya's air force "no longer exists as a fighting force" due to coalition air strikes and that coalition aircraft were now "operating with impunity" over Libya. Libya's pro-democracy fighters formed an "interim government", headed by Mahmoud Jibril. Eight explosions from coalition bombings were heard after sunset in eastern Tripoli. The Bab al-Azizia compound was also bombed again. Pro-Gaddafi forces re-entered Misrata and attacked its main hospital.

24 March: NATO takes command of naval operations
Fourteen Tomahawk missiles were launched overnight in Libya by the allies, according to a spokesman for the US Africa Command in Germany. Fighting in both Ajdabiya and Misrata continued. Tanks re-entered Misrata after previously retreating. The government attempted to capture Misrata's port with naval warships, but witnesses claimed they retreated. The coalition planned to step up air strikes on Ajdabiya and government positions, and was beginning to coordinate with rebels. A French fighter destroyed a Libyan pro-Gaddafi Soko G-2 Galeb, a single-engine military trainer aircraft, which was breaking the no-fly zone. A member of the opposition forces claimed a major success: killing thirty government snipers in Misrata. He also said that all Libyan government military vessels had abandoned the port. The Al Jufra Air Base was bombed by French aircraft; Libyan Deputy Foreign Minister Khaled Kaim stated that the base had been struck before dawn. A spokesman for the anti-Gaddafi forces said that Gaddafi troops in Ajdabiya were trying to surrender. Col. Ahmad Omar Bani stated that the anti-Gaddafi forces had "received information from freedom fighters in Ajdabiya" saying "some Gaddafi fighters have offered to leave their tanks". NATO took command of the naval arms embargo.

25 March: NATO takes command of air operations
The United Arab Emirates committed six F-16 and six Mirage fighters to help enforce the no-fly zone over Libya.
Western warplanes bombed Gaddafi's tanks and artillery in eastern Libya to try to break a battlefield stalemate and help the rebels take the strategic town of Ajdabiya. French and British jets bombed targets near the eastern town of Ajdabiya overnight, including government forces' artillery. Gaddafi was said to be arming volunteers to fight the uprising against his rule. CNN reported that Canadian Lt. Gen. Charlie Bouchard was to command the NATO military air and sea operations under the name Operation Unified Protector. Abdul Ati al-Obeidi, a former prime minister of the Gaddafi government, stated that the Gaddafi government was open to talks with the pro-democracy rebels and to accepting political reforms, possibly including elections.

Late March rebel offensive (26–28 March)

26 March
Rebel forces recaptured Ajdabiya after Gaddafi forces retreated and, facing little resistance, also captured Brega; they then advanced towards Ra's Lanuf. The outskirts of Misrata were hit by airstrikes. Benghazi resident Iman al-Obeidi entered a Tripoli hotel, showed bruises and scars to foreign journalists and stated that she had been sworn at, tied up, urinated on and raped by fifteen pro-Gaddafi security officers, who had detained her for two days. Pro-Gaddafi security officers assaulted several journalists and removed the woman from the hotel. The security officers said that they were taking al-Obeidi to a hospital; al-Obeidi said she was being taken to prison. Charles Clover, a journalist with the Financial Times who attempted to defend her from Gaddafi's security forces, was deported from Libya. Clover said that he had been told the night before to leave because of what Libyan government officials said were "inaccuracies in his reporting". Bilgasim Al-Ganga, number three in the pro-Gaddafi forces, was said to have been detained by anti-Gaddafi forces. Gaddafi's forces heavily bombarded Misrata, sometimes targeting residences; one family was reported killed by such a blast. Gunners on rooftops shot at anyone who went out onto the street. The French army issued its eighth press release on Opération Harmattan, in which it stated that at least five Soko G-2 Galeb fighter planes and two Mil Mi-24 attack helicopters that were preparing to attack in the Az Zintan and Misrata regions had been destroyed in the previous twenty-four hours. It was making further analyses to reach a more precise assessment.

27 March
Rebel fighters took control of the small town of Uqayla and the oil town of Ra's Lanuf. They also recaptured Bin Jawad, a hamlet further to the west, as they moved towards Sirte, Gaddafi's hometown. With the capture of Bin Jawad, the rebels had regained all the ground in the east they had lost to the pro-Gaddafi counter-offensive in the days prior to the imposition of the UN-mandated no-fly zone. Rebel forces claimed to have advanced further along the coast, taking the town of Nofaliya, some distance from Sirte. Rebels said they planned to halt their advance there overnight, as pro-Gaddafi forces were supposedly stationed down the road, ready to defend Sirte. NATO said it had agreed to take over command of the military operation enforcing UN resolutions on Libya. Eight people were killed and twenty-four wounded in Misrata as pro-Gaddafi forces fired mortars while advancing towards the city from the west. The rebels signed an oil export deal with Qatar and announced that shipments of crude would start in "less than a week".
28 March
National Transitional Council spokesman Shamsi Abdul Molah made the incorrect claim that the rebel forces had captured Sirte. Pro-Gaddafi troops used Grad multiple-rocket launchers to bombard Az Zintan from the north. Al Jazeera reported that pro-Gaddafi soldiers in the town of Jalu, south of Ajdabiya, had surrendered to rebel forces. Agence France-Presse reported that the rebels' advance on Sirte had been halted at some distance from the edge of Sirte by pro-Gaddafi forces. The opposition forces seized Nofaliya and were advancing towards an area called the Red Valley, but found it to be full of mines. Fighting was ongoing near Nofaliya. Al Jazeera reported that Qatar had become the third state (after France and Portugal) to recognize the NTC as the only official government of Libya. Pro-Gaddafi forces heavily bombarded Misrata with tank shells and mortars from the north and northwest. Pro-Gaddafi snipers continued to shoot at the city's residents. About 4,000 migrant workers, mostly Egyptians, were waiting by Misrata's port to be evacuated. By evening, the rebels had advanced further towards Sirte. UK jets bombed ammunition bunkers in southern Libya. The UK Defence Ministry also stated that several tanks and armoured vehicles had been destroyed, including "22 tanks, armoured vehicles and artillery pieces in the vicinity of Ajdabiya and Misrata".

Second loyalist offensive (29–31 March)

29 March
Pro-Gaddafi troops used heavy weaponry to force the rebels to retreat from Nofaliya towards Bin Jawad. A US Lockheed P-3 Orion fired on a Libyan 12-meter patrol vessel after it launched missiles at merchant ships in the port of Misrata, "to stop them from firing into the port, harming civilians and damaging other vessels". A US A-10 Thunderbolt II also fired on two smaller Libyan vessels traveling with the larger ship, destroying one and forcing the other to be abandoned. US Navy Vice Admiral Bill Gortney, US Joint Staff Director, confirmed the US had employed "A-10s and AC-130s over the weekend". The mother of Iman al-Obeidi said she had been asked to convince her daughter to retract the allegations in return for her freedom and cash or a new home. "Last night at 3, they called from Gaddafi's compound and asked me to convince my daughter Eman to change what she said, and we will set her free immediately and you can take anything you and your children would ask for," her mother stated. Rebels temporarily retreated from Bin Jawad after intense mortar fire, but returned, and fighting continued directly west of Bin Jawad, according to BBC News. Later, it was confirmed that the rebels had once again retreated from Bin Jawad, this time some of them as far as Ra's Lanuf. The US Defense Department said that coalition forces had launched 22 Tomahawk missiles overnight while flying 115 strike sorties. US Secretary of State Clinton and an envoy from the opposition forces held talks in London.

30 March
Forces loyal to Gaddafi recaptured the oil-refinery town of Ra's Lanuf, forcing the rebels to retreat further to the east. The coalition then began launching air strikes against Gaddafi's forces around Ra's Lanuf and on the road to Uqayla. Reporters on the ground blamed a sandstorm and low visibility from the air for the lack of air strikes the previous day. Later during the day, rebels were pulling out of Brega to Ajdabiya. In the evening, BBC News reported that Brega was under the control of pro-Gaddafi forces and that Ajdabiya was coming under attack by loyalist troops.
The UK government expelled five diplomats from the Libyan embassy in London, stating that they "could pose a threat" to UK security. Rebel forces, suffering a shortage of skilled armored vehicle drivers, sent out an appeal by Free Libya radio for anyone who could drive a T-55 tank to report for duty immediately. It was reported that US Central Intelligence Agency and UK MI6 operatives were in Libya to gather intelligence for coalition airstrikes and make contacts with rebel forces. The UK government confirmed that the Libyan Foreign Minister, Moussa Koussa, had resigned and defected to the UK. The head of intelligence, the oil minister, the secretary of the Libyan General People's Congress and a deputy foreign minister also defected and escaped Libya.

31 March
Explosions reportedly shook an eastern suburb of Tripoli as warplanes staged a raid on the Libyan capital. NATO took sole command of air operations in Libya as US Central Intelligence Agency operatives worked the field to connect with rebel fighters who had seen their advance toward Tripoli halted. Rebels counter-attacked against Brega in an attempt to push out Gaddafi's forces. By late afternoon, coalition forces had bombed pro-Gaddafi forces near Brega. BBC Monitoring quoted a privately owned online newspaper, Libya al-Yawm, as saying that Gaddafi's forces had been pushed back to the village of Bishr, west of the city. However, rebel sources confirmed that street fighting was still ongoing in Brega. By the end of the day, pro-Gaddafi forces had repelled the rebel counter-attack and were in firm control of Brega. Libyan diplomat Ali Treki fled to Egypt and resigned. Gaddafi spoke out against defectors like him, calling them "traitors" and swearing to block their return to the country once the war was over. According to Al Arabiya, Gaddafi's intelligence chief Abu Zayd Durda had fled to Tunisia. There were widespread unconfirmed reports that multiple additional high-level Gaddafi government officials were planning to defect. Gaddafi's spokesman, Moussa Ibrahim, downplayed Koussa's defection by claiming that the former minister was "an old man who cannot take the pressure".

Early April (1–15 April)

1 April
Gaddafi's forces intensified their attack on Misrata, using tanks, RPGs and artillery against civilian buildings. Rebel forces on the outskirts of Brega were bolstered by the arrival of a number of well-armed, seemingly well-trained soldiers in full military attire; it was not immediately clear where they were from. Al Jazeera reported that Swedish lawmakers had approved a plan to send up to eight fighter jets and one transport plane to join the NATO-led air operations over Libya. Seven civilians died and twenty-five were wounded in a coalition air strike on a pro-Gaddafi convoy in eastern Libya. The strike hit a truck carrying ammunition, and the resulting explosion destroyed two nearby homes. NATO airstrikes destroyed a pro-Gaddafi military convoy attempting to enter Misrata. Rebels said neither side could claim control of Brega, one of a string of oil towns along the Mediterranean coast that had been taken and retaken by each side in recent weeks. Warplanes flew over Brega, followed by the sound of explosions. The Benghazi-based opposition offered a conditional ceasefire, but Gaddafi's government described its terms as "mad", saying it required the Gaddafi government to withdraw troops from cities. A rebel convoy near Brega fired into the air with an anti-aircraft gun, perhaps in celebration.
A NATO A-10 Thunderbolt II aircraft then returned fire and killed at least thirteen people.

2 April
In the early morning, rebels managed to break through the eastern gate of Brega. They took control of most of the city from loyalist forces before loyalist artillery hit them, and by mid-afternoon the opposition forces had retreated from the town and were regrouping at a checkpoint to the east of the city. Later, the rebels managed once again to enter the town and took control of part of it. However, a large number of Gaddafi's forces were holed up at the university, and the rebels were still not able to reach the city center. AVAAZ.org created a petition with the goal of convincing Turkey to request that the Gaddafi government free al-Obeidi. Libya's rebel council named what it called a "crisis team", including a new armed-forces head, which was to administer the parts of the country it held in its struggle to topple Gaddafi. In Misrata, after weeks of shelling and encirclement, pro-Gaddafi forces appeared to be gradually loosening the rebels' hold, despite Western air strikes on pro-Gaddafi targets. The rebels said they still controlled the city center and the sea port, but Gaddafi's forces had pushed into the center along the main thoroughfare.

3 April
In the morning, the rebels advanced in an attempt to take Brega's university campus, only to be ambushed and to run into roadside mines, which led to a rebel retreat of 30 to 35 kilometers from the town. Some of the better-trained rebels were still left behind on Brega's outskirts, locked in skirmishes and artillery duels with Gaddafi's forces in the town. Gaddafi's forces shelled Zintan, southwest of Tripoli. Water and electricity shortages worsened in Misrata, particularly on the outskirts of the city. The area near the central-city hospital was bombarded. The US agreed to NATO's request for a 48-hour extension of US participation in coalition air strikes against targets in Libya. The Chairperson of the African Union traveled to Europe to discuss the Libyan crisis.

4 April
The Chairperson of the African Union was received in London by the Foreign Secretary. Libyan rebels again advanced towards the oil town of Brega. Backed by coalition airstrikes, revolutionaries pushed back into Brega and were holding the eastern part of the town. Revolutionaries reported that Gaddafi's forces were leaving land mines behind after withdrawing from the area around Brega's university. Italy recognised the interim national council in Benghazi as its "only legitimate interlocutor" in Libya, Foreign Minister Franco Frattini said on Monday. In southern Libya, pro-Gaddafi forces attacked oil fields that supply the opposition-held oil terminal in Tobruk. A resident of a town in the Jabal al Gharbi area, southwest of Tripoli, told Agence France-Presse that Gaddafi's forces had fired Grad rockets at the town of Nalut.

5 April
The Turkish Navy intercepted a private ship from Benghazi that was carrying weapons, ammunition and medical supplies to the people of Misrata, and refused to allow the ship to land at Misrata. Rebels pulled back, after a night of fighting, from inside Brega to its outskirts. They re-assembled at the eastern edge to prepare for further fighting. At that point, loyalist artillery started firing on the rebels, which led to them pulling back from Brega's outskirts in a panicked retreat.
After that, an eight-vehicle military convoy belonging to Gaddafi's forces approached the rebels' positions east of Brega; it was hit by air strikes that destroyed two vehicles, while the rest turned back. An Agence France-Presse reporter confirmed there were no fatalities among loyalist forces in the attack. An Al Jazeera correspondent reported that the defected 36th "AsSaiqa" battalion was attacking from the south of Brega in the desert. They were launching a heavy attack against Gaddafi's forces using Grad missiles. They had managed to push west once more towards the area of Arbaeen, which caused Gaddafi's forces to retreat. It was difficult to verify casualties, but four shells landed directly in an area where revolutionaries had congregated earlier; ambulances were seen rushing to that area and had not come back since. Defected interior minister and Major General Abdul Fatah Younis criticized NATO, saying it was not doing enough and that "civilians are dying every day". Younis said that sewage was being re-routed into water wells in Misrata by Gaddafi's forces, and that water supplies were running desperately short in the city.

6 April
Al Jazeera reported that rebels from Nalut and Zintan had entered Yafran and helped their allies there fight against Gaddafi loyalists. Armed with Kalashnikov rifles, they were able to drive Gaddafi's forces from Yafran. An escaped prisoner reported that rebel soldiers captured by pro-Gaddafi forces were being tortured in an underground prison in Sirte. There was heavy fighting along the coastal road between Ajdabiya and Brega, Reuters reported. Gaddafi loyalists had been resupplied with ammunition. The rebels were, at that point, some distance from Brega. The UK moved four Typhoon jets from policing the Libyan no-fly zone to ground-attack roles, following opposition criticism that NATO forces had failed to protect Misrata. In a statement, the UK defence ministry said the move was aimed at "further bolstering NATO's ground attack capability". Rebels and loyalists continued to clash on the Brega–Ajdabiya road. During the early morning, a ship carrying 300 migrants from Libya capsized in rough seas off Italy. Survivors told of trying to reach rescue boats as those unable to swim screamed in the darkness and pulled one another under water. There was speculation that 250 of the 300 people had died. If confirmed, it would be the deadliest crossing to Italy in recent memory and the worst refugee disaster since a wave of migrants began arriving in Italy in January after popular revolts in Tunisia, Egypt and Libya. The Italian Coast Guard rescued forty-eight people later in the day, and a fishing boat picked up three after the vessel began taking on water shortly after 1 am. The Chairperson of the African Union was received in Rome by Italian Foreign Minister Franco Frattini.

7 April
A Libyan rebel tank convoy was mistakenly hit by a NATO air strike, killing thirteen and wounding many. British admiral and NATO commander Russell Harding refused to apologize for the attack, saying that "until yesterday we didn't even know that the rebels possess any tanks", that his task was to protect civilians, and that "it is not our task... to improve communications with the rebels". Following the attack, Gaddafi loyalists chased the rebel forces to Ajdabiya, and both civilians and some rebels were on the verge of retreating from the city amid rumors that Gaddafi's forces were preparing for an attack.
The US might consider putting troops on the ground for a ground offensive, Army General Carter Ham said. NATO airstrikes killed two rebels and wounded ten in Brega.

8 April
NATO confirmed it had bombed rebel tanks. UNICEF reported, based upon consistent reports from local sources, that pro-Gaddafi snipers in Misrata showed a pattern of indiscriminately targeting children. The Red Cross was sending a team by boat to Misrata to investigate. The US Department of the Treasury announced new sanctions against five senior pro-Gaddafi figures: Baghdadi Mahmudi, Shukri Ghanem, Abdulhafid Zlitni, Tohami Khaled and Bashir Saleh. The US now had personal sanctions in place against sixteen people and had frozen over US$34 billion in overseas assets related to the Gaddafi government. Pro-Gaddafi forces attacked an eastern district of Misrata; after heavy fighting, they were pushed back. Earlier, rebels had tried to cut off the main road with big containers in an attempt to isolate the snipers in the city center. However, the loyalists positioned one or two tanks on the main road to counter the rebels' attempt. These tanks were disabled by the rebels, and several of Gaddafi's snipers were killed or captured. Families were seeking refuge in schools and other buildings.

9 April
The International Committee of the Red Cross announced that its 130-cubic-metre relief shipment had arrived in Misrata. During the morning, pro-Gaddafi forces shelled the western outskirts of Ajdabiya. A Reuters correspondent reported hearing blasts and machine-gun fire for around thirty minutes from the western boundary of the town, which is the gateway to Benghazi. Despite the UN no-fly-zone policy, the rebel side used an Mi-24 Hind attack helicopter. NATO warplanes intercepted a MiG-23 fighter jet operated by an opposition pilot, forcing him to land after he violated the no-fly zone. Several fighters were killed in Misrata; the clashes centered on the road leading to the port. NATO also attacked Gaddafi's forces in several locations. There was a meeting of the African Union High-Level Ad Hoc Committee on Libya.

10 April
The African Union High-Level Ad Hoc Committee on Libya visited Tripoli. NATO announced that its Operation Unified Protector had destroyed eleven tanks near Ajdabiya and fourteen near Misrata during the day. Anti-Gaddafi fighters said NATO airstrikes had helped them hold Ajdabiya and drive Gaddafi's forces out during the weekend's attack. They said the loyalist incursion into western Ajdabiya lasted perhaps twenty-four hours and took thirteen civilian lives. BBC News reported that an African Union mission had arrived in Libya to try to negotiate a ceasefire between the rebels and the loyalists.

11 April
NATO announced that its Operation Unified Protector had destroyed eleven tanks on 11 April, twenty-five tanks on 10 April and forty-nine since 9 April. A rebel spokesman in Misrata said they saw no evidence of a ceasefire. Instead, pro-Gaddafi forces seemed to be stepping up their attack on the city, using Grad rockets for the first time. Intense street fighting continued. BBC News reported that the rebels had rejected a peace plan presented by the African Union. The rebels said they were rejecting the truce because it did not include plans for Gaddafi to step down.

12 April
On 12 April, rebel fighters claimed they had taken positions west of the strategic town of Ajdabiya after clashes with pro-Gaddafi forces that left at least three dead.
Pro-Gaddafi forces bombarded the western entrance to Ajdabiya, a witness reported; eight blasts, apparently from artillery, were heard. Reuters reported that pro-Gaddafi forces had attacked Zintan; the attacks for the most part consisted of random firing from north of the town. Reuters also reported that rebels in Misrata had repelled two attacks by pro-Gaddafi forces. The fighting was on Tripoli Street, which leads to the city center, and Nak el Theqeel Street, which leads to the city's port.

13 April
NATO conducted air strikes against munitions bunkers some distance from Tripoli. The Pentagon said on 13 April that US fighter jets had conducted strikes on Libyan air defenses even after NATO took command of operations in Libya. US Defense Department spokesman Colonel David Lapan said US aircraft had been dropping bombs on Libyan air defenses following the full handover of the coalition operation to NATO control earlier that month. US officials had previously said that the US was limiting its role in the operation to support and the patrolling of a no-fly zone over Libya. Lapan said the US remained in a support role.

14 April
Pro-Gaddafi forces again pounded Misrata, targeting the port where a medical aid ship was expected to dock and destroying a cement factory and cargo containers, residents said. At least twenty people were killed and more than twenty others were wounded in a dawn attack on the port and a nearby residential area, a medical assistant told CNN. Pro-Gaddafi forces fired anti-aircraft guns at NATO warplanes from positions in central Tripoli. Coalition jets appeared to have increased the number of sorties flown over the capital and could be heard flying over throughout the morning and early afternoon. A NATO strike on Tripoli damaged parts of a university complex. Smoke was also seen at a military base, which was the target of the attack. Several people were injured. There was fierce fighting at the front line, halfway between Brega and Ajdabiya. Agence France-Presse reported that a convoy of sixty opposition vehicles came under heavy artillery and mortar fire. The rebels rushed reinforcements to the front line between their territory in the east and the mainly government-held west; the AFP correspondent counted some 100 rebel vehicles heading to the front. The leaders of the BRICS nations (Brazil, Russia, India, China and South Africa) believed the situation in Libya should be resolved through diplomatic means, Russian President Medvedev said. "Like the other BRICS nations, Russia is deeply concerned by events in Libya and the civilian deaths there", Medvedev said after a BRICS summit in Sanya, China.

15 April
A ship with nearly 1,200 Asian and African migrants, many in bad shape after weeks with little food or water, left Misrata on Friday for Benghazi, the International Organization for Migration said. The chartered Greek vessel, Ionian Spirit, had managed to unload 400 tonnes of aid supplies in Misrata overnight despite shelling on 14 April, the international aid agency said. NATO launched three new air strikes in and around Tripoli, striking a missile battery and two other targets. Rebels claimed to have gained control over eastern Brega after heavy clashes. NATO strikes hit the area of al-Assah, west of Tripoli. Pro-Gaddafi forces unleashed heavy shelling on Misrata, pushing troops and tanks into the rebel-held western city, a witness said. At the UN, Russia, China and India prevented sanctions from being imposed on Libyan state television.
Late April (16–30 April)

16 April
Al Jazeera reported that there was still fighting going on in Brega, and that the previous day's attack had been repelled with heavy casualties on the opposition side. Sirte and Al-Hira were target areas in air strikes by NATO. Rebel media claimed that:
Pro-Gaddafi forces launched a Grad rocket attack on an industrial area in Misrata in the morning.
Rebel forces advanced to the oil town of Brega, hoping to bring engineers to repair damage to the refineries and the terminal. The officer who reported this also attributed the advance to NATO airstrikes.
However, by afternoon it was confirmed that the assault had been repelled, with the rebels not managing to enter the town.

17 April
An Al Jazeera correspondent, reporting from just outside Ajdabiya, said that pro-Gaddafi troops had outflanked the body of rebel fighters on the edge of Brega and attacked Ajdabiya from the south. A sandstorm stopped NATO from targeting pro-Gaddafi forces, allowing those forces to overturn the rebel advances of the past few days. After a few hours of skirmishes, pro-Gaddafi forward units engaging rebels on the western outskirts of Ajdabiya pulled back. In the afternoon, more than thirty vehicles of rebel reinforcements arrived in Ajdabiya. Reuters reported that nearly 4,000 Libyans had fled from the Western Mountains region to Tunisia in the previous ten days. "They are in thousands. Hundreds arrived a few minutes ago. They are from several areas including Nalut, Yafran and Rajban," a Tunisian involved in helping Libyans arriving in the southern Tunisian border town of Dehiba said by phone. Residents fled the region because of shelling, fighting and threats by pro-Gaddafi forces.

18 April
The humanitarian situation in Misrata worsened amid more shelling of the city, and measures to get medical care were becoming increasingly desperate, an opposition spokesman said. The UK Mission to the UN announced that it would help 5,000 people trapped in Misrata "escape the besieged city and will provide vital medical assistance to those who remain in towns across western Libya", intending to evacuate foreign workers from the port. Three army officers, including a captain and a colonel, defected to Tunisia by boat along with twenty Libyan civilians. In anticipation of UN approval for a ground mission in Misrata, the European Union prepared 1,000 troops for the mission.

19 April
UNICEF announced that at least twenty children had been killed in weeks of fighting in Misrata. Many more were injured and traumatized by what they had seen and heard. The deaths and injuries were due to shrapnel from mortars and tanks and to bullet wounds. The youngest victim was nine months old, and most of the children killed in the previous two weeks were younger than ten, a spokeswoman said. The UK said it was expanding its presence in Libya with military advisers, and the EU said it was prepared to send troops for humanitarian assistance if requested by the UN.

20 April
There was fighting on Misrata's Tripoli Street, but no shelling was taking place, an opposition fighter told Reuters. French President Nicolas Sarkozy promised NTC chairman Mustafa Abdul Jalil that France would intensify air strikes on Gaddafi's army. Meanwhile, Italian Defense Minister La Russa said that Italy would send ten army advisers to aid the rebels in Libya, after France and the UK announced they were also sending officers. The US decided to give opposition forces "non-lethal assistance" worth US$25 million after assessing their capabilities and intentions.
The goods were to be delivered from existing US stocks and comprised "medical supplies, uniforms, boots, tents, personal protective gear, radios" and food. Photojournalists Tim Hetherington and Chris Hondros died in Misrata while covering the civil war. NTC spokesman Ghoga told a news conference in Benghazi that the council was not opposed to foreign ground forces protecting a safe haven for civilians: "If that (protecting civilians) does not come except through ground forces that will ensure this safe haven, then there is no harm in that at all." Heavy mortar fire in Misrata killed at least ten people and injured more than 100 others. Seven Libyan civilians and a Ukrainian doctor also died in the most recent fighting, doctors told Reuters, with about 120 people injured. Crown Prince Mohammed El Senussi spoke before the European Parliament, calling for more support for Libya. 21 April Anti-Gaddafi forces expanded their control over eastern and western parts of Misrata. Fighting subsided compared to previous weeks, while pro-Gaddafi forces continued to shell the city. Anti-Gaddafi forces took control of Wazzin on the Tunisian border after overrunning pro-Gaddafi troops. Roughly 100 pro-Gaddafi troops surrendered to Tunisian authorities as anti-Gaddafi forces overran Wazzin. Among the troops were thirteen army officers, including a general. The Tunisian news agency reported that the soldiers had "defected" upon entering Tunisia. However, all 100 soldiers returned from Tunisia into Libya the next day. US Defense Secretary Gates announced that the US had made two armed Predator drones available to the NATO-led bombing effort. 22 April Anti-Gaddafi forces managed to clear the center of Misrata of snipers and secure some vital buildings amidst fighting over the city. Pro-Gaddafi forces attacked an opposition-held oil pumping station, officials from an oil company said. One witness said eight people were killed in the attack. The attack could delay efforts to restart production from the rebel-controlled Sarir and Messla oil fields, suspended after an earlier raid two weeks before. The extent of the damage was not yet known. A Qatari military aircraft arrived in Tunisia to build an advanced field hospital in the Al-Zahabiyah area, near the border with Libya, to treat people injured in attacks by Gaddafi's brigades. The plane carried a medical team, the first batch of equipment for the hospital, 25 tonnes of medicines, medical supplies and ambulances. Pro-Gaddafi forces withdrew from Misrata and claimed they would allow tribal leaders to deal with anti-Gaddafi forces. Meanwhile, NATO continued its air strikes on Tripoli. 23 April NATO conducted air strikes on Tripoli in the early hours. Agence France-Presse reporters said they heard several explosions after planes flew over, followed by anti-aircraft fire. An opposition spokesman in Misrata said the city was freed from pro-Gaddafi forces on 23 April after a siege of nearly two months. There was no independent confirmation, but pro-Gaddafi troops captured by anti-Gaddafi forces in Misrata said the army had been ordered to retreat, and the Libyan government had said earlier that local tribes would take over the battle from the army. "Misrata is free, the rebels have won. Of Gaddafi's forces, some are killed and others are running away," anti-Gaddafi-forces spokesman Gemal Salem told Reuters by telephone from the city. He added that while pro-Gaddafi forces had retreated from the city, they were still outside and would be in a position to bombard it.
Anti-Gaddafi forces started to bring in supplies – through the border crossing at Wazzin – to the towns under siege in the Nafusa Mountains. However, it was confirmed that Yafran had been taken by pro-Gaddafi forces. During the pro-Gaddafi push into Yafran's center, seven anti-Gaddafi troops were killed and eleven were wounded. Rebels claimed that air strikes on pro-Gaddafi forces located on the Al Zaitoniya – Al Soihat road near Ajdabiya had hit twenty-one army vehicles. However, there was no independent or NATO confirmation of the claim. On the same day, pro-Gaddafi forces also shelled al-Faluja, a small village near Ajdabiya. 24 April There was a heavy bombardment of Misrata, according to an opposition spokesman. "Gaddafi's brigades started random bombardment in the early hours of this morning. The bombardment is still going on. They targeted the city centre, mainly Tripoli Street, and three residential areas." NATO planes flew over, but there was no sign of air strikes. Pro-Gaddafi forces were stationed in a residential area in Brega in large numbers with around eight Grad batteries. When they tried to secretly surround Ajdabiya, NATO planes attacked them southwest of the city and in Mareer Qabes, an area northwest of the city. A NATO airstrike flattened a building inside Gaddafi's Bab al-Azizia compound early Monday, in what a press official from Gaddafi's government said was an attempt on the Libyan leader's life. 25 April Witnesses said that rocket attacks on Misrata killed at least thirty people and wounded sixty. "There is very intense and random shelling on residential areas. Burned bodies are being brought into the hospital," an employee of a dissident radio station in Misrata said. "The number of wounded is 60 and there were 30 martyrs. This is the toll for the past 12 hours," he added. Four people were killed as pro-Gaddafi forces attacked Zintan with Grad rockets, according to residents. Nine people were wounded in the attack, which happened late on 24 April. Pro-Gaddafi forces fired between six and nine rockets, which crashed into homes. US officials said the attack on Tripoli, carried out by two Norwegian F-16s, was intended to degrade "command and control"; the goal was not to assassinate Gaddafi. However, had he been at the site, he would have been a "legitimate target". BBC News reported that Italian Prime Minister Silvio Berlusconi had approved the use of Italian aircraft for ground attack missions in Libya as part of Operation Unified Protector. The Voice of Russia reported that rebels were growing active in the loyalist-held towns of Yafran, Zliten and Abu Ruwayya, threatening loyalist communication lines into Misrata. Guerrilla attacks were also reported to be occurring in Zawiya, which had been occupied by pro-Gaddafi forces in mid-March following the prolonged and bloody First Battle of Zawiya. The African Union held a consultative meeting on Libya at the ministerial level. 26 April The Peace and Security Council of the African Union (AU), at its 275th meeting, discussed the situation in Libya. The African Union High Level Ad Hoc Committee on Libya convened its fourth meeting in Addis Ababa, where the Chairperson of the Commission reported on the committee's activities regarding the situation in Libya. Pro-Gaddafi forces reinforced their position around Brega. They dug in their long-range missile batteries to conceal them from NATO warplanes, according to an opposition commander.
A ship chartered by the World Food Programme delivered more than 500 metric tonnes of food assistance, three ambulances, medical supplies and other relief items on behalf of humanitarian partners to Misrata. It also offloaded 150 metric tonnes of mixed food including pasta, rice and potatoes; 84 metric tonnes of bottled water; 12 metric tonnes of medicines; and three ambulances, on behalf of UNICEF and other international aid organisations. This was the programme's second delivery. It was reported that Russia had said it would not support any UN Security Council resolutions on Libya which could escalate the conflict. "If a resolution leads to a further escalation of a civil war by any means, including outside intervention, we will not be able to support this", Russian Foreign Minister Sergei Lavrov was quoted as saying by Interfax. Russian Prime Minister Putin had said that the coalition had no mandate to kill Gaddafi. Gaddafi's government was circumventing sanctions by importing gasoline from Italian refiner Saras, taking advantage of a loophole in the UN sanctions that permitted purchases by companies not on a UN list of banned entities. In early April, a cargo ship sailed from Italy to La Shikhra, Tunisia, where it transferred its load onto a Libyan ship. US President Obama authorized US$25 million in non-lethal aid to anti-Gaddafi forces. 27 April Devastation mounted in Misrata after heavy shelling of the port. Gaddafi forces started firing mortar rounds at a district in the west of the city, an opposition spokesman said on Wednesday. "They (pro-Gaddafi forces) attacked the Abu Rouia area, in the west of Misrata. They carried out intense bombardment, using mortars", the spokesman, called Safieddin, told Reuters from the city. NATO airstrikes killed eleven rebels and wounded two in Misrata. 28 April The evacuation ship Red Star finally managed to leave the port of Misrata for Benghazi. It had first been prevented from docking and then from leaving as Gaddafi forces continuously shelled the port. The ship was carrying nearly 1,000 migrant workers and 25 seriously injured people. Loyalist forces re-captured Kufra, the main city in Libya's Kufra District. There were no reports of casualties in the fighting for the town, as the rebels put up only light resistance. Gaddafi forces re-captured the Wazzin border crossing with Tunisia after a swift advance in which they pushed the rebels back over the border into Tunisia, where the fighting continued on the edge of the Tunisian border town of Dehiba. 29 April Libyan opposition fighters manned a checkpoint on the road leading out of Ajdabiya westwards toward government-held Brega on Thursday, signaling a possible stalemate in the conflict, with neither side making progress. Opposition fighters erected barricades near the western entrance of Ajdabiya, the last major town before Benghazi and the key oil exporting terminal of Tobruk. Rebels claimed for a third time in a row to have re-taken the Wazzin border post after a night of heavy fighting. However, Al Jazeera confirmed that the green Gaddafi-era flag of Libya was still flying over the border post, indicating that government troops were still in control. Air strikes hit Gaddafi forces surrounding Zintan. Gaddafi forces bombarded the opposition at the Wazzin border crossing; a Tunisian witness told Reuters that very heavy bombardment was going on as the opposition tried to retake the crossing.
Libyan state television threatened that any ship that tried to enter Misrata's port would be attacked, regardless of the justification. It also said that government attacks had rendered the port "non-functional". 30 April Fighting was shifting to the edges of Misrata, turning small village clinics into trauma centres, witnesses told Al Jazeera. Libyan state television said NATO had bombed close to the television building during Gaddafi's speech early on Saturday morning. Libyan state media and government officials claimed that Gaddafi's second youngest son, Saif al-Arab Gaddafi, was killed by a NATO air strike on the night of 30 April. Three of Gaddafi's grandchildren were allegedly killed in the same strike. Eyewitnesses in the embattled Libyan city of Misrata reported especially intense shelling by pro-Gaddafi forces overnight. "There's been continuous and heavy shelling for three hours straight," said one man, talking to CNN from about five kilometers from Misrata's city center. "There have been a lot of explosions." May 1 May Shells fired at the Libyan–Tunisian border near Wazzin landed in Tunisia; no casualties were reported. Moscow strongly condemned NATO for bombing Gaddafi's complex, stating that its mandate was to "protect, not kill Libyans". NATO stated that the Libyan government had offered no evidence of Saif's death, and furthermore claimed that what the Libyan government had called a "residence" actually held an underground bunker used as a command and control center, and that this was the target. The Times of Malta reported that its contacts in Misrata said soldiers loyal to Gaddafi had been issued gas masks and were wearing them in and around the city. The report came amidst fears Gaddafi could use chemical weapons to retaliate for his son's reported death. The British and Italian embassies came under attack, and this, along with general unrest, prompted the removal of the British ambassador and UN international staff. An estimated £130,000 worth of art at the UK mission was destroyed. Meanwhile, Gaddafi claimed the building hit in the Saturday strike was not a command center. 2 May Fighting took place in western Libya, around Zintan. 3 May Rebels were running low on money and hoped for funding from Western powers. 4 May Agence France-Presse reported that loyalists attacked Misrata's port. 5 May US Secretary of State Hillary Clinton announced that the Obama administration hoped to free a small portion of the more than $30 billion it had frozen in Libyan assets to support Gaddafi opponents, pending approval from Congress. Spain, Denmark and the Netherlands denied rebel claims that their governments recognized the National Transitional Council in lieu of the Libyan Arab Jamahiriya. Turkey released a plan for a ceasefire between the warring Libyan parties; the plan failed to materialize. Chairperson of the African Union Commission Dr. Jean Ping presented remarks during a meeting of the International Contact Group on Libya, held in Rome, Italy. 6 May Amnesty International accused Gaddafi's government of committing war crimes in Misrata by "deliberately targeting and killing civilians". 7 May Loyalist forces hit the main fuel depot in Misrata on Saturday, bombing large fuel storage tanks, destroying the tanks and causing a large fire. Forces loyal to Gaddafi also attacked the remote eastern oil town of Jalu in the Libyan desert on Saturday, but the town remained in rebel hands, a rebel spokesman said.
BBC News reported that loyalist forces had laid a number of mines in Misrata. 8 May The national forum of leaders and dignitaries of the Libyan tribes called for a debate to promulgate a new constitution and described as void all agreements and commitments proposed by the NTC with any foreign country. 11 May Libyan rebel forces captured Misrata airport after hours of overnight fighting, burning government tanks as loyalist forces retreated. The first shipment of non-lethal US aid to the Libyan opposition arrived in Benghazi. While human rights organizations warned of indiscriminate attacks in the Nafusa Mountains, refugees spoke of kidnappings in the village of Ghezaya, which was surrounded by Gaddafi's troops. Polish Foreign Affairs Minister Radoslaw Sikorski became the first member of the Libya Contact Group to visit Benghazi. During the visit, the Polish government provided medical aid to injured civilians. Sikorski met with representatives of the Interim Transitional National Council, including its chief Mustafa Abdul Jalil. 13 May Libyan rebels' foreign minister was set to request financial assistance during talks with US officials in Washington, D.C. Pressure was mounting on Gaddafi from within his stronghold in Tripoli, with increasing NATO airstrikes and worsening shortages of fuel and goods. An activist said on Friday that there had also been a wave of anti-government protests in several Tripoli neighborhoods that week. In an audio message addressed to NATO, Gaddafi said, "you cannot kill me". A NATO airstrike at dawn killed at least 16 civilians, according to Libyan state television, including a group of Muslim religious leaders who were holding a religious ceremony. NATO at first denied any knowledge of the incident, later admitting to striking a building labelled as a command and control bunker. 15 May Pro-Gaddafi forces withdrew from Misrata, which had been under siege for almost three months. Opposition forces declared the battle for the city to be over and advanced to the Dafniya area to the west and the Tawergha area to the east, establishing a defensive perimeter around Misrata. 16 May Thousands of telecommunications workers vowed to act as human shields against NATO airstrikes targeting communications buildings, which had suffered $1.25 billion in damages. 17 May Xinhua News Agency reported that the Chief Prosecutor of the International Criminal Court had issued a request for an arrest warrant against Gaddafi for "crimes against humanity". Tunisia warned Libya for the second time to prevent its troops from firing towards Tunisia, threatening to seek a response from the U.N. Security Council. 18 May The Canadian government announced the acquisition of 1,300 "smart bombs". 19 May The Economist reported that Jalal al-Digheily, a civilian, had replaced Omar El-Hariri as the NTC's defense minister. The precise date of Digheily's appointment was unclear. 20 May NATO airstrikes targeted several ships in Tripoli's port, a loyalist asset which had become an increasing threat to the waters off Misrata. Meanwhile, opposition forces battled with Gaddafi forces for control of two major highways in the Nafusa Mountains range. 21 May NATO launched airstrikes near Gaddafi's complex in Tripoli, while a bus carrying foreign journalists was attacked by civilians armed with guns and knives as it waited to refuel at a Tripoli petrol station.
22 May Though loyalist forces continued to launch attacks and shell the city from its outskirts, opposition forces claimed the liberation of Misrata, which had been besieged by Gaddafi's troops since February. 23 May France and the UK announced plans to use attack helicopters in order to increase NATO's accuracy and ability to strike urban targets. The Canadian Press reported that Democrats and Republicans in the US Senate had agreed on a resolution allowing US military intervention in Libya. Turkey recognized the NTC as the sole legitimate representative of the Libyan people. 24 May More than twenty NATO airstrikes hit Tripoli near Gaddafi's compound in the largest offensive attack on the capital since foreign intervention began. At least three people were reported killed and dozens wounded, according to government spokesman Moussa Ibrahim. 25 May It was announced that South African President Jacob Zuma planned to visit Gaddafi in Tripoli the following week to discuss finding a "lasting solution" to the crisis there. He clarified earlier statements and insisted that the visit was not necessarily to find an "exit strategy" for Gaddafi, sowing doubt as to whether Gaddafi would concede to surrendering power, a demand the NTC had insisted was not negotiable. Khaled Kaim, Libya's deputy foreign minister, reportedly claimed that the Gaddafi administration saw all options as open in future negotiations and that Gaddafi might step down if terms were agreed upon by both sides. 26 May Gaddafi's prime minister sent letters to foreign governments seeking to negotiate a ceasefire in Libya. French President Nicolas Sarkozy urged Gaddafi to step down, as "all options are open." Libya's EU ambassador Hadeiba Hadi defected from the Gaddafi government along with his staff. The African Union High-Level Ad Hoc Committee on Libya convened its fifth meeting, held at the level of heads of state, in Addis Ababa, Ethiopia. The chairperson of the African Union Commission met with the UN secretary-general while at the African Union Extraordinary Summit. The United Kingdom announced plans to send four Apache helicopters to aid in the conflict. 27 May NATO allies rejected the most recent Libyan cease-fire offer. Russia joined many Western countries' demands for Gaddafi to leave power. NATO jets fired on and destroyed the guard towers surrounding Gaddafi's Tripoli military complex, Bab al-Azizia. A total of 151 sorties were conducted, including 45 strike sorties. Key hits were: in Tripoli, one command and control facility; in the vicinity of Sirte, one ammunition storage facility; in the vicinity of Mizda, one ammunition storage facility; in the vicinity of Misrata, one rocket launcher and two truck-mounted guns; in the vicinity of Hun, one ammunition storage facility; and in the vicinity of Zintan, four surface-to-air missile launchers. As of 27 May, a total of twenty NATO ships were actively patrolling the Central Mediterranean. 29 May More than 100 tribal and community leaders from Libya met with NTC members at a conference in Istanbul, Turkey on 28 and 29 May. The delegates called for an end to violence in Libya and the departure of Libyan leader Gaddafi and his sons. Most of the tribal leaders gathered were from the large Warfalla clan, based in the region around Bani Walid. 30 May Libyan UN Ambassador Abdurrahman Shalgam, a former Gaddafi official who had joined the opposition, reported that as many as 120 of Gaddafi's military officials and soldiers had defected.
Among the high-ranking defectors, who included five generals, two colonels, and a major, was General Oun Ali Oun, who was quoted as saying at a press conference in Rome, "There is a lot of killing, genocide...violence against women. No wise, rational person with the minimum of dignity can do what we saw with our eyes and what he asked us to do." Al Jazeera reported that "unprecedented protests" occurred in Tripoli. Large demonstrations such as these had previously been limited by the heavy security presence, so the protests indicated the growing boldness of the populace of Tripoli. The protesters were eventually dispersed by live fire from security forces. 31 May Fighting broke out in the pro-Gaddafi stronghold of Bani Walid (see also 29 May) between rebels and Gaddafi soldiers, in which thirteen rebels and three loyalists were killed. Nine of the rebels were killed as prisoners of war, and government forces refused to give the bodies of the dead rebels back to their families. The UN Humanitarian Coordinator for Libya reportedly said that food supplies in the parts of Libya that Gaddafi controlled might run out within weeks, causing a crisis among the general populace. The Libyan government claimed NATO air raids had killed 718 civilians and injured more than 4,000 since the bombing campaign to enforce a no-fly zone began. Continuation For later events, see Timeline of the Libyan civil war and military intervention (June – 15 August), a chronology from 1 June up to 15 August. See also 2011 military intervention in Libya 2011 Libyan rebel coastal offensive 2011 Nafusa Mountains Campaign Arab Spring List of modern conflicts in North Africa References External links Timelines of the First Libyan Civil War
1933920
https://en.wikipedia.org/wiki/List%20of%20Mozilla%20products
List of Mozilla products
The following is a list of Mozilla Foundation / Mozilla Corp. / Mozilla Messaging Inc. products. All products, unless specified, are cross-platform by design. Client applications Firefox Browser – An open-source web browser. Firefox Focus – A privacy-focused mobile web browser. Firefox Reality – A web browser optimized for virtual reality. Firefox for Android (also Firefox Daylight) – A web browser for mobile phones and smaller non-PC devices. Firefox Lockwise – A mobile application and integral part of Firefox Browser, for securely storing & syncing passwords. Firefox Monitor – An online service for alerting the user when their email addresses and passwords have been leaked in data breaches. Mozilla Thunderbird – An email and news client. Classilla – A web browser for PowerPC-based classic Macintosh operating systems. Mozilla VPN – A virtual private network client. SeaMonkey (formerly Mozilla Application Suite) – An Internet suite. ChatZilla – The IRC component, also available as a Firefox extension. Mozilla Calendar – Originally planned to be a calendar component for the suite; became the base of Mozilla Sunbird. Mozilla Composer – The HTML editor component. Mozilla Mail & Newsgroups – The email and news component. Components DOM Inspector – An inspector for the DOM. Gecko – The layout engine. Necko – The network library. Rhino – A JavaScript engine written in the Java programming language. Servo – A layout engine. SpiderMonkey – A JavaScript engine written in the C programming language. Venkman – A JavaScript debugger. Development tools Bonsai – A web-based interface for CVS. Bugzilla – A bug tracker. Rust (programming language) Skywriter – An extensible and interoperable web-based framework for code editing. Treeherder – A detective tool that allows developers to manage software builds and to correlate build failures on various platforms and configurations with particular code changes (predecessors: TBPL and Tinderbox). API/Libraries Netscape Portable Runtime (NSPR) – A platform abstraction layer that makes operating systems appear the same. Network Security Services (NSS) – A set of libraries designed to support cross-platform development of security-enabled client and server applications. Network Security Services for Java (JSS) – A Java interface to NSS. Personal Security Manager (PSM) – A set of libraries that performs cryptographic operations on behalf of a client application. Other tools Client Customization Kit (CCK) – A set of tools that helps distributors customize and distribute the client. Mozbot – An IRC bot written in Perl. Mozilla Directory SDK – For writing applications which access, manage, and update the information stored in an LDAP directory. Mozilla Raindrop – An exploratory technology for sending messages, since discontinued. Mstone – A multi-protocol stress and performance measurement tool. Thimble – Mozilla's web-based educational code editor, part of the company's "Webmakers" project (Thimble was shut down in December 2019 and its projects were migrated to Glitch). Technologies JavaScript – The de facto client-side scripting programming language, originated from Netscape Navigator. NPAPI – A plugin architecture originated from Netscape Navigator. XBL – A markup language for binding an XML element with its behavior(s). XPCOM – A software componentry model similar to COM. XPConnect – A binding between XPCOM and JavaScript. XPInstall – A technology for installing extensions. XTF – A framework for implementing new XML elements. XUL – A markup language for user interfaces.
Abandoned Camino – A web browser intended for Mac OS X. ElectricalFire – A Java virtual machine using just-in-time compilation. Firefox OS – An open source operating system for smartphones and tablet computers, mainly based on HTML5. Firefox Send – A web-based file sharing platform with end-to-end encryption and links that automatically expire. Mariner – An improved layout engine based on the code of Netscape Communicator. Minimo – A web browser for handheld devices. Mozilla Grendel – A mail and news client written in the Java programming language. Mozilla Sunbird – A calendar client. Xena ("Javagator") – A communicator suite rewritten in the Java programming language. References External links The Mozilla.org Projects List Mozilla
47136201
https://en.wikipedia.org/wiki/Visual%20Components
Visual Components
Visual Components is a developer of 3D simulation software for manufacturing. Visual Components software is used for applications including layout planning, production simulation, off-line programming and PLC verification. History Visual Components was founded in 1999 in Helsinki, Finland. The company philosophy was to make manufacturing design and simulation technology easy to use and accessible to manufacturing organizations of all sizes. Visual Components' first product was a layout configuration and visualization tool for JOT Automation, a Finnish supplier of automated test and assembly solutions. In 2003, KUKA released KUKA.Sim Layout, a KUKA-branded robot simulation and off-line programming software developed by Visual Components. In 2005, Visual Components introduced the "3D" product family, its first family of branded Visual Components products. In 2016, Visual Components introduced the Visual Components 4.0 product family. Visual Components 4.0 was built on a new software architecture and platform. The architecture was designed to take advantage of modern computing hardware and 64-bit processors. The platform was designed with open APIs to facilitate easy customization and development of third-party applications. In December 2017, KUKA announced the acquisition of Visual Components. Following the announcement, KUKA stated that Visual Components would remain a hardware-neutral simulation platform and would continue to support and expand its list of robot models, at the time 1,200+ models from 30+ robot brands. Products Essentials Allows users to design, build, and simulate production processes using ready-made components. It also includes features for robot teaching, PLC connectivity, virtual reality, and more. Professional Includes everything in Essentials and provides tools for component modeling. Premium Includes everything in Essentials and Professional, and provides tools for off-line programming and advanced robotics applications. Release history See also Simulation software Visualization software Discrete event simulation List of discrete event simulation software Robotics Simulation Software Off-line Programming (robotics) Programmable Logic Controller Industry 4.0 Automation References Simulation software Windows-only software Robotics simulation software
36240703
https://en.wikipedia.org/wiki/Outercurve%20Foundation
Outercurve Foundation
The Outercurve Foundation was an independent 501(c)(6) non-profit corporation founded by Microsoft. Its goal was to "enable the exchange of code and understanding among software companies and open source communities." It ran several software projects, some of which were connected to the .NET Framework. It was founded on September 10, 2009 as the CodePlex Foundation, led mostly by Microsoft employees and affiliates. The free software community considered the foundation subversive and suspected that Microsoft's goal was to make people dependent on Windows and other software owned by Microsoft. The name Outercurve Foundation was adopted in September 2010. In November 2010, changes were made to the by-laws and the board was expanded. Outercurve then served the larger free and open-source community as a generalized low-overhead foundation for projects and organizations. Projects contributed by the group to the .NET Foundation include NuGet, Kudu and the ASP.NET AJAX library. Outercurve directors filed articles of dissolution with the Washington Secretary of State on April 22, 2017. See also Free software movement References 501(c)(6) nonprofit organizations Free software Microsoft Non-profit corporations
694222
https://en.wikipedia.org/wiki/Mach-O
Mach-O
Mach-O, short for Mach object file format, is a file format for executables, object code, shared libraries, dynamically loaded code, and core dumps. It was developed to replace the a.out format. Mach-O is used by most systems based on the Mach kernel. NeXTSTEP, macOS, and iOS are examples of systems that use this format for native executables, libraries and object code. Mach-O file layout Each Mach-O file is made up of one Mach-O header, followed by a series of load commands, followed by one or more segments, each of which contains between 0 and 255 sections. Mach-O uses the REL relocation format to handle references to symbols. When looking up symbols, Mach-O uses a two-level namespace that encodes each symbol into an "object/symbol name" pair that is then searched linearly, first by object and then by symbol name. The basic structure—a list of variable-length "load commands" that reference pages of data elsewhere in the file—was also used in the executable file format for Accent. The Accent file format was, in turn, based on an idea from Spice Lisp. Multi-architecture binaries Multiple Mach-O files can be combined in a multi-architecture binary. This allows a single binary file to contain code to support multiple instruction set architectures, for example for different generations and types of Apple devices, including different processor architectures like ARM and x86. Both plain Mach-O files and multi-architecture binaries begin with a 32-bit magic number that identifies the file type, as illustrated in the C sketch at the end of this article. Minimum OS version With the introduction of the Mac OS X 10.6 platform, the Mach-O format underwent a significant modification that causes binaries compiled on a computer running 10.6 or later to be (by default) executable only on computers running Mac OS X 10.6 or later. The difference stems from load commands that the dynamic linker in previous Mac OS X versions does not understand. Another significant change to the Mach-O format is the change in how the Link Edit tables (found in the __LINKEDIT section) function. In 10.6 these new Link Edit tables are compressed by removing unused and unneeded bits of information; however, Mac OS X 10.5 and earlier cannot read this new Link Edit table format. To make backwards-compatible executables, the linker flag "-mmacosx-version-min=" can be used. Other implementations Some versions of NetBSD have had Mach-O support added as part of an implementation of binary compatibility, which allowed some Mac OS 10.3 binaries to be executed. For Linux, a Mach-O loader was written by Shinichiro Hamaji that can load 10.6 binaries. As a more extensive solution based on this loader, the Darling Project aims at providing a complete environment allowing macOS applications to run on Linux. For the Ruby programming language, the ruby-macho library provides an implementation of a Mach-O binary parser and editor. See also Fat binary Universal binary Mac transition to Intel processors Mac transition to Apple silicon Xcode ELF Comparison of executable file formats References External links OS X ABI Mach-O File Format Reference (Apple Inc.) Mach Object Files (NEXTSTEP documentation) Mach-O Dynamic Library Reference Mach-O linking and loading tricks MachOView Executable file formats MacOS development NeXT Mach (kernel)
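The following minimal, self-contained C sketch classifies a file by the 32-bit magic field with which every Mach-O header and fat-binary header begins. The constant values match those documented for Apple's <mach-o/loader.h> and <mach-o/fat.h> headers, but they are redefined here so the program builds on any platform; it is an illustration of the format, not a replacement for real tooling.

/* A minimal sketch: classify a file by the leading 32-bit magic field
   of the Mach-O header. Constant values are as documented for Apple's
   <mach-o/loader.h> and <mach-o/fat.h>, redefined here for portability. */
#include <stdint.h>
#include <stdio.h>

#define MH_MAGIC     0xfeedfaceu  /* 32-bit Mach-O, native byte order  */
#define MH_CIGAM     0xcefaedfeu  /* 32-bit Mach-O, swapped byte order */
#define MH_MAGIC_64  0xfeedfacfu  /* 64-bit Mach-O, native byte order  */
#define MH_CIGAM_64  0xcffaedfeu  /* 64-bit Mach-O, swapped byte order */
#define FAT_MAGIC    0xcafebabeu  /* multi-architecture ("fat") binary */
#define FAT_CIGAM    0xbebafecau  /* fat binary, swapped byte order    */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "rb");
    if (f == NULL) {
        perror("fopen");
        return 1;
    }
    uint32_t magic = 0;
    size_t n = fread(&magic, sizeof magic, 1, f);  /* read the first 4 bytes */
    fclose(f);
    if (n != 1) {
        fprintf(stderr, "file too short\n");
        return 1;
    }
    switch (magic) {
    case MH_MAGIC:      puts("32-bit Mach-O");                break;
    case MH_CIGAM:      puts("32-bit Mach-O (byte-swapped)"); break;
    case MH_MAGIC_64:   puts("64-bit Mach-O");                break;
    case MH_CIGAM_64:   puts("64-bit Mach-O (byte-swapped)"); break;
    case FAT_MAGIC:
    case FAT_CIGAM:     puts("multi-architecture binary");    break;
    default:            puts("not a Mach-O file");            break;
    }
    return 0;
}

Note that the fat-binary magic 0xcafebabe coincides with the magic number of Java class files, so real tools apply further checks (for instance, bounding the architecture count in the fat header) before treating a file as a multi-architecture binary.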
8071599
https://en.wikipedia.org/wiki/Knowledge%20building%20community
Knowledge building community
A Knowledge Building Community (KBC) is a community in which the primary goal is knowledge creation rather than the construction of specific products or the completion of tasks. This notion is fundamental to knowledge building theory: if knowledge is not created for the benefit of a community, then knowledge building has not taken place. Examples of KBCs are Classrooms Academic research teams Modern management companies Modern business R&D groups Wikipedia (Wikimedia Foundation and its volunteer editors) Theoretical background Knowledge Building is a theory developed by Carl Bereiter and Marlene Scardamalia that deals with the construction of knowledge. To build knowledge, learners should collaborate with one another and establish common goals, hold group discussions, and synthesize ideas in such a way that their knowledge of a topic advances beyond their current understanding. Knowledge building is outwardly focused on generating contributions that learners can give back to the community. Thus, the product of knowledge building should be an "artifact"—such as a publication, an illustrated model, or a theory—which other learners can then use to advance their own understanding of that subject. Among the most widely cited models characterizing the processes of collaborative knowledge building are those of Garrison, Anderson & Archer (2001); Gunawardena, Lowe, & Anderson (1997); Harasim (2002); and Xin (2002). Principles A school culture that fosters KBCs supports research, innovation, and high expectations for student achievement and participation. The twelve KB principles are continuously emphasized for and by teachers, students, and the school environment as a whole. 12 Knowledge Building Principles (Adapted from Scardamalia, 2002) Real Ideas and Authentic Problems – students identify real problems to study Improvable Ideas – ideas are improvable rather than accepted or rejected Epistemic Agency – students plan and engage in the process Collective Responsibility for Community Knowledge – all participants contribute to community goals Democratizing Knowledge – all participants are empowered; no knowledge have/have-not lines Idea Diversity – knowledge advancement depends on diversity of ideas, just as an ecosystem depends on biodiversity KB Discourse – problems are progressively identified and addressed, and new conceptualizations built Rise Above – by transcending trivialities and oversimplifications, students work towards more inclusive principles and higher-level formulations of problems Constructive Use of Authoritative Sources – critically evaluate authoritative sources, don't just find "the answer" Pervasive KB – KB is a continuous process and can happen anywhere; it is not unique to the classroom Symmetric Knowledge Advance – "to give knowledge is to get knowledge"; there is no one expert Embedded and Transformative Assessment – integral to KB and helps to advance knowledge through identifying advances, problems, and gaps as work proceeds Use in classrooms The key focus for KBCs has been research on fostering KBCs in classrooms. Transforming a classroom into a KBC requires a significant shift in classroom norms and also in student and teacher identities. In this context, students define themselves through their personal learning goals and collaboratively pursue them. Students are viewed as intentional learners, working at the edge of their competence. Knowledge advances are not circumscribed by a teacher's knowledge. Specific to a KBC is the objectification of knowledge artifacts.
More precisely, whereas in a regular class questions, ideas and discussions are personal and ethereal constructs, in a KBC classroom they are public artifacts that have a permanent presence in a digital format, usually in the classroom database. For this reason, they can be analyzed, pointed at, talked about, and progressively refined over time. In order to be successful, the members of the knowledge building community should accomplish the following: Focus work on making advances to what the community already knows Embrace a general philosophy of inclusion Share openly what they do not understand, "What I need to understand..." or "What I need to know..." Respect each other's perspectives and tentative understandings Express disagreement in a constructive fashion Successful knowledge building initiatives at an elementary school "demonstrated sustained advancement of knowledge building as a principle-based, school-wide innovation." Rather than focusing on a particular procedure or pedagogical approach to KBC, this study emphasized using a principle-based approach. This approach "defines core values and principles, leaving teachers the challenge of engaging in reflective interpretation, using discretionary judgment, and making adaptive classroom decisions to accommodate their different contexts and possibilities." The KBC model immerses students in collaborative efforts to extend the knowledge of the classroom community rather than just individual learning. Students take ownership of their learning and knowledge building by completing the tasks of goal setting, long-term planning, and impromptu process control, rather than following predetermined scripts for learning set by the teacher. Understanding, integrating, and sharing information are all vital components of sustained KBC models. Because learners at the post-secondary level are more cognitively mature than younger students, researchers have observed many successful implementations of KBCs in higher education. Stefano Cacciamani published a study in 2010 that explored how an instructor could make the shift from knowledge transmission to knowledge construction. In the study, students enrolled in an online course initially modeled after a guided inquiry format, in which the teacher played a significant role in setting students' goals and monitoring their progress. This format eventually shifted to a self-organized inquiry model focused on knowledge construction, where students were expected to replace the teacher in setting class goals and monitoring their own progress. This fundamental change entailed the instructor's concession of increasing responsibility to students. The study's results indicate that students in the self-organized course engaged in critical thinking and displayed a greater effort to go beyond the given information than students in the guided inquiry course. To that effect, the authors concluded "...there is a suggestion in the early pilot work reported, that the self-organized inquiry approach favours a shift towards a knowledge construction perspective more so than the guided inquiry approach." Participants in a 2012 study by Philip used Knowledge Forum, an online environment designed to support knowledge building, to read, post, and respond to notes over the course of one semester.
The author identified several likely indicators of the presence of a KBC, including: A high level of overall activity (at least 50% of students read 60 or more postings) Universal participation (online participation comprised 40% of the students' final grades) The presence of reciprocal strong ties among students that indicate the potential for the transfer of complex information and "collective responsibility for the development of the community" Limitations Despite some promising results, efforts to implement KBCs in classrooms have also seen unintended consequences. Disparities in participation and maladaptive strategies aimed at reducing individual workload can lessen the effectiveness of KBCs for the community as a whole. Student relationships have a significant effect on participation patterns, and individual feelings of autonomy, relatedness, and intrinsic motivation all influence behaviors in KBCs. Diversity among peers, response lags, and deadlines can be negative influences on collaboration, causing stress, frustration, and difficulties in negotiating meaning and consensus. Without effective guidance from teachers, students tend to focus on individualistic contributions related to their areas of personal interest rather than developing a shared understanding or pursuing a common line of inquiry. The collaborative technology itself can also be problematic. The tool of choice may be difficult to use and thus present a steep learning curve for collaborators. In a review of fifty studies, Hew, Cheung, & Ng (2010) identified a series of "design quirks" common to these tools that frustrate students, such as the inaccessibility of other posts and features while in composition mode and the inability to edit and delete messages after posting. Unforeseen glitches in the technology can also add strain to the collaborative process. Computer support Scardamalia, Bereiter, and their team at the Center for Applied Cognitive Science at the University of Toronto developed a networked software system called Computer Supported Intentional Learning Environments, or CSILE, in the 1980s. In the 1990s, the system was re-engineered and released for commercial use as Knowledge Forum™. This software provides support for the cognitive processes involved in knowledge building, making them visible to both instructors and students. Discussions, comments, knowledge artifacts, and knowledge advances are all visible and improvable within Knowledge Forum. The software also includes tools that can be used for formative (self-)assessment of the quantity of student activity and the connections between students. Knowledge Forum has been used as a tool to support knowledge building communities in K-12 and higher-education settings. With the rapid growth of social networking software, a wide range of Web 2.0 tools with the capability to support community knowledge building have become available. Blogs, micro-blogs, wikis, and virtual worlds are just some of the Web 2.0 tools that have been implemented in K-12, higher education, and corporate settings. Likewise, learning management systems commonly contain tools for discussion that have been used as platforms for KBCs, though the alignment of these tools with the requirements of community knowledge building has been questioned. Knowledge mapping software can increase the number of hypotheses generated in group problem solving as well as the likelihood that a group will converge on a consensus.
Real-time collaborative editing platforms, like Google Docs, enable multiple people to work on a document simultaneously. These tools allow "evolving documents" to be refined repeatedly by multiple collaborators, and come equipped with built-in chat functionality, commenting, and reviewing tools for facilitating knowledge construction through suggestion-making. See also Knowledge Forum Carl Bereiter Marlene Scardamalia Constructivism (learning theory) Social networking service References External links Knowledge Forum – Software for knowledge building Fle3 Learning Environment – Fle3 is an open-source online tool for knowledge building Educational psychology
48540079
https://en.wikipedia.org/wiki/2015%E2%80%9316%20Troy%20Trojans%20men%27s%20basketball%20team
2015–16 Troy Trojans men's basketball team
The 2015–16 Troy Trojans men's basketball team represented Troy University during the 2015–16 NCAA Division I men's basketball season. The Trojans, led by third-year head coach Phil Cunningham, played their home games at Trojan Arena and were members of the Sun Belt Conference. They finished the season 9–22, 4–16 in Sun Belt play, placing last. They failed to qualify for the Sun Belt Tournament. Roster Schedule Exhibition Regular season References Troy Trojans men's basketball seasons Troy 2015 in sports in Alabama 2016 in sports in Alabama
54281870
https://en.wikipedia.org/wiki/Diversity%20in%20open-source%20software
Diversity in open-source software
The open-source-software movement is commonly cited as having a diversity problem. In some ways this reflects the general gender disparity in computing, but the disparity in open source is generally assumed to be even more severe. The same can be said of the racial and ethnic diversity of the movement. "Diversity" in this article uses the academic Critical Theory definition. The topic has been and continues to be the subject of significant controversy within the open-source community. Background Open source software (OSS) is a non-traditional model of software development, in which source code is created by a number of virtual volunteers and can be modified by other members of the community. The number of developers working on an OSS project can range from a few to thousands, but in many projects only developers deemed trustworthy by the project maintainers have the privilege of making additions to the main repository. The software developed is freely available for use, and the number of users varies from a few to many millions. Over time, as OSS has continued to grow and offer new solutions to everyday problems, an increasingly diverse user base has evolved. In comparison, since the emergence of OSS in the early 1990s, the community of OSS developers has remained dominated by young men. Obstacles for inclusion Hostile Culture A common criticism levelled at the open source community is that critiques of code contributed to projects have a tendency to become personal attacks. In GitHub's 2017 survey, 50% of the 5,500 respondents claimed they had witnessed toxic interactions while working on open-source projects, and 18% said they had suffered through a negative interaction themselves. Dismissive responses, conflict, and unwelcoming language were cited as the third, fourth, and sixth biggest problems with open source respectively. An oft-repeated sentiment throughout the community is that conflict isn't widespread, but rather quite visible, due to the public nature of forums and mailing lists. The figures, however, make this idea questionable. Some members of the community have cited the community's toxicity as the main reason for open source's diversity problem. Gender Bias In 2017, 3 million "pull requests" from 330,000 GitHub users, of whom 21,000 were women, were examined; code written by women was found to be accepted more often (78.6%) than code written by men (74.6%). In the cases of developers who were not insiders of a project and whose gender was assumed identifiable by username or profile picture, however, code by men was approved at higher rates. The researchers involved in this project concluded that gender bias is present and contributes to the lack of gender diversity within OSS communities. Gender diversity The more recent entry of women into the OSS movement has been suggested as a cause of their underrepresentation in the field; of all women who had contributed to OSS up until 2013, 38.45% began to do so between 2009 and 2013, in comparison to only 18.75% of men. The gender disparity in open source is even greater than the field-wide gender disparity in computing. This has been found by a number of surveys: A 2002 survey of 2,784 open-source-software developers found that 1.1% of them were women. A 2013 survey of 2,183 open-source contributors found that 81.4% were men and 10.4% were women.
This survey included both software contributors and non-software contributors, and women were much more likely to be non-software contributors. A 2017 survey of 5,500 contributors to projects on GitHub found that 95% of contributors were men and 3% were women. In 2015, Red Hat started the Women in Open Source Awards. Racial and ethnic diversity Developers identifying as ethnic and national minorities, specifically Black people and Latinos, are considered to be underrepresented in OSS. Of 5,500 open source developers surveyed in 2017, the representation of immigrants, from and to anywhere in the world, was 26%. While 37.8% of professional computer programmers in the U.S. workforce identified as ethnic or national minorities in 2017, only 16% did in open source. Sexual minority diversity A higher percentage of open-source contributors are members of a sexual minority. A 2017 survey of 5,500 GitHub contributors found that 7% were LGBT, compared to 4% of the general population. A 2018 survey conducted by StackOverflow found that, out of their sample of 100,000 developers, 6.7% identified as LGBT+ and 0.9% as non-binary or trans. This suggests that the open-source community is roughly in line with the software industry's norm. Notable LGBT+ members of the open-source community include: Coraline Ada Ehmke, transgender, creator of the Contributor Covenant. Jon "Maddog" Hall, gay, member of The Linux Professional Institute's board and early advocate of Linux. Sage Sharp, non-binary, Linux kernel maintainer until 2015. Josh Simmons, bisexual, member of the Open Source Initiative board since 2016. Organizations LinuxChix is a women-oriented Linux community encouraging participation in Linux OSS by creating conflict-free and nurturing environments for women. The diversity initiative EquitableTech targets minorities in OSS by offering skill training for Black and Latino computer science students with the goal of increasing diversity in OSS. Several organisations have been set up with the intention of boosting the visibility of the open-source community's LGBT+ members. Examples include Trans*H4ck, Trans Code, and Lesbians Who Tech. Trans*H4CK was the first transgender "hackathon", with the goal of bringing awareness to issues specific to the transgender community. After launching in 2013, it has increased the visibility of transgender technologists and entrepreneurs in the technology industry. References Diversity in computing Open-source movement
148285
https://en.wikipedia.org/wiki/64-bit%20computing
64-bit computing
In computer architecture, 64-bit integers, memory addresses, or other data units are those that are 64 bits (8 octets) wide. Also, 64-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on processor registers, address buses, or data buses of that size. 64-bit microcomputers are computers in which 64-bit microprocessors are the norm. From the software perspective, 64-bit computing means the use of machine code with 64-bit virtual memory addresses. However, not all 64-bit instruction sets support full 64-bit virtual memory addresses; x86-64 and ARMv8, for example, support only 48 bits of virtual address, with the remaining 16 bits of the virtual address required to be all 0's or all 1's, and several 64-bit instruction sets support fewer than 64 bits of physical memory address. The term 64-bit describes a generation of computers in which 64-bit processors are the norm. 64 bits is a word size that defines certain classes of computer architecture, buses, memory, and CPUs and, by extension, the software that runs on them. 64-bit CPUs have been used in supercomputers since the 1970s (Cray-1, 1975) and in reduced instruction set computer (RISC) based workstations and servers since the early 1990s, notably the MIPS R4000, R8000, and R10000, the Digital Equipment Corporation (DEC) Alpha, the Sun Microsystems UltraSPARC, and the IBM RS64 and POWER3 and later IBM Power microprocessors. In 2003, 64-bit CPUs were introduced to the (formerly 32-bit) mainstream personal computer market in the form of x86-64 processors and the PowerPC G5, and were introduced in 2012 into the ARM architecture targeting smartphones and tablet computers, first sold on September 20, 2013, in the iPhone 5S powered by the ARMv8-A Apple A7 system on a chip (SoC). A 64-bit register can hold any of 2⁶⁴ (over 18 quintillion, or about 1.8×10¹⁹) different values. The range of integer values that can be stored in 64 bits depends on the integer representation used. With the two most common representations, the range is 0 through 18,446,744,073,709,551,615 (2⁶⁴ − 1) for representation as an (unsigned) binary number, and −9,223,372,036,854,775,808 (−2⁶³) through 9,223,372,036,854,775,807 (2⁶³ − 1) for representation as two's complement. Hence, a processor with 64-bit memory addresses can directly access 2⁶⁴ bytes (16 exbibytes or EiB) of byte-addressable memory; these ranges are illustrated in the short C sketch below. With no further qualification, a 64-bit computer architecture generally has integer and addressing processor registers that are 64 bits wide, allowing direct support for 64-bit data types and addresses. However, a CPU might have external data buses or address buses with different sizes from the registers, even larger (the 32-bit Pentium had a 64-bit data bus, for instance). The term may also refer to the size of low-level data types, such as 64-bit floating-point arithmetic numbers. Architectural implications Processor registers are typically divided into several groups: integer, floating-point, single instruction, multiple data (SIMD), control, and often special registers for address arithmetic, which may have various uses and names such as address, index, or base registers. However, in modern designs, these functions are often performed by more general-purpose integer registers. In most processors, only integer or address registers can be used to address data in memory; the other types of registers cannot.
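As a minimal illustration of the value ranges given above, the following sketch uses only the standard C <stdint.h> and <inttypes.h> headers; the pointer-size line simply reports what the compiling platform provides, typically 8 bytes on a 64-bit system.

/* A short sketch of 64-bit integer ranges using standard C fixed-width types. */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    /* unsigned: 0 .. 2^64 - 1 = 18,446,744,073,709,551,615 */
    printf("unsigned 64-bit: 0 .. %" PRIu64 "\n", UINT64_MAX);
    /* two's complement signed: -2^63 .. 2^63 - 1 */
    printf("signed 64-bit:   %" PRId64 " .. %" PRId64 "\n",
           INT64_MIN, INT64_MAX);
    /* On a typical 64-bit (LP64) platform a pointer is 8 bytes, so 2^64
       byte addresses are representable, even though hardware such as
       x86-64 and ARMv8 may honor only 48 of those bits. */
    printf("pointer size: %zu bytes\n", sizeof(void *));
    return 0;
}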
The size of the integer and address registers therefore normally limits the amount of directly addressable memory, even if there are registers, such as floating-point registers, that are wider. Most high-performance 32-bit and 64-bit processors (some notable exceptions are older or embedded ARM architecture (ARM) and 32-bit MIPS architecture (MIPS) CPUs) have integrated floating-point hardware, which is often, but not always, based on 64-bit units of data. For example, although the x86/x87 architecture has instructions able to load and store 64-bit (and 32-bit) floating-point values in memory, the internal floating-point data and register format is 80 bits wide, while the general-purpose registers are 32 bits wide. In contrast, the 64-bit Alpha family uses a 64-bit floating-point data and register format, and 64-bit integer registers. History Many computer instruction sets are designed so that a single integer register can store the memory address to any location in the computer's physical or virtual memory. Therefore, the total number of addresses to memory is often determined by the width of these registers. The IBM System/360 of the 1960s was an early 32-bit computer; it had 32-bit integer registers, although it only used the low-order 24 bits of a word for addresses, resulting in a 16 MiB address space. 32-bit superminicomputers, such as the DEC VAX, became common in the 1970s, and 32-bit microprocessors, such as the Motorola 68000 family and the 32-bit members of the x86 family starting with the Intel 80386, appeared in the mid-1980s, making 32 bits something of a de facto consensus as a convenient register size. A 32-bit address register meant that 2³² addresses, or 4 GiB of random-access memory (RAM), could be referenced. When these architectures were devised, 4 GiB of memory was so far beyond the typical amounts (4 MiB) in installations that this was considered to be enough headroom for addressing. 4.29 billion addresses were considered an appropriate size to work with for another important reason: 4.29 billion integers are enough to assign unique references to most entities in applications like databases. Some supercomputer architectures of the 1970s and 1980s, such as the Cray-1, used registers up to 64 bits wide, and supported 64-bit integer arithmetic, although they did not support 64-bit addressing. In the mid-1980s, Intel i860 development began, culminating in a 1989 release (too late for Windows NT); the i860 had 32-bit integer registers and 32-bit addressing, so it was not a fully 64-bit processor, although its graphics unit supported 64-bit integer arithmetic. However, 32 bits remained the norm until the early 1990s, when the continual reductions in the cost of memory led to installations with amounts of RAM approaching 4 GiB, and the use of virtual memory spaces exceeding the 4 GiB ceiling became desirable for handling certain types of problems. In response, MIPS and DEC developed 64-bit microprocessor architectures, initially for high-end workstation and server machines. By the mid-1990s, HAL Computer Systems, Sun Microsystems, IBM, Silicon Graphics, and Hewlett Packard had developed 64-bit architectures for their workstation and server systems. A notable exception to this trend was the mainframes from IBM, which then used 32-bit data and 31-bit address sizes; the IBM mainframes did not include 64-bit processors until 2000. During the 1990s, several low-cost 64-bit microprocessors were used in consumer electronics and embedded applications.
Notably, the Nintendo 64 and the PlayStation 2 had 64-bit microprocessors before their introduction in personal computers. High-end printers, network equipment, and industrial computers also used 64-bit microprocessors, such as the Quantum Effect Devices R5000. 64-bit computing started to trickle down to the personal computer desktop from 2003 onward, when some models in Apple's Macintosh lines switched to PowerPC 970 processors (termed G5 by Apple), and Advanced Micro Devices (AMD) released its first 64-bit x86-64 processor.
64-bit data timeline
1961 IBM delivers the IBM 7030 Stretch supercomputer, which uses 64-bit data words and 32- or 64-bit instruction words.
1974 Control Data Corporation launches the CDC Star-100 vector supercomputer, which uses a 64-bit word architecture (prior CDC systems were based on a 60-bit architecture). International Computers Limited launches the ICL 2900 Series with 32-bit, 64-bit, and 128-bit two's complement integers; 64-bit and 128-bit floating point; 32-bit, 64-bit, and 128-bit packed decimal; and a 128-bit accumulator register. The architecture has survived through a succession of ICL and Fujitsu machines. The latest is the Fujitsu Supernova, which emulates the original environment on 64-bit Intel processors.
1976 Cray Research delivers the first Cray-1 supercomputer, which is based on a 64-bit word architecture and will form the basis for later Cray vector supercomputers.
1983 Elxsi launches the Elxsi 6400 parallel minisupercomputer. The Elxsi architecture has 64-bit data registers but a 32-bit address space.
1989 Intel introduces the Intel i860 reduced instruction set computer (RISC) processor. Marketed as a "64-Bit Microprocessor", it had essentially a 32-bit architecture, enhanced with a 3D graphics unit capable of 64-bit integer operations.
1993 Atari introduces the Atari Jaguar video game console, which includes some 64-bit wide data paths in its architecture.
64-bit address timeline
1991 MIPS Computer Systems produces the first 64-bit microprocessor, the R4000, which implements the MIPS III architecture, the third revision of its MIPS architecture. The CPU is used in SGI graphics workstations starting with the IRIS Crimson. Kendall Square Research delivers its first KSR1 supercomputer, based on a proprietary 64-bit RISC processor architecture running OSF/1.
1992 Digital Equipment Corporation (DEC) introduces the pure 64-bit Alpha architecture, which was born from the PRISM project.
1994 Intel announces plans for the 64-bit IA-64 architecture (jointly developed with Hewlett-Packard) as a successor to its 32-bit IA-32 processors. A 1998 to 1999 launch date was targeted.
1995 Sun launches a 64-bit SPARC processor, the UltraSPARC. Fujitsu-owned HAL Computer Systems launches workstations based on a 64-bit CPU, HAL's independently designed first-generation SPARC64. IBM releases the A10 and A30 microprocessors, the first 64-bit PowerPC AS processors. IBM also releases a 64-bit AS/400 system upgrade, which can convert the operating system, database and applications.
1996 Nintendo introduces the Nintendo 64 video game console, built around a low-cost variant of the MIPS R4000. HP releases the first implementation of its 64-bit PA-RISC 2.0 architecture, the PA-8000.
1998 IBM releases the POWER3 line of full-64-bit PowerPC/POWER processors.
1999 Intel releases the instruction set for the IA-64 architecture. AMD publicly discloses its set of 64-bit extensions to IA-32, called x86-64 (later branded AMD64).
2000 IBM ships its first 64-bit z/Architecture mainframe, the zSeries z900. z/Architecture is a 64-bit version of the 32-bit ESA/390 architecture, a descendant of the 32-bit System/360 architecture.
2001 Intel ships its IA-64 processor line, after repeated delays in getting to market. Now branded Itanium and targeting high-end servers, sales fail to meet expectations.
2003 AMD introduces its Opteron and Athlon 64 processor lines, based on its AMD64 architecture, which is the first x86-based 64-bit processor architecture. Apple also ships the 64-bit "G5" PowerPC 970 CPU produced by IBM. Intel maintains that its Itanium chips will remain its only 64-bit processors.
2004 Intel, reacting to the market success of AMD, admits it has been developing a clone of the AMD64 extensions named IA-32e (later renamed EM64T, then yet again renamed to Intel 64). Intel ships updated versions of its Xeon and Pentium 4 processor families supporting the new 64-bit instruction set. VIA Technologies announces the Isaiah 64-bit processor.
2006 Sony, IBM, and Toshiba begin manufacturing the 64-bit Cell processor for use in the PlayStation 3, servers, workstations, and other appliances. Intel releases the Core 2 Duo as the first mainstream x86-64 processor for its mobile, desktop, and workstation line. Earlier 64-bit extension processor lines were not widely available in the consumer retail market (most 64-bit Pentium 4 and Pentium D chips were sold to OEMs); the 64-bit Pentium 4, Pentium D, and Celeron did not enter mass production until late 2006 because of poor yields (most good-yield wafers were reserved for server and mainframe parts, while the mainstream lines remained 130 nm 32-bit processors until 2006), and they soon moved to the low end of the market after the Core 2 debuted. AMD releases its first 64-bit mobile processor, manufactured in 90 nm.
2011 ARM Holdings announces ARMv8-A, the first 64-bit version of the ARM architecture.
2012 ARM Holdings announces the Cortex-A53 and Cortex-A57 cores, its first cores based on its 64-bit architecture, on 30 October 2012.
2013 Apple announces the iPhone 5S, with the world's first 64-bit processor in a smartphone, which uses their A7 ARMv8-A-based system-on-a-chip.
2014 Google announces the Nexus 9 tablet, the first Android device to run on the 64-bit Tegra K1 chip.
64-bit operating system timeline
1985 Cray releases UNICOS, the first 64-bit implementation of the Unix operating system.
1993 DEC releases the 64-bit DEC OSF/1 AXP Unix-like operating system (later renamed Tru64 UNIX) for its systems based on the Alpha architecture.
1994 Support for the R8000 processor is added by Silicon Graphics to the IRIX operating system in release 6.0.
1995 DEC releases OpenVMS 7.0, the first full 64-bit version of OpenVMS for Alpha. The first 64-bit Linux distribution for the Alpha architecture is released.
1996 Support for the R4x00 processors in 64-bit mode is added by Silicon Graphics to the IRIX operating system in release 6.2.
1998 Sun releases Solaris 7, with full 64-bit UltraSPARC support.
2000 IBM releases z/OS, a 64-bit operating system descended from MVS, for the new zSeries 64-bit mainframes; 64-bit Linux on z Systems follows the CPU release almost immediately.
2001 Linux becomes the first OS kernel to fully support x86-64 (on a simulator, as no x86-64 processors had been released yet). Microsoft releases Windows XP 64-Bit Edition for the Itanium's IA-64 architecture; it could run 32-bit applications through an execution layer.
2003 Apple releases its Mac OS X 10.3 "Panther" operating system, which adds support for native 64-bit integer arithmetic on PowerPC 970 processors. Several Linux distributions release with support for AMD64. FreeBSD releases with support for AMD64.
2005 On January 4, Microsoft discontinues Windows XP 64-Bit Edition, as no PCs with IA-64 processors had been available since the previous September, and announces that it is developing x86-64 versions of Windows to replace it. On January 31, Sun releases Solaris 10 with support for AMD64 and EM64T processors. On April 29, Apple releases Mac OS X 10.4 "Tiger", which provides limited support for 64-bit command-line applications on machines with PowerPC 970 processors; later versions for Intel-based Macs supported 64-bit command-line applications on Macs with EM64T processors. On April 30, Microsoft releases Windows XP Professional x64 Edition and Windows Server 2003 x64 Edition for AMD64 and EM64T processors.
2006 Microsoft releases Windows Vista, including a 64-bit version for AMD64/EM64T processors that retains 32-bit compatibility. In the 64-bit version, all Windows applications and components are 64-bit, although many also have their 32-bit versions included for compatibility with plug-ins.
2007 Apple releases Mac OS X 10.5 "Leopard", which fully supports 64-bit applications on machines with PowerPC 970 or EM64T processors.
2009 Microsoft releases Windows 7, which, like Windows Vista, includes a full 64-bit version for AMD64/Intel 64 processors; most new computers are loaded by default with a 64-bit version. Microsoft also releases Windows Server 2008 R2, which is the first 64-bit-only server operating system. Apple releases Mac OS X 10.6, "Snow Leopard", which ships with a 64-bit kernel for AMD64/Intel64 processors, although only certain recent models of Apple computers will run the 64-bit kernel by default. Most applications bundled with Mac OS X 10.6 are now also 64-bit.
2011 Apple releases Mac OS X 10.7, "Lion", which runs the 64-bit kernel by default on supported machines. Older machines that are unable to run the 64-bit kernel run the 32-bit kernel, but, as with earlier releases, can still run 64-bit applications; Lion does not support machines with 32-bit processors. Nearly all applications bundled with Mac OS X 10.7 are now also 64-bit, including iTunes.
2012 Microsoft releases Windows 8, which supports UEFI Class 3 (UEFI without CSM) and Secure Boot.
2013 Apple releases iOS 7, which, on devices with AArch64 processors, has a 64-bit kernel that supports 64-bit applications.
2014 Google releases Android Lollipop, the first version of the Android operating system with support for 64-bit processors.
2017 Apple releases iOS 11, supporting only devices with AArch64 processors. It has a 64-bit kernel that only supports 64-bit applications. 32-bit applications are no longer compatible.
2019 Apple releases macOS 10.15 "Catalina", dropping support for 32-bit Intel applications.
2021 Google releases Android 12, dropping support for 32-bit applications. Microsoft releases Windows 11 on October 5, which only supports 64-bit systems, dropping support for IA-32 systems.
Limits of processors
In principle, a 64-bit microprocessor can address 16 EiB (2^64 bytes, or about 18.4 exabytes) of memory. However, not all instruction sets, and not all processors implementing those instruction sets, support a full 64-bit virtual or physical address space.
The x86-64 architecture allows 48 bits for virtual memory and, for any given processor, up to 52 bits for physical memory. These limits allow memory sizes of 256 TiB (2^48 bytes) and 4 PiB (2^52 bytes), respectively. A PC cannot currently contain 4 pebibytes of memory (due to the physical size of the memory chips), but AMD envisioned large servers, shared memory clusters, and other uses of physical address space that might approach this in the foreseeable future. Thus the 52-bit physical address provides ample room for expansion while not incurring the cost of implementing full 64-bit physical addresses. Similarly, the 48-bit virtual address space was designed to provide 65,536 (2^16) times the 32-bit limit of 4 GiB (2^32 bytes), allowing room for later expansion and incurring no overhead of translating full 64-bit addresses.
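The 48-bit rule described earlier (the upper bits of the virtual address must copy bit 47, i.e. be all 0s or all 1s) can be expressed as a sign-extension check. The helper below is an illustrative sketch, not taken from any particular kernel, and it relies on the near-universal arithmetic right shift for signed integers:

#include <stdbool.h>
#include <stdint.h>

/* Returns true if va is a "canonical" x86-64 virtual address: bits
   63..48 must all equal bit 47, so sign-extending the low 48 bits
   must reproduce the original value. */
static bool is_canonical_48(uint64_t va) {
    int64_t sign_extended = ((int64_t)(va << 16)) >> 16;
    return (uint64_t)sign_extended == va;
}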
The Power ISA v3.0 allows 64 bits for an effective address, mapped to a segmented address with between 65 and 78 bits allowed, for virtual memory, and, for any given processor, up to 60 bits for physical memory. The Oracle SPARC Architecture 2015 allows 64 bits for virtual memory and, for any given processor, between 40 and 56 bits for physical memory. The ARM AArch64 Virtual Memory System Architecture allows 48 bits for virtual memory and, for any given processor, from 32 to 48 bits for physical memory. The DEC Alpha specification requires a minimum of 43 bits of virtual memory address space (8 TiB) to be supported, and the hardware needs to check and trap if the remaining unsupported bits are zero (to support compatibility on future processors). The Alpha 21064 supported 43 bits of virtual memory address space (8 TiB) and 34 bits of physical memory address space (16 GiB). The Alpha 21164 supported 43 bits of virtual memory address space (8 TiB) and 40 bits of physical memory address space (1 TiB). The Alpha 21264 supported a user-configurable 43 or 48 bits of virtual memory address space (8 TiB or 256 TiB) and 44 bits of physical memory address space (16 TiB).
64-bit applications
32-bit vs 64-bit
A change from a 32-bit to a 64-bit architecture is a fundamental alteration, as most operating systems must be extensively modified to take advantage of the new architecture, because that software has to manage the actual memory-addressing hardware. Other software must also be ported to use the new abilities; older 32-bit software may be supported either by virtue of the 64-bit instruction set being a superset of the 32-bit instruction set, so that processors that support the 64-bit instruction set can also run code for the 32-bit instruction set, or through software emulation, or by the actual implementation of a 32-bit processor core within the 64-bit processor, as with some Itanium processors from Intel, which included an IA-32 processor core to run 32-bit x86 applications. The operating systems for those 64-bit architectures generally support both 32-bit and 64-bit applications.
One significant exception to this is the IBM AS/400, software for which is compiled into a virtual instruction set architecture (ISA) called Technology Independent Machine Interface (TIMI); TIMI code is then translated to native machine code by low-level software before being executed. The translation software is all that must be rewritten to move the full OS and all software to a new platform, as when IBM transitioned the native instruction set for AS/400 from the older 32/48-bit IMPI to the newer 64-bit PowerPC-AS, codenamed Amazon. The IMPI instruction set was quite different from even 32-bit PowerPC, so this transition was even bigger than moving a given instruction set from 32 to 64 bits.
On 64-bit hardware with x86-64 architecture (AMD64), most 32-bit operating systems and applications can run with no compatibility issues. While the larger address space of 64-bit architectures makes working with large data sets in applications such as digital video, scientific computing, and large databases easier, there has been considerable debate on whether they or their 32-bit compatibility modes will be faster than comparably priced 32-bit systems for other tasks.
A compiled Java program can run on a 32- or 64-bit Java virtual machine with no modification. The lengths and precision of all the built-in types, such as char, short, int, long, float, and double, and the types that can be used as array indices, are specified by the standard and are not dependent on the underlying architecture. Java programs that run on a 64-bit Java virtual machine have access to a larger address space.
Speed is not the only factor to consider in comparing 32-bit and 64-bit processors. Applications such as multi-tasking, stress testing, and clustering – for high-performance computing (HPC) – may be more suited to a 64-bit architecture when deployed appropriately. For this reason, 64-bit clusters have been widely deployed in large organizations, such as IBM, HP, and Microsoft. Summary:
A 64-bit processor performs best with 64-bit software.
A 64-bit processor may have backward compatibility, allowing it to run 32-bit application software for the 32-bit version of its instruction set, and may also support running 32-bit operating systems for the 32-bit version of its instruction set.
A 32-bit processor is incompatible with 64-bit software.
Pros and cons
A common misconception is that 64-bit architectures are no better than 32-bit architectures unless the computer has more than 4 GiB of random-access memory. This is not entirely true:
Some operating systems and certain hardware configurations limit the physical memory space to 3 GiB on IA-32 systems, due to much of the 3–4 GiB region being reserved for hardware addressing; see 3 GiB barrier; 64-bit architectures can address far more than 4 GiB. However, IA-32 processors from the Pentium Pro onward allow a 36-bit physical memory address space, using Physical Address Extension (PAE), which gives a 64 GiB physical address range, of which up to 62 GiB may be used by main memory; operating systems that support PAE may not be limited to 4 GiB of physical memory, even on IA-32 processors. However, drivers and other kernel mode software, more so older versions, may be incompatible with PAE; this has been cited as the reason for 32-bit versions of Microsoft Windows being limited to 4 GiB of physical RAM (although the validity of this explanation has been disputed).
Some operating systems reserve portions of process address space for OS use, effectively reducing the total address space available for mapping memory for user programs. For instance, 32-bit Windows reserves 1 or 2 GiB (depending on the settings) of the total address space for the kernel, which leaves only 3 or 2 GiB (respectively) of the address space available for user mode. This limit is much higher on 64-bit operating systems.
Memory-mapped files are becoming more difficult to implement in 32-bit architectures as files of over 4 GiB become more common; such large files cannot be memory-mapped easily to 32-bit architectures, as only part of the file can be mapped into the address space at a time, and to access such a file by memory mapping, the parts mapped must be swapped into and out of the address space as needed. This is a problem, as memory mapping, if properly implemented by the OS, is one of the most efficient disk-to-memory methods.
Some 64-bit programs, such as encoders, decoders and encryption software, can benefit greatly from 64-bit registers, while the performance of other programs, such as 3D graphics-oriented ones, remains unaffected when switching from a 32-bit to a 64-bit environment.
Some 64-bit architectures, such as x86-64 and AArch64, support more general-purpose registers than their 32-bit counterparts (although this is not due specifically to the word length). This leads to a significant speed increase for tight loops, since the processor does not have to fetch data from the cache or main memory if the data can fit in the available registers. Example in C:

int a, b, c, d, e;
for (a = 0; a < 100; a++) {
    b = a;
    c = b;
    d = c;
    e = d;
}

This code first creates 5 values: a, b, c, d and e, and then puts them in a loop. During the loop, this code changes the value of b to the value of a, the value of c to the value of b, the value of d to the value of c, and the value of e to the value of d. This has the same effect as changing all the values to a. If a processor can keep only two or three values or variables in registers, it would need to move some values between memory and registers to be able to process variables d and e as well; this is a process that takes many CPU cycles. A processor that can hold all values and variables in registers can loop through them with no need to move data between registers and memory for each iteration. This behavior can easily be compared with virtual memory, although any effects are contingent on the compiler.
The main disadvantage of 64-bit architectures is that, relative to 32-bit architectures, the same data occupies more space in memory (due to longer pointers and possibly other types, and alignment padding). This increases the memory requirements of a given process and can have implications for efficient processor cache use. Maintaining a partial 32-bit model is one way to handle this, and is in general reasonably effective. For example, the z/OS operating system takes this approach, requiring program code to reside in 31-bit address spaces (the high order bit is not used in address calculation on the underlying hardware platform) while data objects can optionally reside in 64-bit regions. Not all such applications require a large address space or manipulate 64-bit data items, so these applications do not benefit from these features.
Software availability
x86-based 64-bit systems sometimes lack equivalents of software that is written for 32-bit architectures. The most severe problem in Microsoft Windows is incompatible device drivers for obsolete hardware. Most 32-bit application software can run on a 64-bit operating system in a compatibility mode, also termed an emulation mode, e.g., Microsoft WoW64 Technology for IA-64 and AMD64. The 64-bit Windows Native Mode driver environment runs atop the 64-bit kernel, which cannot call 32-bit Win32 subsystem code (often devices whose actual hardware function is emulated in user mode software, like Winprinters).
Because 64-bit drivers for most devices were unavailable until early 2007 (Vista x64), using a 64-bit version of Windows was considered a challenge. However, the trend has since moved toward 64-bit computing, more so as memory prices dropped and the use of more than 4 GiB of RAM increased. Most manufacturers started to provide both 32-bit and 64-bit drivers for new devices, so unavailability of 64-bit drivers ceased to be a problem. 64-bit drivers were not provided for many older devices, which could consequently not be used in 64-bit systems. Driver compatibility was less of a problem with open-source drivers, as 32-bit ones could be modified for 64-bit use. Support for hardware made before early 2007 was problematic for open-source platforms, due to the relatively small number of users.
64-bit versions of Windows cannot run 16-bit software. However, most 32-bit applications will work well. 64-bit users are forced to install a virtual machine of a 16- or 32-bit operating system to run 16-bit applications.
Mac OS X 10.4 "Tiger" and Mac OS X 10.5 "Leopard" had only a 32-bit kernel, but they can run 64-bit user-mode code on 64-bit processors. Mac OS X 10.6 "Snow Leopard" had both 32- and 64-bit kernels, and, on most Macs, used the 32-bit kernel even on 64-bit processors. This allowed those Macs to support 64-bit processes while still supporting 32-bit device drivers, although without 64-bit drivers and the performance advantages that can come with them. Mac OS X 10.7 "Lion" ran with a 64-bit kernel on more Macs, and OS X 10.8 "Mountain Lion" and later macOS releases only have a 64-bit kernel. On systems with 64-bit processors, both the 32- and 64-bit macOS kernels can run 32-bit user-mode code, and all versions of macOS up to macOS Mojave (10.14) include 32-bit versions of libraries that 32-bit applications would use, so 32-bit user-mode software for macOS will run on those systems. The 32-bit versions of libraries were removed by Apple in macOS Catalina (10.15).
Linux and most other Unix-like operating systems, and the C and C++ toolchains for them, have supported 64-bit processors for many years. Many applications and libraries for those platforms are open-source software, written in C and C++, so that if they are 64-bit-safe, they can be compiled into 64-bit versions. This source-based distribution model, with an emphasis on frequent releases, makes availability of application software for those operating systems less of an issue.
64-bit data models
In 32-bit programs, pointers and data types such as integers generally have the same length. This is not necessarily true on 64-bit machines. Mixing data types in programming languages such as C and its descendants such as C++ and Objective-C may thus work on 32-bit implementations but not on 64-bit implementations. In many programming environments for C and C-derived languages on 64-bit machines, int variables are still 32 bits wide, but long integers and pointers are 64 bits wide. These are described as having an LP64 data model, which is an abbreviation of "Long, Pointer, 64". Other models are the ILP64 data model, in which all three data types are 64 bits wide, and the SILP64 model, where short integers are also 64 bits wide. However, in most cases the modifications required are relatively minor and straightforward, and many well-written programs can simply be recompiled for the new environment with no changes. Another alternative is the LLP64 model, which maintains compatibility with 32-bit code by leaving both int and long as 32-bit.
LL refers to the long long integer type, which is at least 64 bits on all platforms, including 32-bit environments.
There are also systems with 64-bit processors using an ILP32 data model, with the addition of 64-bit long long integers; this is also used on many platforms with 32-bit processors. This model reduces code size and the size of data structures containing pointers, at the cost of a much smaller address space, a good choice for some embedded systems. For instruction sets such as x86 and ARM in which the 64-bit version of the instruction set has more registers than does the 32-bit version, it provides access to the additional registers without the space penalty. It is common in 64-bit RISC machines, explored in x86 as the x32 ABI, and has recently been used in the Apple Watch Series 4 and 5.
Many 64-bit platforms today use an LP64 model (including Solaris, AIX, HP-UX, Linux, macOS, BSD, and IBM z/OS). Microsoft Windows uses an LLP64 model. The disadvantage of the LP64 model is that storing a long into an int may truncate. On the other hand, converting a pointer to a long will "work" in LP64. In the LLP64 model, the reverse is true. These are not problems which affect fully standard-compliant code, but code is often written with implicit assumptions about the widths of data types. C code should prefer (u)intptr_t instead of long when casting pointers into integer objects.
A programming model is a choice made to suit a given compiler, and several can coexist on the same OS. However, the programming model chosen as the primary model for the OS application programming interface (API) typically dominates.
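The difference between the models is easy to observe. A minimal sketch: on an LP64 system the program below typically prints "4 8 8", while on LLP64 (64-bit Windows) it typically prints "4 4 8"; the pointer round-trip uses uintptr_t, as recommended above, so it is correct under both models:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* int / long / pointer widths differ between LP64 and LLP64. */
    printf("%zu %zu %zu\n", sizeof(int), sizeof(long), sizeof(void *));

    int x = 42;
    /* Portable on both models: round-trip through uintptr_t, not long. */
    uintptr_t bits = (uintptr_t)&x;
    int *p = (int *)bits;
    printf("%d\n", *p);
    return 0;
}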
Another consideration is the data model used for device drivers. Drivers make up the majority of the operating system code in most modern operating systems (although many may not be loaded when the operating system is running). Many drivers use pointers heavily to manipulate data, and in some cases have to load pointers of a certain size into the hardware they support for direct memory access (DMA). As an example, a driver for a 32-bit PCI device asking the device to DMA data into upper areas of a 64-bit machine's memory could not satisfy requests from the operating system to load data from the device to memory above the 4 gibibyte barrier, because the pointers for those addresses would not fit into the DMA registers of the device. This problem is solved by having the OS take the memory restrictions of the device into account when generating requests to drivers for DMA, or by using an input–output memory management unit (IOMMU).
Current 64-bit architectures
64-bit architectures for which processors are being manufactured include:
The 64-bit extension created by Advanced Micro Devices (AMD) to Intel's x86 architecture (later licensed by Intel); commonly termed x86-64, AMD64, or x64:
AMD's AMD64 extensions (used in Athlon 64, Opteron, Sempron, Turion 64, Phenom, Athlon II, Phenom II, APU, FX, Ryzen, and Epyc processors)
Intel's Intel 64 extensions, used in Intel Core 2/i3/i5/i7/i9, some Atom, and newer Celeron, Pentium, and Xeon processors
Intel's K1OM architecture, a variant of Intel 64 with no CMOV, MMX, and SSE instructions, used in first-generation Xeon Phi (Knights Corner) coprocessors, binary incompatible with x86-64 programs
VIA Technologies' 64-bit extensions, used in the VIA Nano processors
IBM's PowerPC/Power ISA: IBM's POWER4, POWER5, POWER6, POWER7, POWER8, POWER9, and IBM A2 processors
SPARC V9 architecture:
Oracle's M8 and S7 processors
Fujitsu's SPARC64 XII and SPARC64 XIfx processors
IBM's z/Architecture, a 64-bit version of the ESA/390 architecture, used in IBM's eServer zSeries and System z mainframes:
IBM z13 and z14
Hitachi AP8000E
HP–Intel's IA-64 architecture: Intel's Itanium processors (discontinued)
MIPS Technologies' MIPS64 architecture
ARM Holdings' AArch64 architecture
Elbrus architecture: Elbrus-8S
NEC SX architecture: SX-Aurora TSUBASA
RISC-V
Most 64-bit architectures that are derived from a 32-bit architecture can execute code written for the 32-bit version natively, with no performance penalty. This kind of support is commonly called bi-arch support or, more generally, multi-arch support.
See also
Computer memory
References
External links
64-bit Transition Guide, Mac Developer Library
Lessons on development of 64-bit C/C++ applications
64-Bit Programming Models: Why LP64?
AMD64 (EM64T) architecture
64-bit computers
Data unit
44009
https://en.wikipedia.org/wiki/MiniDisc
MiniDisc
MiniDisc (MD) is an erasable magneto-optical disc-based data storage format offering a capacity of 60, 74, and later, 80 minutes of digitized audio. Sony announced the MiniDisc in September 1992 and released it in November of that year for sale in Japan and in December in Europe, North America, and other countries. The music format was based on ATRAC audio data compression, Sony's own proprietary compression codec. Its successor, Hi-MD, would later introduce the option of linear PCM digital recording to offer audio quality comparable to that of a compact disc. MiniDiscs were very popular in Japan and found moderate success in Europe. By March 2011 Sony had sold 22 million MD players. Sony has ceased development of MD devices, with the last of the players sold by March 2013.
Market history
In 1983, just a year after the introduction of the Compact Disc, Kees Schouhamer Immink and Joseph Braat presented the first experiments with erasable magneto-optical Compact Discs during the 73rd AES Convention in Eindhoven. It took almost 10 years, however, before their idea was commercialized. Sony's MiniDisc was one of two rival digital systems, both introduced in 1992, that were targeted as replacements for the Philips Compact Cassette analog audio tape system: the other was the Digital Compact Cassette (DCC), created by Philips and Matsushita (now Panasonic). Sony had originally intended the Digital Audio Tape (DAT) to be the dominant home digital audio recording format, replacing the analog cassette. Because of technical delays, the DAT was not launched until 1989, and by then the U.S. dollar had fallen so far against the yen that the introductory DAT machine Sony had intended to market for about $400 in the late 1980s now had to retail for $800 or even $1000 to break even, putting it out of reach of most users. Relegating the DAT to professional use, Sony set to work to come up with a simpler, more economical digital home format. By the time Sony came up with the MiniDisc in late 1992, Philips had introduced a competing system, DCC, on a magnetic tape cassette. This created marketing confusion very similar to the Betamax versus VHS battle of the late 1970s and early 1980s. Sony attempted to license MD technology to other manufacturers, with JVC, Sharp, Pioneer, Panasonic and others all producing their own MD systems. However, non-Sony machines were not widely available in North America, and companies such as Technics and Radio Shack tended to promote DCC instead.
Despite having a loyal customer base largely of musicians and audio enthusiasts, the MiniDisc met with only limited success in the United States. It was relatively popular in Japan and the United Kingdom during the 1990s, but did not enjoy comparable sales in other world markets. Since then, recordable CDs, flash memory and HDD and solid-state-based digital audio players such as iPods have become increasingly popular as playback devices.
The initial low uptake of MiniDisc was attributed to the small number of pre-recorded albums available on MD, as relatively few record labels embraced the format. The initial high cost of equipment and blank media was also a factor. Mains-powered hi-fi MiniDisc player/recorders never got into the lower price ranges, and most consumers had to connect a portable machine to the hi-fi in order to record. This inconvenience contrasted with the earlier common use of cassette decks as a standard part of an ordinary hi-fi set-up.
MiniDisc technology was faced with new competition from the recordable compact disc (CD-R) when it became more affordable to consumers beginning around 1996. Initially, Sony believed that it would take around a decade for CD-R prices to become affordable – the cost of a typical blank CD-R disc was around $12 in 1994 – but CD-R prices fell much more rapidly than envisioned, to the point where CD-R blanks sank below $1 per disc by the late 1990s, compared to at least $2 for the cheapest 80-minute MiniDisc blanks.
The biggest competition for MiniDisc came from the emergence of MP3 players. With the Diamond Rio player in 1998 and the Apple iPod in 2001, the mass market began to eschew physical media in favor of more convenient file-based systems.
By 2007, because of the waning popularity of the format and the increasing popularity of solid-state MP3 players, Sony was producing only one model, the Hi-MD MZ-RH1, also available as the MZ-M200 in North America, packaged with a Sony microphone and limited Apple Macintosh software support. The introduction of the MZ-RH1 allowed users to freely move uncompressed digital recordings back and forth from the MiniDisc to a computer without the copyright protection limitations previously imposed upon the NetMD series. This allowed the MiniDisc to better compete with HD recorders and MP3 players. However, most professional users, such as broadcasters and news reporters, had already abandoned MiniDisc in favor of solid-state recorders, because of their long recording times, open digital content sharing, high-quality digital recording capabilities and reliable, lightweight design.
On 7 July 2011, Sony announced that it would no longer ship MiniDisc Walkman products as of September 2011, effectively killing the format. On 1 February 2013, Sony issued a press release on the Nikkei stock exchange that it would cease shipment of all MD devices, with the last of the players to be sold in March 2013. However, it would continue to sell blank discs and offer repair services. Other manufacturers continued to release their own MiniDisc players long after Sony stopped, with TEAC and its TASCAM brand producing new decks until 2020, when their consumer and professional models, the TEAC MD-70CD and TASCAM MD-CD1MKIII, ended production.
Design
Physical characteristics
The disc is permanently housed in a cartridge (68×72×5 mm) with a sliding door, similar to the casing of a 3.5" floppy disk. This shutter is opened automatically by a mechanism upon insertion into a drive. The audio discs can either be recordable (blank) or premastered. Recordable MiniDiscs use a magneto-optical system to write data: a laser heats one side of the disc to its Curie point, making the material in the disc susceptible to a magnetic field; a magnetic head on the other side of the disc alters the polarity of the heated area, recording the digital data onto the disc. Playback is accomplished with the laser alone: taking advantage of the magneto-optic Kerr effect, the player senses the polarization of the reflected light and thus interprets a 1 or a 0. Recordable MDs can be rerecorded repeatedly; Sony claims up to one million times. By May 2005, there were 60-minute, 74-minute and 80-minute discs available. 60-minute blanks, which were widely available in the early years of the format's introduction, were phased out and are now rarely seen.
MiniDiscs use a mastering process and optical playback system that is very similar to CDs. The recorded signal of the premastered pits and of the recordable MD are also very similar.
Eight-to-Fourteen Modulation (EFM) and a modification of CD's CIRC code, called Advanced Cross Interleaved Reed-Solomon Code (ACIRC), are employed.
Differences from cassette and CDs
MiniDiscs use rewritable magneto-optical storage to store the data. Unlike the DCC or the analog Compact Cassette, the disc is a random-access medium, making seek time very fast. MiniDiscs can be edited very quickly even on portable machines. Tracks can be split, combined, moved or deleted with ease either on the player, or uploaded to a PC with Sony's SonicStage V4.3 software and edited there. Transferring data from an MD unit to a non-Windows machine can only be done in real time, preferably via optical I/O, by connecting the audio out port of the MD to an available audio in port of the computer. With the release of the Hi-MD format, Sony began to release Macintosh-compatible software. However, the Mac-compatible software was still not compatible with legacy MD formats (SP, LP2, LP4). This means that using an MD recorded on a legacy unit or in a legacy format still requires a Windows machine for non-real-time transfers.
At the beginning of the disc there is a table of contents (TOC, also known as the System File area of the disc), which stores the start positions of the various tracks, as well as meta information (title, artist) about them and free blocks. Unlike the conventional cassette, a recorded song does not need to be stored as one piece on the disc; it can be stored in several fragments, similar to a hard drive. Early MiniDisc equipment had a fragment granularity of 4 seconds of audio. Fragments smaller than the granularity are not kept track of, which may lead to the usable capacity of a disc actually shrinking. No means of defragmenting the disc is provided in consumer-grade equipment.
All consumer-grade MiniDisc devices feature a copy-protection scheme known as Serial Copy Management System. An unprotected disc or song can be copied without limit, but the copies can no longer be digitally copied. However, as a concession to this, the most recent Hi-MD players can upload to PC a digitally recorded file which can subsequently be resaved as a WAV (PCM) file and thus replicated.
Audio data compression
The digitally encoded audio signal on a MiniDisc has traditionally been data-compressed using the ATRAC format (Adaptive TRansform Acoustic Coding). This is in fact a 'psychoacoustic' data reduction system which omits some of the musical content. Sony claims that the omitted content is inaudible regardless. Some original sounds have been known to defeat ATRAC, which typically introduces a crackle or whistle onto the data stream. ATRAC was devised for MiniDisc so that the smaller disc could hold the same amount of audio as a CD. ATRAC reduces the 1.4 Mbit/s of a CD to a 292 kbit/s data stream, roughly a 5:1 reduction. ATRAC was also used on nearly all flash memory Walkman devices until the 8 series.
Sony's ATRAC codec differs from uncompressed PCM in that it is a psychoacoustic lossy audio data reduction scheme and is such that the recorded signal does not require decompression on replay. Although it is intended that the reproduced signal may sound nearly identical to the original as far as the listener is concerned, it differs sufficiently that listening on a high-quality audio system will betray the difference – other true compression schemes generally share this characteristic to a greater or lesser degree.
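The "roughly 5:1" figure follows directly from CD audio's parameters; a small check in C (the exact ratio works out to about 4.8:1):

#include <stdio.h>

int main(void) {
    /* CD audio: 44,100 samples/s x 16 bits x 2 channels = 1,411,200 bit/s. */
    const double cd_bps = 44100.0 * 16.0 * 2.0;
    const double atrac_bps = 292000.0;             /* 292 kbit/s ATRAC stream */
    printf("reduction: %.2f:1\n", cd_bps / atrac_bps); /* prints 4.83:1 */
    return 0;
}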
There have been four versions of the ATRAC data reduction system, each claimed (by Sony) to more accurately reflect the original audio. Early-version players are guaranteed to play later-version ATRAC audio because there is no processing required for replay. Version 1 could only be copied on consumer equipment three or four times before artifacts became objectionable, as the ATRAC on the recording machine attempts to data-reduce the already reduced signal. By version 4, the potential number of generations of copy had increased to around 15 to 20, depending on audio content.
The latest versions of Sony's ATRAC are ATRAC3 and ATRAC3plus, both of which are true lossy compression schemes and both require decompression on replay. Original ATRAC3 at 132 kbit/s (also known as ATRAC-LP2 mode) is the format that used to be used by Sony's now-defunct Connect audio download store. ATRAC3plus was not used in order to retain backwards compatibility with earlier NetMD players. In the MiniDisc's last progression, Hi-MD, uncompressed CD-quality linear PCM audio recording and playback is offered, placing Hi-MD on a par with CD-quality audio. Hi-MD also supports both ATRAC3 and ATRAC3plus in varying bitrates, but not the original ATRAC.
Anti-skip
MiniDisc has a feature that prevents disc skipping under all but the most extreme conditions. Older CD players had been a source of annoyance to users, as they were prone to mis-tracking from vibration and shock. MiniDisc solved this problem by reading the data into a memory buffer at a higher speed than was required, before it is read out to the digital-to-analog converter at the standard rate required by the format. The size of the buffer varies by model. If the MiniDisc player were bumped, playback could continue unimpeded while the laser repositioned itself to continue reading data from the disc. This feature also allows the player to stop the spindle motor for long periods, increasing battery life. A buffer of at least six seconds is required on all MiniDisc players, be they portable or stationary full-sized units. This is needed to ensure uninterrupted playback in the presence of disc fragmentation.
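A minimal sketch of this buffering scheme, assuming a six-second buffer at the 292 kbit/s SP rate (the structure and names are invented for illustration; real players implement this in dedicated hardware):

#include <stddef.h>

enum { BUF_SECONDS = 6, BYTES_PER_SECOND = 292000 / 8 };

typedef struct {
    unsigned char data[BUF_SECONDS * BYTES_PER_SECOND];
    size_t head, tail, fill;  /* write index, read index, bytes buffered */
} SkipBuffer;

/* The pickup writes in bursts, faster than real time, whenever tracking
   is good; returns how many bytes were accepted. */
static size_t buffer_write(SkipBuffer *b, const unsigned char *src, size_t n) {
    size_t space = sizeof b->data - b->fill;
    if (n > space)
        n = space;
    for (size_t i = 0; i < n; i++) {
        b->data[b->head] = src[i];
        b->head = (b->head + 1) % sizeof b->data;
    }
    b->fill += n;
    return n;
}

/* The DAC drains at exactly real-time rate; a shock becomes audible only
   if the buffer empties before the laser re-locks onto the track. */
static size_t buffer_read(SkipBuffer *b, unsigned char *dst, size_t n) {
    if (n > b->fill)
        n = b->fill;
    for (size_t i = 0; i < n; i++) {
        dst[i] = b->data[b->tail];
        b->tail = (b->tail + 1) % sizeof b->data;
    }
    b->fill -= n;
    return n;
}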
Operation
The data structure and operation of a MiniDisc is similar to that of a computer's hard disk drive. The bulk of the disc contains data pertaining to the music itself, and a small section contains the table of contents (TOC), providing the playback device with vital information about the number and location of tracks on the disc. Tracks and discs can be named. Tracks may easily be added, erased, combined and divided, and their preferred order of playback modified. Erased tracks are not actually erased at the time, but are marked as such. When a disc becomes full, the recorder can simply slot track data into sections where erased tracks reside. This can lead to some fragmentation, but unless many erasures and replacements are performed, the only likely problem is excessive searching, reducing battery life.
The data structure of the MiniDisc, where music is recorded in a single stream of bytes while the TOC contains pointers to track positions, allows for gapless playback of music, something which the majority of competing portable players, including most MP3 players, fail to implement properly. Notable exceptions are CD players, as well as all recent iPods.
At the end of recording, after the "Stop" button has been pressed, the MiniDisc may continue to write music data for a few seconds from its memory buffers. During this time, it may display a message ("Data Save", on at least some models) and the case will not open. After the audio data is written out, the final step is to write the TOC track denoting the start and endpoints of the recorded data. Sony notes in the manual that one should not interrupt the power or expose the unit to undue physical shock during this period.
Copy protection
All MiniDisc recorders used the SCMS copy protection system, which uses two bits in the S/PDIF digital audio stream and on disc to differentiate between "protected" vs. "unprotected" audio, and between "original" vs. "copy":
Recording digitally from a source marked "protected" and "original" (produced by a prerecorded MD or an MD that recorded an analogue input) was allowed, but the recorder would change the "original" bit to the "copy" state on the disc to prevent further copying of the copy. A CD imported via a digital connection does not have the SCMS bits (as the format predates it), but the recording MD recorder treats any signal where the SCMS bits are missing as protected and original. The MD copy, therefore, cannot be further digitally copied.
Recording digitally from a source marked "protected" and "copy" was not allowed: an error message would be shown on the display.
Recording digitally from a source marked "unprotected" was also allowed; the "original/copy" marker was ignored and left unchanged.
Recording from an analogue source resulted in a disc marked "protected" and "original", allowing one further copy to be made (this contrasts with the SCMS on the Digital Compact Cassette, where analogue recording was marked as "unprotected").
Of those recorder/players that could be connected to a PC via a USB lead, although it was possible to transfer audio from the PC to the MiniDisc recorder, for many years it was not possible to transfer audio the other way. This restriction existed in both the SonicStage software and in the MiniDisc player itself. SonicStage V3.4 was the first version of the software where this restriction was removed, but it still required a MiniDisc recorder/player that also had the restriction removed. The Hi-MD model MZ-RH1 was the only such player available.
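The SCMS rules above amount to a small decision table. The sketch below encodes them, modelling the two bits as booleans; the names are hypothetical, not taken from any real MD firmware, and a source with missing bits would first be normalized to protected-and-original, as described:

#include <stdbool.h>

typedef struct {
    bool is_protected;  /* "protected" vs "unprotected" */
    bool is_original;   /* "original" vs "copy"         */
} ScmsBits;

/* Decides whether a digital copy may proceed; on success, fills in the
   bits to stamp on the newly recorded disc. */
static bool scms_allow_copy(ScmsBits src, ScmsBits *dst) {
    if (!src.is_protected) {
        *dst = src;               /* unprotected: copy freely, bits unchanged */
        return true;
    }
    if (src.is_original) {
        dst->is_protected = true;
        dst->is_original = false; /* one more generation allowed; mark as copy */
        return true;
    }
    return false;                 /* protected copy: recording refused */
}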
Format extensions
MD Data
MD Data, a version for storing computer data, was announced by Sony in 1993 but never gained significant ground. Its media were incompatible with standard audio MiniDiscs, which has been cited as one of the main reasons behind the format's failure. MD Data could not write to audio MDs, only the considerably more expensive data blanks. It did see some success in a small number of multi-track recorders such as Sony's MDM-X4, Tascam's 564 (which could also record using standard MD-Audio discs, albeit only two tracks), and Yamaha's MD8, MD4, and MD4S.
MD Data2
In 1997, MD Data2 blanks were introduced with a capacity of 650 MB. They were only implemented in Sony's short-lived MD-based camcorder, the DCM-M1.
MDLP
In 2000, Sony announced MDLP (MiniDisc Long Play), which added new recording modes based on a new codec called ATRAC3. In addition to the standard, high-quality mode, now called SP, MDLP adds LP2 mode, which allows double the recording time – 160 minutes on an 80-minute disc – of good-quality stereo sound, and LP4, which allows four times more recording time – 320 minutes on an 80-minute disc – of medium-quality stereo sound. The bitrate of the standard SP mode is 292 kbit/s, and it uses separate stereo coding with discrete left and right channels. LP2 mode uses a bitrate of 132 kbit/s and also uses separate stereo coding. The last mode, LP4, has a bitrate of 66 kbit/s and uses joint stereo coding. The sound quality is noticeably poorer than the first two modes, but is sufficient for many uses. Tracks recorded in LP2 or LP4 mode play back as silence on non-MDLP players.
NetMD
NetMD recorders allow music files to be transferred from a computer to a recorder (but not in the other direction) over a USB connection. In LP4 mode, speeds of up to 32× real-time are possible, and three Sony NetMD recorders (MZ-N10, MZ-N910, and MZ-N920) are capable of speeds up to 64× real-time. NetMD recorders all support MDLP. When transferring music in SP mode using NetMD with SonicStage, what is transferred is in fact padded LP2; that is to say, the quality of the music is that of LP2, but it is recorded as SP. NetMD is a proprietary protocol, and for a long time it was impossible to use it without proprietary software such as SonicStage, meaning it could not be used with non-Windows machines. A free *nix-based implementation, libnetmd, has since been developed. In 2019, a coder named Stefano Brilli compiled the linux-minidisc CLI into a web browser-based application, allowing users to transfer music via USB to modern devices. libnetmd allows the user to upload SP files in full quality.
Hi-MD
Hi-MD is the further development of the MiniDisc format. Hi-MD media will not play on non-Hi-MD equipment, including NetMD players. The Hi-MD format, introduced in 2004, marked a return to the data storage arena with its 1 GB discs and ability to act as a USB drive. Hi-MD units allow the recording and playback of audio and data on the same disc, and are compatible (both audio and data) with standard MiniDisc media – an 80-minute MiniDisc blank could be formatted to store 305 MB of data.
Recording and transfer modes
Some modes are available for recordings made on the player, while others are available only for music transferred from a PC. Capacities are official Sony figures; real-world figures are usually slightly higher. Native MP3 support was added in second-generation Hi-MD players in the spring of 2005. SonicStage version 3.4, released in February 2006, introduced ripping CDs in bitrates of 320 and 352 kbit/s and added track transfer in ATRAC 192 kbit/s to Hi-MD devices.
See also
Capacitance Electronic Disc (SelectaVision) – an RCA-developed format that uses a disc inside a sleeve, like MD, created in 1964
Fidelipac
Universal Media Disc (UMD) – a similar Sony format, but read-only
Mini CD
MiniDVD
Mini Blu-ray
References
External links
List of artists with a Minidisc release – Discogs
Audiovisual introductions in 1992
Audio storage
Consumer electronics
Digital audio
History of radio
Sony products
Discontinued media formats
Japanese inventions
39214845
https://en.wikipedia.org/wiki/Therion%20%28software%29
Therion (software)
Therion is free and open-source cave surveying software designed to process survey data, generate maps and 3D models of caves, and archive the data describing the cave and the history of exploration. Therion was developed by the Slovak cavers Martin Budaj and Stacho Mudrak; the software and its documentation are available in English. It runs on a wide variety of platforms including Linux, Windows and Mac OS X. Therion is available as part of the standard Debian and Ubuntu Linux distributions. It is free software, released under the terms of the GNU GPL, with source code available. It does not require any other commercial software to run.
The format of all files is human-readable plain text (excluding 3D models), which semantically describes the cave; the files are compiled by the program into various output forms such as 2D PDF or SVG maps, or 3D models. Other files like map overlays, terrain models and pictures can be incorporated into the output. A graphical editor is provided to help with the drawing process, and a 3D viewer ('loch') for viewing the models. Survex is used for the centreline error distribution if installed.
The separation of drawing/semantic data entry from output rendering makes the program complex to learn to use, but also capable of dealing with cave systems still being explored, where new finds and surveys need the drawing to morph to fit. It also allows a survey to be rendered with different national cave-symbol sets.
The (UK-based) Cave Surveying Group has been using Therion along with PocketTopo and DistoXs to train cavers and improve the quality of surveying through the real-time measurement that paperless surveying makes possible. Therion is used in several large projects for the documentation of cave systems. It is listed among 10 of the Best Free Linux Earth Science Software. It has been used in a number of scientific projects. Not everyone finds the results satisfying: "Beginning in 2003, the Mulu Caves Project attempted to use Therion on several cave surveys. After many attempts over several years by many different cave surveyors, one of whom was closely affiliated with the software itself, no aesthetically pleasing results were produced."
A feature of Therion unmatched among other cave-surveying tools is the straightforward creation of 3D presentations for web pages. A 3D model can be exported from Therion in its .lox format, opened in Therion's 3D viewer Loch, and exported as VTK data. The ParaView software is able to open such data and export it in WebGL format.
To draw its maps, the French Speleo-Vulcain group uses Visual Topo for simple systems. However, because of the difficulty of building a rigorous synthesis and of updating the survey of the complex Jean-Bernard System, the group switched to the open-source Therion. Therion was analysed and used in the thesis of Eliška Rákocy. Two articles in the Annual Report of the Cave Administration of the Czech Republic 2010 describe how Therion was used to document the Javoříčko Caves show cave.
Described as "State of the Art Cave-Drawing software", Therion was reviewed in Compass, the Cave Surveying Journal of The British Cave Research Association. The review covered installation, use and development of the software. Since that article, written in 2004, Therion has continued to be developed and is now considered to be the most capable cave drawing software available.
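As an illustration of the plain-text format described above, a minimal centreline survey file might look like the following sketch (the survey name, station names and measurements are invented for this example; real projects add map definitions and scrap drawings):

survey demo -title "Demo Cave"
  centreline
    date 2023.06.01
    units length meters
    data normal from to length compass clino
    0 1 12.40 275 -8
    1 2  6.85 310  2
  endcentreline
endsurvey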
References
External links
Homepage of Therion (software)
French wiki page about Therion (software)
A review from Linuxlinks.com
Paperless Surveying Disto X tool and PocketTopo software for in-cave drawing maps. Data can be exported to Therion - check the User Guide.
TopoDroid free software for the Android platform for in-cave surveying. Surveying data and maps are in Therion format.
Auriga free software for Palm OS for in-cave surveying with bidirectional data exchange provided with Compass, Survex, Therion, GH Topo/Toporobot, VisualTopo and Walls.
BCRA Cave Surveying Group, Compass Points 33. Short review of Therion and other survey software packages (German).
Joey Bernard, "Spelunking with Linux", Linux Journal, March 2014, pages 23–28. Belltown Media, Inc., Houston, Texas.
Cave surveying
Geology software
Software that uses wxWidgets
49049411
https://en.wikipedia.org/wiki/Ghost%20Push
Ghost Push
Ghost Push is a family of malware that infects the Android OS by automatically gaining root access and downloading malicious and unwanted software. The malware was discovered in September 2015 by the security research lab at Cheetah Mobile, which subsequently developed diagnostic software to determine whether a device has been compromised. As of September 2015, twenty variants were in circulation. Later versions employed routines which made them harder to detect and remove.
The malware hogs all the system resources, making the phone slow, draining the battery and consuming cellular data. Advertisements continually appear, either as full- or partial-screen ads or in the status bar. The applications installed by the malware appear to be difficult to remove, are impervious to anti-virus software, and even survive a factory reset of the device. Infection typically comes via downloading applications from third-party app stores, where at least thirty-nine applications have been identified as carriers. At its peak, the Ghost Push virus infected more than 600,000 devices daily, with 50% of infections originating in India and with Indonesia and the Philippines ranking second and third.
References
Android (operating system) malware
35760578
https://en.wikipedia.org/wiki/Frank%20Heckl
Frank Heckl
Frank Heckl (born c. 1950) is an American former competition swimmer, seven-time Pan American Games medalist, and former world record-holder in two relay events. At the 1971 Pan American Games in Cali, Colombia, Heckl won a remarkable seven medals, six of them gold and one silver. His gold medal performances included three individual races: the 100- and 200-meter freestyle, and 100-meter butterfly (56.92); and three relay races: the 4x100-meter freestyle (3:32.15), 4x200-meter freestyle (7:45.82), and 4x100-meter medley (3:56.08). He also won a silver medal in the 200-meter medley (2:12.11). Heckl attended the University of Southern California (USC), where he swam for the USC Trojans swimming and diving team from 1969 to 1972. As Trojan swimmer, he was a member of seven NCAA championship relay teams. Heckl subsequently graduated from USC with his bachelor's and medical degrees, and has served as a team doctor for the U.S. national team, including the 1984 Olympics. As of 2015, he works in New Mexico as an orthopaedic surgeon specializing in sports medicine and arthroscopy. See also World record progression 4 × 100 metres freestyle relay World record progression 4 × 200 metres freestyle relay References 1950 births Living people American male butterfly swimmers American male freestyle swimmers World record setters in swimming Pan American Games gold medalists for the United States Pan American Games silver medalists for the United States Swimmers at the 1971 Pan American Games USC Trojans men's swimmers Pan American Games medalists in swimming Universiade medalists in swimming Universiade gold medalists for the United States Medalists at the 1970 Summer Universiade Medalists at the 1971 Pan American Games
351284
https://en.wikipedia.org/wiki/Dead%20man%27s%20switch
Dead man's switch
A dead man's switch (see alternative names) is a switch that is designed to be activated or deactivated if the human operator becomes incapacitated, such as through death, loss of consciousness, or being bodily removed from control. Originally applied to switches on a vehicle or machine, it has since come to be used to describe other, intangible uses, as in computer software.
These switches are usually used as a form of fail-safe, where they stop a machine with no operator from a potentially dangerous action, or incapacitate a device as a result of accident, malfunction, or misuse. They are common in such applications in locomotives, aircraft refuelling, freight elevators, lawn mowers, tractors, personal watercraft, outboard motors, chainsaws, snowblowers, treadmills, snowmobiles, amusement rides, and many medical imaging devices. On some machines, these switches merely bring the machines back to a safe state, such as reducing the throttle to idle or applying brakes while leaving the machines still running and ready to resume normal operation once control is reestablished.
Dead man's switches are not always used to stop machines and prevent harm; such switches can also be used as a fail-deadly, since a spring-operated switch can be used to complete a circuit, not only to break it. This allows a dead man's switch to be used to activate a harmful device, such as a bomb or IED. The switch that arms the device is only kept in its "off" position by continued pressure from the user's hand. The device will activate when the switch is released, so that if the user is knocked out or killed while holding the switch, the bomb will detonate. The Special Weapons Emergency Separation System is an application of this concept in the field of nuclear weapons. A more extreme version is Russia's Dead Hand program, which allows for automatic launch of nuclear missiles should a number of conditions be met, even if all Russian leadership were to be killed. A similar concept is the handwritten letters of last resort from the Prime Minister of the United Kingdom to the commanding officers of the four British ballistic missile submarines. They contain orders on what action to take if the British government is destroyed in a nuclear attack. After a prime minister leaves office the letters are destroyed unopened. This concept has also been employed with computer data, where sensitive information has been previously encrypted and released to the public, and the "switch" is the release of the decryption key, as with WikiLeaks' "insurance files".
Background
Interest in dead man's controls increased with the introduction of electric trams (streetcars in North America) and especially electrified rapid transit trains. The first widespread use came with the introduction of the mass-produced Birney One-Man Safety Car, a tram, though dead-man equipment was fairly rare on US streetcars until the successful PCC streetcar, which had a left-foot-operated dead man's pedal in conjunction with the right-foot-operated brake and power pedals. This layout has continued to be used on some modern trams around the world.
In conventional steam railroad trains, there was always a second person with the engineer, the fireman, who could almost always bring the train to a stop if necessary. For many decades two people were assigned to electric and diesel locomotives as well, even though a single person could theoretically operate them. With modern urban and suburban railway systems, the driver is typically alone in an enclosed cab.
Automatic devices were already being deployed on newer installations of the New York City Subway system in the early 20th century. The Malbone Street Wreck on the Brooklyn Rapid Transit system in 1918, though not caused by driver incapacitation, spurred the universal deployment of such devices to halt trains in the event of the operator's disability. According to a Manhattan borough historian, there have been at least three instances where the dead man's switch was used successfully – in 1927, 1940, and 2010.

The status and operation of both vigilance and dead man's switches may be recorded on the train's event recorder (commonly known as a black box).

Types

Handle

Many dead man's switches are mounted in the control handle of a vehicle or machine and engage if the operator ever loses their grip.

Vehicles

Handle switches are still used on modern trams and trains. Pneumatically or electrically linked dead man's controls involve relatively simple modifications of the controller handle, the device that regulates traction power. If pressure is not maintained on the controller, the train's emergency brakes are applied. Typically, the controller handle is a horizontal bar, rotated to apply the required power for the train. Attached to the bottom of the handle is a rod that, when pushed down, contacts a solenoid or switch inside the control housing. The handle springs up if pressure is removed, releasing the rod's contact with the internal switch, instantly cutting power and applying the brakes. Though there are ways that this type of dead man's control could conceivably fail, in practice it has proven highly reliable.

On some earlier equipment, pressure was maintained not on the entire controller but on a large button protruding from the controller handle. This button also had to be pressed continuously, typically with the palm of the hand, so that the button sat flush with the top of the handle. Another method, used particularly with some lever-type controllers that are rotated rather than pushed or pulled, requires that the handle on the lever be turned through 90 degrees and held in that position while the train is in operation. Some dead man's controls only engage in the mid position and not at full pressure (see pilot valve). In modern New York City Subway trains, for example, the dead man's switch is incorporated into the train's speed control. On the R142A car, the train operator must continually hold the lever in place in order for the train to move.

An example of a passenger vehicle using a dead man's switch is Tesla electric vehicles. When the driver has engaged the semi-autonomous driving system "Autopilot", they must keep their hands on the steering wheel. If the driver takes their hands off the steering wheel for more than 30 seconds, a loud alarm sounds inside the car to wake a sleeping driver; if the driver leaves their hands off for more than a minute, the car engages its hazard warning lights and brings itself to a stop. This is done because the Autopilot system is not capable of full self-driving and requires that the driver be able to take over operation of the vehicle without warning, should the car encounter a problem it does not know how to solve. This system uses a torque sensor on the steering wheel of the vehicle: when a driver is simply holding the wheel, they are still applying a small amount of torque to the wheel, confirming for the car that the driver is attentive; if the driver turns the wheel with more force, all vehicle controls are handed back to the driver immediately.
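The escalation described above can be modeled as a small state machine driven by the time since torque was last detected. The following is a minimal sketch of that idea, not Tesla's actual implementation; the thresholds, function names, and actuator hooks are illustrative assumptions.

```python
import time

# Illustrative thresholds matching the behavior described above (assumed values).
ALARM_AFTER_S = 30          # sound a loud alarm after 30 s hands-off
STOP_AFTER_S = 60           # hazards on and controlled stop after 60 s hands-off
TORQUE_THRESHOLD_NM = 0.3   # hypothetical torque level indicating hands on the wheel

def monitor_driver(read_torque_nm, sound_alarm, engage_hazards_and_stop):
    """Poll the steering torque sensor and escalate if the driver is inattentive.

    All three callables are hypothetical hooks into the vehicle's systems.
    """
    last_hands_on = time.monotonic()
    while True:
        if abs(read_torque_nm()) >= TORQUE_THRESHOLD_NM:
            last_hands_on = time.monotonic()   # driver detected: reset the timer
        hands_off_for = time.monotonic() - last_hands_on
        if hands_off_for >= STOP_AFTER_S:
            engage_hazards_and_stop()          # fail-safe: bring the car to a stop
            return
        if hands_off_for >= ALARM_AFTER_S:
            sound_alarm()                      # first stage: audible warning
        time.sleep(0.1)                        # poll at 10 Hz
```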
Machinery

Handle-mounted dead man's switches are also used on many hand-held tools and lawn equipment, typically those that rotate or have blades, such as saws, drills and lawn mowers. Saws, for example, incorporate a squeeze throttle trigger into the handle. If the user loses grip of the saw, the springs in the throttle trigger push it back out to the off or idle setting, stopping the blade from spinning. Some tools go further and have a trigger guard built into the handle, similar to firearm safeties. Only when the user first presses in the trigger guard does it release its lock on the trigger and allow the trigger to be pressed. Typically, trigger guards can only be pressed in while the user has a firm grip on the handle.

Every walk-behind mower sold in the US since 1982 has a dead man's switch called an "operator-presence control", which by law must stop the blades within three seconds after the user lets go of the controls. Attached across the handle is a mechanical lever connected by a flexible cable to the kill switch on the engine. While mowing, the operator must continuously squeeze the lever against the handle. If the operator ever loses grip of the handle, the engine dies, stopping the blades from spinning and, if equipped, any drive wheels from turning. This switch configuration also acts as the main kill switch for the engine: when the operator wants to stop the engine, they intentionally release the dead man's switch.

Touch sensor

On some vehicles, including the diesel-electric railway locomotives in Canada, and on Nottingham Express Transit vehicles, the tram's speed controller is fitted with a capacitive touch sensor to detect the driver's hand. If the hand is removed for more than a short period of time, the track brakes are activated. Gloves, if worn, have to be fingerless for the touch sensor to operate. A backup dead man's switch button is provided on the side of the controller for use if the touch sensor fails or if it is too cold to remove gloves.

Pedal

A pedal can be used instead of a handle. While some pedal switches must simply be held down in order for the machine to function (a system often found on amusement rides, where the operator is likely to remain standing for a lengthy period while the ride is in motion), this method has some shortcomings. In the Waterfall train disaster, south of Sydney, Australia, in 2003, it appeared that the driver slumped in his seat, keeping the pedal depressed, when he died suddenly of a heart attack. This also happened to a Canadian National Railway Railliner passenger train in the 1970s, but the problem was noticed by other crew members and the train was safely halted. There are some solutions to this issue that are now used in modern pedal systems. The pedal can have a vigilance function built in, where drivers must release and re-press the pedal in response to an audible signal. This prevents it from being defeated by the above circumstances and is a standard feature on most British DSD systems. Some types of locomotive are fitted with a three-position pedal, which must normally be kept in the mid position.
This lessens the likelihood of accidentally defeating it, although it may still be possible to do so deliberately. Adding a vigilance function to this type of pedal results in a very safe system. However, isolation devices are still provided in case of equipment failure, so a deliberate override is still possible. These isolation devices usually have tamper-evident seals fitted for that reason.

Seat switches

The dead man's switch can also be located beneath the seat of a vehicle or machine, engaging if the operator is not in the seat holding the switch down. On modern tractors, the switch cuts the engine while the transmission is engaged or the power take-off is spinning. On riding lawn mowers, the switch is often stricter: it cuts the engine even if the mower is parked and the blades are not spinning. Seat switches can also keep small children from even starting the vehicle, since they would not weigh enough to hold down a switch adjusted to an adolescent's or adult's weight.

Key switches

On recreational vehicles such as boats, personal watercraft and snowmobiles, and on the control panel of many amusement rides, the user or operator has a cord or lanyard attached to their wrist or life jacket, which is in turn attached to a key mounted on the dead man's switch. Should the rider fall off the vehicle or the operator move away from the controls, the cord is pulled out of the dead man's switch, turning off the engine or setting the throttle to "idle". On powered boats in particular, this cord is often called a "kill cord" (on powered boats, wearing it around the wrist is not recommended, as it may slip off without cutting the engine). If the helmsman goes overboard or is forced away from the controls, the engine cuts out. This prevents the boat from continuing under power but out of control, risking injury to anyone in or out of the water, including passengers who may have fallen out or may still be in the boat, and collision damage to any property in the path of the out-of-control boat; this in turn prevents or limits damage to the boat itself from striking other objects. It is a common and dangerous practice to defeat the kill cord, for convenience, by fixing it to part of the boat instead of the operator. This has been the cause of accidents, some of which were fatal or caused limb loss.

Some luggage carts at airports and exercise treadmills have this feature. In the case of treadmills, the dead man's switch usually consists of an external magnet attached to a cord that clips to the user. If the user falls or walks away without turning off the treadmill, the switch cuts power to the treadmill belt.

Altimeter switches

Strategic Air Command developed a dead man's switch for its nuclear bombers, known as the Special Weapons Emergency Separation System (SWESS), that ensured the nuclear payload detonated in the event of the crew becoming incapacitated through enemy action. The purpose of this device, unlike the other examples mentioned above, was fail-deadly rather than fail-safe. Once armed, the system would detonate the onboard nuclear weapons if the aircraft dropped below a predetermined altitude, typically due to being shot down.

Vigilance control

The main safety failing of the basic dead man's system is the possibility of the operating device being held permanently in position, either deliberately or accidentally. Vigilance control was developed to detect this condition by requiring that the dead man's device be released momentarily and re-applied at timed intervals. There has also been a proposal to introduce a similar system to automotive cruise controls. A hybrid between a dead man's switch and a vigilance control device is a dead-man's vigilance device.
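In software terms, the distinction is that a plain dead man's switch checks only the level of the input (is the pedal held?), while vigilance control also checks for edges (was it released and re-applied recently?). The sketch below models only the vigilance half of such a system under assumed timings; the interval, grace period, and actuator hooks are illustrative and not taken from any specific railway system, and a real installation would also brake directly on a sustained release.

```python
import time

CYCLE_S = 60.0   # assumed vigilance interval: pedal must be cycled every minute
GRACE_S = 5.0    # assumed time allowed to respond to the audible signal

def vigilance_loop(pedal_is_pressed, sound_buzzer, apply_emergency_brake):
    """Require the pedal to be released and re-applied at least once per cycle.

    A pedal held down continuously (e.g. by a slumped driver) fails the check,
    because no release edge is ever seen. The three callables are hypothetical
    hooks into the train's controls.
    """
    while True:
        deadline = time.monotonic() + CYCLE_S
        was_pressed = pedal_is_pressed()
        seen_release = not was_pressed        # a released pedal is half the edge
        while time.monotonic() < deadline:
            pressed = pedal_is_pressed()
            if was_pressed and not pressed:
                seen_release = True           # release observed
            if seen_release and pressed:
                break                         # re-applied: cycle satisfied
            was_pressed = pressed
            time.sleep(0.05)
        else:
            # No release-and-reapply edge in time: buzzer, then a grace period.
            sound_buzzer()
            responded = False
            grace_end = time.monotonic() + GRACE_S
            while time.monotonic() < grace_end:
                if not pedal_is_pressed():    # any release counts as a response
                    responded = True
                    break
                time.sleep(0.05)
            if not responded:
                apply_emergency_brake()       # fail-safe: stop the train
                return
```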
Software

Software versions of dead man's switches are generally used only by people with technical expertise and can serve several purposes, such as sending a notification to friends or deleting and encrypting data. The "non-event" that triggers them can be almost anything: failing to log in for seven consecutive days, not responding to an automated e-mail or ping, a GPS-enabled telephone not moving for a period of time, or merely failing to type a code within a few minutes of a computer's boot. An example of a software-based dead man's switch is one that starts when the computer boots and can encrypt or delete user-specified data if an unauthorized user should ever gain access to the protected computer. Google's Inactive Account Manager allows the account holder to nominate someone else to access their services if the account is not used for an extended period (the default is three months). Newer solutions utilize the growing market of mobile devices and, instead of sending an automated e-mail, send a push notification directly to a mobile device, alerting family and friends in a much more convenient way.
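As a concrete illustration, a check-in based switch records the time of the user's last sign of life and fires a payload once a deadline passes. The sketch below is a generic example, not a description of any particular service; the seven-day window matches the example above, and the state file, function names, and payload are assumptions.

```python
import json
import time
from pathlib import Path

CHECKIN_FILE = Path("~/.deadman_checkin.json").expanduser()  # hypothetical state file
DEADLINE_S = 7 * 24 * 3600   # trigger after 7 days without a check-in

def check_in():
    """Called whenever the user proves they are alive (e.g. from a login script)."""
    CHECKIN_FILE.write_text(json.dumps({"last_checkin": time.time()}))

def maybe_trigger(payload):
    """Run periodically (e.g. from cron); fires the payload if the user is overdue.

    `payload` is a callable supplied by the user: notify friends, release a
    decryption key, wipe data, and so on.
    """
    try:
        last = json.loads(CHECKIN_FILE.read_text())["last_checkin"]
    except FileNotFoundError:
        return  # not armed yet: no check-in has ever been recorded
    if time.time() - last > DEADLINE_S:
        payload()

# Example wiring (hypothetical): call check_in() on each login, and schedule
# maybe_trigger(send_notification) hourly with cron.
```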
Spacecraft

Many spacecraft use a form of dead man's switch to guard against command-system failures. A timer is established that is normally reset by the receipt of any valid command (including one whose sole function is to reset the timer). If the timer expires, the spacecraft enters a "command loss" algorithm that cycles through a predefined sequence of hardware or software modes (such as the selection of a backup command receiver) until a valid command is received. The spacecraft may also enter a safe mode to protect itself while waiting for further commands. While it has some similarities to a dead man's switch, this type of device (a command loss timer) is not actually a dead man's switch, because it aims to recover from a hardware failure rather than from the absence of human operators. It is generally called a watchdog timer and is also used extensively in nuclear power control systems. System components on a spacecraft that put it into a safe mode or cause it to execute default behaviors when no command is received within a predefined time window can be considered a dead man's switch, but hardware or software that attempts to receive a command from human operators through an alternate channel is an auto-recovering or adaptive communications system, not a dead man's switch. Voyager 2 recovered from a command receiver failure with a command loss timer.

Train

In most trains, a basic level of protection is provided by a "dead man's handle" or pedal. If the driver is taken ill and releases it, the power is shut off and an emergency brake application is initiated to stop the train. More recent safety standards do not consider this adequate, as the driver may slump over the dead man's handle and continue to hold it down even though they are not capable of controlling the train. Modern trains overcome this risk with the addition of a vigilance system to the dead man's system. A buzzer or bell sounds every minute or so in order to alert the motorman or engineer. If they do not respond by moving a controller, or by releasing and then re-applying the dead man's handle, the system automatically initiates an emergency brake application. Most major rail systems in the world use this equipment, in both their freight and passenger operations. It is also used on the R143 and other New York City Subway cars while under CBTC operation. In the US, older locomotives produced before 1995 do not carry this feature, but given the modular nature of the system it is not uncommon to find them retrofitted.

Aircraft

Some airplanes use vigilance control to mitigate hypoxia, descending to a lower altitude if the pilot is unresponsive. In 2019, the Garmin G3000 became the first general aviation avionics suite capable of automatically diverting an aircraft to the nearest airport and landing it in the event a pilot fails to interact with the aircraft's controls or respond to system prompts. This automation capability has been made possible by advancements in computing, control, and navigation technologies and is of particular importance in a general aviation setting, since private aircraft are often flown by only a single pilot.

Blackmail

The term "dead man's switch" is sometimes used to describe a form of defensive blackmail or insurance file in which the release of damaging material is threatened if anything happens to a person.

Alternative names

Replacement of "switch" with "control" or a name denoting a specific type of switch, e.g., "button", "trigger", "throttle", "pedal", "handle", "grip", or "brake"
Replacement of "dead man's" or "dead-man" with "enabling" or "live-man" (commonly used in the robotics industry)
"Driver's Safety Device" ("DSD") (the official term in the UK for switches of this type as used on railway trains)
"Operator Presence Control" ("OPC")
"Vigilance control"
"Alerter system" (in higher-order systems in which the switch activates to sound an alarm rather than deactivates to disable the higher-order system)
"Kill cord" on a boat

See also

Security switch
Train protection system

References

External links

Deadmans on French trams and guided (trolley) buses (PDF)
Kill Cords: Lessons from the Milly RIB Report

Occupational safety and health Railway safety Safety switches Locomotive parts
57635804
https://en.wikipedia.org/wiki/12929%20Periboea
12929 Periboea
12929 Periboea, provisional designation 1999 TZ1, is a dark Jupiter trojan from the Trojan camp, roughly 52 to 55 kilometers in diameter. It was discovered on 2 October 1999, by American astronomer Charles W. Juels at the Fountain Hills Observatory in Arizona. Originally considered a centaur, this since re-classified Jovian asteroid has a rotation period of 9.3 hours and is among the 80 largest Jupiter trojans. It was named from Greek mythology after Periboea, mother of Pelagon by the river-god Axius.

Orbit and classification

Periboea is a dark Jovian asteroid in a 1:1 orbital resonance with Jupiter. It is located in the trailing Trojan camp at the gas giant's L5 Lagrangian point, 60° behind its orbit. It is also a non-family asteroid of the Jovian background population. It orbits the Sun at a distance of 5.0–5.4 AU once every 12 years (4,378 days; semi-major axis of 5.24 AU). Its orbit has an eccentricity of 0.04 and a high inclination of 43° with respect to the ecliptic. The body's observation arc begins with a precovery taken at Palomar Observatory in September 1953, 46 years prior to its official discovery observation at Fountain Hills.

Classification as centaur

Periboea was originally listed by the Minor Planet Center (MPC) as a centaur. However, its location close to the L5 Lagrangian point, its low albedo and spectral slope, as well as its estimated dynamical lifetime of more than a billion years, led to the conclusion that the formerly classified centaur is indeed a Jupiter trojan.

Numbering and naming

This minor planet was numbered by the MPC on 22 December 1999. On 14 May 2021, the object was named by the Working Group Small Body Nomenclature (WGSBN) after Periboea, a mortal woman and eldest daughter of Acessamenus in Greek mythology. Periboea was the mother of Pelagon, whom she conceived by the river god Axius. Her grandson was the Trojan ally Asteropaios.

Physical characteristics

Periboea is an assumed C-type asteroid. It has a V–I color index of 0.88.

Rotation period

In 2007, four rotational lightcurves of Periboea were obtained from photometric observations, with periods of 10.4 and 10.422 hours. The best-rated lightcurve, obtained by Stefano Mottola at the Calar Alto Observatory over seven nights from May to June 2009, gave a rotation period of 9.3 hours with a brightness amplitude of 0.17 magnitude.

Diameter and albedo

According to the surveys carried out by the Japanese Akari satellite and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, and observations by French and Spanish astronomers, Periboea measures between 51.5 and 55.34 kilometers in diameter and its surface has an albedo between 0.053 and 0.110. The Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 61.04 kilometers based on an absolute magnitude of 9.8.

References

External links

Asteroid Lightcurve Database (LCDB), query form (info)
Discovery Circumstances: Numbered Minor Planets (10001)-(15000) – Minor Planet Center
Asteroid (12929) 1999 TZ1 at the Small Bodies Data Ferret

Discoveries by Charles W. Juels Minor planets named from Greek mythology Named minor planets
879369
https://en.wikipedia.org/wiki/Grid.org
Grid.org
grid.org was a website and online community established in 2001 for cluster computing and grid computing software users. For six years it operated several volunteer computing projects that allowed members to donate their spare computer cycles to worthwhile causes. In 2007, it became a community for open source cluster and grid computing software. After around 2010 it redirected to other sites.

Volunteer computing projects

From its establishment in April 2001 until April 27, 2007, grid.org was the website and organization that ran distributed computing projects such as the United Devices Cancer Research Project, led by Jikku Venkat, Ph.D. It was sponsored philanthropically by United Devices (UD), and members participated in volunteer computing by running the UD Agent software (version 3.0).

Cancer Research Project

The United Devices Cancer Research Project, which began in 2001, sought possible drugs for the treatment of cancer using distributed computing. There were around 150,000 users in the United States and 170,000 in Europe, along with hundreds of thousands more in other parts of the world. The project was an alliance of several companies and organisations:

United Devices Inc.
National Foundation for Cancer Research
University of Oxford Department of Chemistry
Donors of molecular research

United Devices released the cancer research screensaver under the principle of using spare computing power. The program, which could be set to run continually, used "virtual screening" to find possible interactions between molecules and target proteins, i.e., a potential drug. The candidate molecules (ligands) were sent to the host computer's UD Agent. When a molecule docked successfully with a target protein, the interaction was scored for further investigation. The research consisted of two phases:

Phase 1 tested over 3 billion drug-like molecules against 12 proteins that were known to be suitable targets for anti-cancer drugs. It used the "THINK" software to simulate the molecular interactions.
Phase 2, using the "LigandFit" software developed by Accelrys to model interactions, sought to refine the Phase 1 data to produce a more manageable list of drug candidates for testing, which would require experimental collaborators, including some from industry.

Human Proteome Folding Project, phase 1

The IBM-sponsored Human Proteome Folding Project ("HPF"), phase 1, was announced on November 16, 2004 and was completed on July 3, 2006. The project operated simultaneously on both grid.org and IBM's World Community Grid. It made use of the "Rosetta" software to predict the structure of human proteins in order to help predict the function of those proteins. This information may someday be used to help cure a variety of diseases and genetic defects.

According to an announcement on the grid.org forums, after the HPF1 project was completed it was left to continue running on grid.org until August 9, 2006. During that time, members whose computers were configured to run this project got new work and spent computing resources calculating results, but the results were returned to grid.org for points only – they were not used for scientific research. The status of the Human Proteome Folding Project caused some discussion on the grid.org forums. Most members wanted to see all available computing power directed toward the still-active cancer project, but UD representative Robby Brewer asserted that "some [users] like the screensaver". As noted above, in the end the redundant HPF1 work on grid.org was halted.
Smallpox Project

The Smallpox Research Grid was part of the United Devices "Patriot Grid" initiative to fight biological terrorism. The project helped analyze potential drug candidates for a medical therapy against the smallpox virus. It made use of the "LigandFit" software (already used by phase 2 of the Cancer Research Project), but with a specialized set of target molecules aimed at the smallpox virus. The partners of the project included the University of Oxford, the University of Western Ontario, Memorial Sloan–Kettering Cancer Center, Essex University, Evotec OAI, Accelrys, and IBM. The World Community Grid largely began because of the success of this project.

Anthrax Project

The Anthrax Research Project was part of the United Devices "Patriot Grid" initiative to fight biological terrorism. It made use of the "LigandFit" software (already used by phase 2 of the Cancer Research Project), but with a specialized set of target molecules aimed at the advanced stages of anthrax bacterial infection. The project was operated from January 22, 2002 until February 14, 2002 and ended after a total of 3.57 billion molecules had been screened. The results of the computational screening were transmitted to biological scientists to complete the evaluation. The partners of the project included Oxford University.

HMMER Project

The HMMER Genetic Research Project made use of hidden Markov models to search for patterns in DNA sequences.

Webload Project

The Web Performance Testing project was operated as a commercial opportunity with select web hosting providers, to help them test the scalability of their server infrastructures under periods of high demand.

Open source grid community

In November 2007, grid.org was repositioned by Univa as a community to allow users to interact and discuss open source cluster and grid related topics. It allowed users to download, get support for, contribute to, and report issues about the open source Globus Toolkit based products offered by Univa. Over 100,000 unique visitors were reported in 2008. Around mid-2010 it redirected to Unicluster.org (a Univa product), and by 2012 it redirected to Univa's main site.

See also

List of distributed computing projects

References

Distributed computing projects Computing websites Internet properties established in 2001 Research projects Internet properties disestablished in 2007
24042
https://en.wikipedia.org/wiki/Project%20management
Project management
Project management is the process of leading the work of a team to achieve all project goals within the given constraints. This information is usually described in project documentation, created at the beginning of the development process. The primary constraints are scope, time, and budget. The secondary challenge is to optimize the allocation of necessary inputs and apply them to meet pre-defined objectives.

The objective of project management is to produce a complete project which complies with the client's objectives. In many cases, the objective of project management is also to shape or reform the client's brief to feasibly address the client's objectives. Once the client's objectives are clearly established, they should influence all decisions made by other people involved in the project – for example project managers, designers, contractors and sub-contractors. Ill-defined or too tightly prescribed project management objectives are detrimental to decision making.

A project is a temporary and unique endeavor designed to produce a product, service, or result with a defined beginning and end (usually time-constrained, and often constrained by funding or staffing), undertaken to meet unique goals and objectives, typically to bring about beneficial change or added value. The temporary nature of projects stands in contrast with business as usual (or operations), which are repetitive, permanent, or semi-permanent functional activities to produce products or services. In practice, the management of such distinct production approaches requires the development of distinct technical skills and management strategies.

History

Until 1900, civil engineering projects were generally managed by creative architects, engineers, and master builders themselves, for example, Vitruvius (first century BC), Christopher Wren (1632–1723), Thomas Telford (1757–1834) and Isambard Kingdom Brunel (1806–1859). In the 1950s, organizations started to systematically apply project-management tools and techniques to complex engineering projects.

As a discipline, project management developed from several fields of application including civil construction, engineering, and heavy defense activity. Two forefathers of project management are Henry Gantt, called the father of planning and control techniques, famous for his use of the Gantt chart as a project management tool (an alternative is the harmonogram, first proposed by Karol Adamiecki); and Henri Fayol, for his creation of the five management functions that form the foundation of the body of knowledge associated with project and program management. Both Gantt and Fayol were students of Frederick Winslow Taylor's theories of scientific management. Taylor's work is the forerunner of modern project management tools, including the work breakdown structure (WBS) and resource allocation.

The 1950s marked the beginning of the modern project management era, when core engineering fields came together to work as one. Project management became recognized as a distinct discipline, arising from the management discipline with an engineering model. In the United States, prior to the 1950s, projects were managed on an ad-hoc basis, using mostly Gantt charts and informal techniques and tools. At that time, two mathematical project-scheduling models were developed. The "critical path method" (CPM) was developed as a joint venture between DuPont Corporation and Remington Rand Corporation for managing plant maintenance projects. The "program evaluation and review technique" (PERT) was developed by the U.S.
Navy Special Projects Office in conjunction with the Lockheed Corporation and Booz Allen Hamilton as part of the Polaris missile submarine program. PERT and CPM are very similar in their approach but still present some differences. CPM is used for projects that assume deterministic activity times: the times at which each activity will be carried out are known. PERT, on the other hand, allows for stochastic activity times: the times at which each activity will be carried out are uncertain or varied. Because of this core difference, CPM and PERT are used in different contexts. These mathematical techniques quickly spread into many private enterprises.

At the same time, as project-scheduling models were being developed, technology for project cost estimating, cost management and engineering economics was evolving, with pioneering work by Hans Lang and others. In 1956, the American Association of Cost Engineers (now AACE International; the Association for the Advancement of Cost Engineering) was formed by early practitioners of project management and the associated specialties of planning and scheduling, cost estimating, and cost/schedule control (project control). AACE continued its pioneering work and in 2006 released the first integrated process for portfolio, program and project management (the total cost management framework).

In 1969, the Project Management Institute (PMI) was formed in the USA. PMI published the original version of A Guide to the Project Management Body of Knowledge (PMBOK Guide) in 1996, with William Duncan as its primary author; it describes project management practices that are common to "most projects, most of the time."

Project management types

Project management methods can be applied to any project. They are often tailored to a specific type of project based on project size, nature, industry or sector. For example, the construction industry, which focuses on the delivery of things like buildings, roads and bridges, has developed its own specialized form of project management that it refers to as construction project management, in which project managers can become trained and certified. The information technology industry has also evolved to develop its own form of project management, referred to as IT project management, which specializes in the delivery of technical assets and services that are required to pass through various lifecycle phases such as planning, design, development, testing, and deployment. Biotechnology project management focuses on the intricacies of biotechnology research and development. Localization project management applies many standard project management practices to translation works, even though many consider this type of management to be a very different discipline. There is also public project management, which covers all public works by the government and can be carried out by government agencies or contracted out to contractors. Another classification of project management is based on the hard (physical) or soft (non-physical) type.

Common among all project management types is a focus on three important goals: time, quality, and cost. Successful projects are completed on schedule, within budget, and according to previously agreed quality standards, i.e., they meet the Iron Triangle or Triple Constraint in order to be considered a success.
For each type of project management, project managers develop and utilize repeatable templates that are specific to the industry they are dealing with. This allows project plans to become very thorough and highly repeatable, with the specific intent to increase quality, lower delivery costs, and lower the time to deliver project results.

Approaches of project management

A 2017 study suggested that the success of any project depends on how well four key aspects are aligned with the contextual dynamics affecting the project; these are referred to as the four P's:

Plan: the planning and forecasting activities.
Process: the overall approach to all activities and project governance.
People: including the dynamics of how they collaborate and communicate.
Power: lines of authority, decision-makers, organograms, policies for implementation and the like.

There are a number of approaches to organizing and completing project activities, including: phased, lean, iterative, and incremental. There are also several extensions to project planning, for example based on outcomes (product-based) or activities (process-based). Regardless of the methodology employed, careful consideration must be given to the overall project objectives, timeline, and cost, as well as the roles and responsibilities of all participants and stakeholders.

Benefits realization management

Benefits realization management (BRM) enhances normal project management techniques through a focus on the outcomes (benefits) of a project rather than its products or outputs, and measuring the degree to which that is happening in order to keep the project on track. This can help to reduce the risk that a completed project is a failure because it delivered the agreed-upon requirements (outputs), i.e. project success, but failed to deliver the benefits (outcomes) of those requirements, i.e. product success. In addition, BRM practices aim to ensure the strategic alignment between project outcomes and business strategies. The effectiveness of these practices is supported by recent research evidencing BRM practices influencing project success from a strategic perspective across different countries and industries. These wider effects are called the strategic impact.

An example of delivering a project to requirements might be agreeing to deliver a computer system that will process staff data and manage payroll, holiday and staff personnel records in shorter times with reduced errors. Under BRM, the agreement might instead be to achieve a specified reduction in the staff hours and errors required to process and maintain staff data after the system installation, compared with the process without the system.

Critical path method

The critical path method (CPM) is an algorithm for determining the schedule of project activities. It is the traditional process used for predictive, plan-driven project planning. The CPM method evaluates the sequence of activities, the work effort required, the inter-dependencies and the resulting float time per line sequence to determine the required project duration. Thus, by definition, the critical path is the pathway of tasks on the network diagram that has no extra time available (or very little extra time).
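To make the float calculation concrete, the following is a minimal sketch of a CPM forward and backward pass over a small activity network; the activity names and durations are invented for illustration.

```python
# Minimal CPM sketch: forward/backward pass over a toy network (invented data).
durations = {"A": 3, "B": 2, "C": 4, "D": 2}          # activity -> duration (days)
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

order = ["A", "B", "C", "D"]                          # a topological order

# Forward pass: earliest start (ES) and earliest finish (EF).
es, ef = {}, {}
for act in order:
    es[act] = max((ef[p] for p in predecessors[act]), default=0)
    ef[act] = es[act] + durations[act]

project_duration = max(ef.values())

# Backward pass: latest finish (LF) and latest start (LS).
successors = {a: [b for b in order if a in predecessors[b]] for a in order}
lf, ls = {}, {}
for act in reversed(order):
    lf[act] = min((ls[s] for s in successors[act]), default=project_duration)
    ls[act] = lf[act] - durations[act]

# Total float = LS - ES; the critical path is the chain of zero-float tasks.
critical = [a for a in order if ls[a] - es[a] == 0]
print(project_duration, critical)   # 9 days; critical path A -> C -> D
```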
Critical chain project management

Critical chain project management (CCPM) is an application of the theory of constraints (TOC) to planning and managing projects, and is designed to deal with the uncertainties inherent in managing projects, while taking into consideration the limited availability of resources (physical, human skills, as well as management and support capacity) needed to execute projects. The goal is to increase the flow of projects in an organization (throughput). Applying the first three of the five focusing steps of TOC, the system constraint for all projects, as well as the resources, are identified. To exploit the constraint, tasks on the critical chain are given priority over all other activities. Finally, projects are planned and managed to ensure that the resources are ready when the critical chain tasks must start, subordinating all other resources to the critical chain.

Earned value management

Earned value management (EVM) extends project management with techniques to improve project monitoring. It illustrates project progress towards completion in terms of work and value (cost). Earned Schedule is an extension to the theory and practice of EVM.
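The standard EVM indicators are derived from three quantities: planned value (PV), earned value (EV), and actual cost (AC). The sketch below shows the usual variance and index formulas with invented numbers; it is a generic illustration rather than a prescription from any particular standard.

```python
# Standard EVM indicators from planned value, earned value, and actual cost.
pv = 100_000.0   # planned value: budgeted cost of work scheduled (invented)
ev = 80_000.0    # earned value: budgeted cost of work performed (invented)
ac = 90_000.0    # actual cost of work performed (invented)

cv = ev - ac     # cost variance: negative means over budget      -> -10,000
sv = ev - pv     # schedule variance: negative means behind plan  -> -20,000
cpi = ev / ac    # cost performance index: < 1 means over budget  -> 0.89
spi = ev / pv    # schedule performance index: < 1 means behind   -> 0.80

budget_at_completion = 500_000.0
# A common forecast: estimate at completion, assuming current cost efficiency.
eac = budget_at_completion / cpi    # -> about 562,500
print(cv, sv, round(cpi, 2), round(spi, 2), round(eac))
```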
Iterative and incremental project management

In critical studies of project management, it has been noted that phased approaches are not well suited for projects that are large-scale and multi-company, with undefined, ambiguous, or fast-changing requirements, or those with high degrees of risk, dependency, and fast-changing technologies. The cone of uncertainty explains some of this, as the planning made in the initial phase of the project suffers from a high degree of uncertainty. This becomes especially true as software development is often the realization of a new or novel product. These complexities are better handled with a more exploratory or iterative and incremental approach. Several models of iterative and incremental project management have evolved, including agile project management, the dynamic systems development method, extreme project management, and Innovation Engineering®.

Lean project management

Lean project management uses the principles from lean manufacturing to focus on delivering value with less waste and reduced time.

Project lifecycle

There are five phases to a project lifecycle, known as process groups. Each process group represents a series of inter-related processes to manage the work through a series of distinct steps to be completed. This type of project approach is often referred to as "traditional" or "waterfall". The five process groups are:

Initiating
Planning
Executing
Monitoring and Controlling
Closing

Some industries may use variations of these project stages and rename them to better suit the organization. For example, when working on brick-and-mortar design and construction, projects will typically progress through stages like pre-planning, conceptual design, schematic design, design development, construction drawings (or contract documents), and construction administration. While the phased approach works well for small, well-defined projects, it often results in challenge or failure on larger projects, or those that are more complex or have more ambiguities, issues and risk.

Process-based management

The incorporation of process-based management has been driven by the use of maturity models such as the OPM3 and the CMMI (capability maturity model integration).

Project production management

Project production management is the application of operations management to the delivery of capital projects. The project production management framework is based on viewing a project as a production system, in which a project transforms inputs (raw materials, information, labor, plant and machinery) into outputs (goods and services).

Product-based planning

Product-based planning is a structured approach to project management, based on identifying all of the products (project deliverables) that contribute to achieving the project objectives. As such, it defines a successful project as output-oriented rather than activity- or task-oriented. The most common implementation of this approach is PRINCE2.

Process groups

Traditionally (depending on what project management methodology is being used), project management includes a number of elements: four to five project management process groups, and a control system. Regardless of the methodology or terminology used, the same basic project management processes or stages of development will be used. Major process groups generally include:

Initiation
Planning
Production or execution
Monitoring and controlling
Closing

In project environments with a significant exploratory element (e.g., research and development), these stages may be supplemented with decision points (go/no-go decisions) at which the project's continuation is debated and decided. An example is the phase–gate model.

Initiating

The initiating processes determine the nature and scope of the project. If this stage is not performed well, it is unlikely that the project will be successful in meeting the business's needs. The key project controls needed here are an understanding of the business environment and making sure that all necessary controls are incorporated into the project. Any deficiencies should be reported and a recommendation should be made to fix them.

The initiating stage should include a plan that encompasses the following areas. These areas can be recorded in a series of documents called project initiation documents. Project initiation documents are a series of planned documents used to create order for the duration of the project. These tend to include:

project proposal (idea behind project, overall goal, duration)
project scope (project direction and track)
product breakdown structure (PBS) (a hierarchy of deliverables/outcomes and components thereof)
work breakdown structure (WBS) (a hierarchy of the work to be done, down to daily tasks)
responsibility assignment matrix (RACI) (roles and responsibilities aligned to deliverables/outcomes)
tentative project schedule (milestones, important dates, deadlines)
analysis of business needs and requirements against measurable goals
review of the current operations
financial analysis of the costs and benefits, including a budget
stakeholder analysis, including users and support personnel for the project
project charter including costs, tasks, deliverables, and schedules
SWOT analysis: strengths, weaknesses, opportunities, and threats to the business

Planning

After the initiation stage, the project is planned to an appropriate level of detail.
The main purpose is to plan time, cost, and resources adequately to estimate the work needed and to effectively manage risk during project execution. As with the initiation process group, a failure to plan adequately greatly reduces the project's chances of successfully accomplishing its goals.

Project planning generally consists of:

determining the project management methodology to follow (e.g. whether the plan will be defined wholly up front, iteratively, or in rolling waves);
developing the scope statement;
selecting the planning team;
identifying deliverables and creating the product and work breakdown structures;
identifying the activities needed to complete those deliverables and networking the activities in their logical sequence;
estimating the resource requirements for the activities;
estimating time and cost for activities;
developing the schedule;
developing the budget;
risk planning;
developing quality assurance measures;
gaining formal approval to begin work.

Additional processes, such as planning for communications and for scope management, identifying roles and responsibilities, determining what to purchase for the project, and holding a kick-off meeting are also generally advisable.

For new product development projects, conceptual design of the operation of the final product may be performed concurrently with the project planning activities, and may help to inform the planning team when identifying deliverables and planning activities.

Executing

During execution, the work defined in the plan must be carried out. The execution/implementation phase ensures that the project management plan's deliverables are executed accordingly. This phase involves the proper allocation, coordination and management of human resources and any other resources, such as materials and budgets. The output of this phase is the project deliverables.

Project documentation

Documenting everything within a project is key to success. To maintain budget, scope, effectiveness and pace, a project must have physical documents pertaining to each specific task. With correct documentation, it is easy to see whether or not a project's requirements have been met. Documentation also provides information regarding what has already been completed for that project. Documentation throughout a project provides a paper trail for anyone who needs to go back and reference the work in the past. In most cases, documentation is the most successful way to monitor and control the specific phases of a project. With correct documentation, a project's success can be tracked and observed as the project goes on. If performed correctly, documentation can be the backbone of a project's success.

Monitoring and controlling

Monitoring and controlling consists of those processes performed to observe project execution so that potential problems can be identified in a timely manner and corrective action can be taken, when necessary, to control the execution of the project. The key benefit is that project performance is observed and measured regularly to identify variances from the project management plan.

Monitoring and controlling includes:

Measuring the ongoing project activities ('where we are');
Monitoring the project variables (cost, effort, scope, etc.) against the project management plan and the project performance baseline ('where we should be');
Identifying corrective actions to address issues and risks properly ('how can we get back on track');
Influencing the factors that could circumvent integrated change control, so that only approved changes are implemented.

Two main mechanisms support monitoring and controlling in projects. On the one hand, contracts offer a set of rules and incentives, often supported by potential penalties and sanctions. On the other hand, scholars in business and management have paid attention to the role of integrators (also called project barons) in achieving a project's objectives. In turn, recent research in project management has questioned the type of interplay between contracts and integrators. Some have argued that these two monitoring mechanisms operate as substitutes, as one type of organization would decrease the advantages of using the other.

In multi-phase projects, the monitoring and control process also provides feedback between project phases, in order to implement corrective or preventive actions to bring the project into compliance with the project management plan.

Project maintenance is an ongoing process, and it includes:

Continuing support of end-users
Correction of errors
Updates to the product over time

In this stage, auditors should pay attention to how effectively and quickly user problems are resolved.

Over the course of any construction project, the work scope may change. Change is a normal and expected part of the construction process. Changes can be the result of necessary design modifications, differing site conditions, material availability, contractor-requested changes, value engineering, and impacts from third parties, to name a few. Beyond executing the change in the field, the change normally needs to be documented to show what was actually constructed. This is referred to as change management. Hence, the owner usually requires a final record to show all changes or, more specifically, any change that modifies the tangible portions of the finished work. The record is made on the contract documents – usually, but not necessarily limited to, the design drawings. The end product of this effort is what the industry terms as-built drawings, or more simply, "as built". The requirement for providing them is a norm in construction contracts. Construction document management is a highly important task undertaken with the aid of an online or desktop software system, or maintained through physical documentation. Increasing legal requirements pertaining to the construction industry's maintenance of correct documentation have caused an increase in the need for document management systems.

When changes are introduced to the project, the viability of the project has to be re-assessed. It is important not to lose sight of the initial goals and targets of the project. When the changes accumulate, the forecasted result may no longer justify the original proposed investment in the project. Successful project management identifies these components, and tracks and monitors progress, so as to stay within the time and budget frames already outlined at the commencement of the project. Exact methods have been suggested for identifying the most informative monitoring points along the project life-cycle regarding its progress and expected duration.

Closing

Closing includes the formal acceptance of the project and its ending. Administrative activities include the archiving of the files and documenting lessons learned.
This phase consists of:

Contract closure: complete and settle each contract (including the resolution of any open items) and close each contract applicable to the project or project phase.
Project close: finalize all activities across all of the process groups to formally close the project or a project phase.

Also included in this phase is the post implementation review. This is a vital phase of the project for the project team to learn from experience and apply to future projects. Normally a post implementation review consists of looking at things that went well and analyzing things that went badly on the project to come up with lessons learned.

Project controlling and project control systems

Project controlling (also known as cost engineering) should be established as an independent function in project management. It implements a verification and controlling function during the processing of a project in order to reinforce the defined performance and formal goals. The tasks of project controlling also include:

the creation of infrastructure for the supply of the right information and its update
the establishment of a way to communicate disparities of project parameters
the development of project information technology based on an intranet, or the determination of a project key performance indicator (KPI) system
divergence analyses and the generation of proposals for potential project regulations
the establishment of methods to accomplish an appropriate project structure, project workflow organization, project control and governance
the creation of transparency among the project parameters

Fulfillment and implementation of these tasks can be achieved by applying specific methods and instruments of project controlling. The following methods of project controlling can be applied:

investment analysis
cost–benefit analysis
value benefit analysis
expert surveys
simulation calculations
risk-profile analysis
surcharge calculations
milestone trend analysis
cost trend analysis
target/actual comparison

Project control is that element of a project that keeps it on track, on time and within budget. Project control begins early in the project with planning and ends late in the project with post-implementation review, having a thorough involvement of each step in the process. Projects may be audited or reviewed while the project is in progress. Formal audits are generally risk- or compliance-based, and management will direct the objectives of the audit. An examination may include a comparison of approved project management processes with how the project is actually being managed. Each project should be assessed for the appropriate level of control needed: too much control is too time-consuming, too little control is very risky. If project control is not implemented correctly, the cost to the business should be clarified in terms of errors and fixes.

Control systems are needed for cost, risk, quality, communication, time, change, procurement, and human resources. In addition, auditors should consider how important the projects are to the financial statements, how reliant the stakeholders are on controls, and how many controls exist. Auditors should review the development process and procedures for how they are implemented. The process of development and the quality of the final product may also be assessed, if needed or requested. A business may want the auditing firm to be involved throughout the process to catch problems earlier on so that they can be fixed more easily.
An auditor can serve as a controls consultant as part of the development team, or as an independent auditor as part of an audit.

Businesses sometimes use formal systems development processes. These help assure that systems are developed successfully. A formal process is more effective in creating strong controls, and auditors should review this process to confirm that it is well designed and is followed in practice. A good formal systems development plan outlines:

A strategy to align development with the organization's broader objectives
Standards for new systems
Project management policies for timing and budgeting
Procedures describing the process
Evaluation of quality of change

Characteristics of projects

There are five important characteristics of a project: (i) it always has specific start and end dates; (ii) it is performed and completed by a group of people; (iii) its output is the delivery of a unique product or service; (iv) it is temporary in nature; (v) it is progressively elaborated. Examples are: designing a new car, or writing a book.

Project complexity

Complexity and its nature play an important role in the area of project management. Despite a number of debates on this subject matter, studies suggest a lack of definition and reasonable understanding of complexity in relation to the management of complex projects.

Project complexity is the property of a project which makes it difficult to understand, foresee, and keep under control its overall behavior, even when given reasonably complete information about the project system. The identification of complex projects is specifically important to multi-project engineering environments.

As project complexity and project performance are considered to be closely related, it is important to define and measure the complexity of the project for project management to be effective. Complexity can be:

Structural complexity (also known as detail complexity, or complicatedness), i.e. consisting of many varied interrelated parts. It is typically expressed in terms of size, variety, and interdependence of project components, and described by technological and organizational factors.
Dynamic complexity, which refers to phenomena, characteristics, and manifestations such as ambiguity, uncertainty, propagation, emergence, and chaos.

Based on the Cynefin framework, complex projects can be classified as:

Simple (or clear, obvious, known) projects, systems, or contexts. These are characterized by known knowns, stability, and clear cause-and-effect relationships. They can be solved with standard operating procedures and best practices.
Complicated: characterized by known unknowns. A complicated system is the sum of its parts. In principle, it can be deconstructed into smaller, simpler components. While difficult, complicated problems are theoretically solvable with additional resources, with specialized expertise, with analytical, reductionist, simplification and decomposition techniques, with scenario planning, and by following good practices.
Complex: characterized by unknown unknowns and emergence. Patterns could be uncovered, but they are not obvious. A complex system can be described by Aristotle's statement that the whole is more than the sum of its parts.
Really complex projects, a.k.a. very complex or chaotic: characterized by unknowables. No patterns are discernible in really complex projects. Causes and effects are unclear even in retrospect. Paraphrasing Aristotle, a really complex system is different from the sum of its parts.
By applying the discovery in measuring work complexity described in Requisite Organization and Stratified Systems Theory, Dr Elliott Jaques classifies projects and project work (stages, tasks) into seven basic levels of project complexity, based on such criteria as the time-span of discretion and the complexity of a project's output:

Level 1 Project – improve the direct output of an activity (quantity, quality, time) within a business process, with targeted completion time up to 3 months.
Level 2 Project – develop and improve compliance to a business process, with targeted completion time from 3 months to 1 year.
Level 3 Project – develop, change, and improve a business process, with targeted completion time from 1 to 2 years.
Level 4 Project – develop, change, and improve a functional system, with targeted completion time from 2 to 5 years.
Level 5 Project – develop, change, and improve a group of functional systems / business function, with targeted completion time from 5 to 10 years.
Level 6 Project – develop, change, and improve a whole single value chain of a company, with targeted completion time from 10 to 20 years.
Level 7 Project – develop, change, and improve multiple value chains of a company, with targeted completion time from 20 to 50 years.

The benefit of measuring project complexity is to improve the feasibility of project staffing by:

matching the level of a project's complexity with an effective targeted completion time for the project;
matching the level of a project's complexity with the respective capability level of the project manager;
matching the level of a project task's complexity with the respective capability of the project members.

Positive, appropriate (requisite), and negative complexity

Similarly to the law of requisite variety and the law of requisite complexity, project complexity is sometimes required in order for the project to reach its objectives, and sometimes it has beneficial outcomes. Based on the effects of complexity, Stefan Morcov proposed its classification as positive, appropriate, or negative:

Positive complexity is the complexity that adds value to the project, and whose contribution to project success outweighs the associated negative consequences.
Appropriate (or requisite) complexity is the complexity that is needed for the project to reach its objectives, or whose contribution to project success balances the negative effects, or where the cost of mitigation outweighs the negative manifestations.
Negative complexity is the complexity that hinders project success.

Project managers

A project manager is a professional in the field of project management. Project managers are in charge of the people in a project. People are the key to any successful project. Without the correct people in the right place and at the right time, a project cannot be successful. Project managers can have the responsibility of the planning, execution, controlling, and closing of any project, typically relating to the construction industry, engineering, architecture, computing, and telecommunications. Many other fields of production engineering, design engineering, and heavy industry also have project managers.

A project manager needs to understand the order of execution of a project to schedule the project correctly, as well as the time necessary to accomplish each individual task within the project. A project manager is the person accountable for accomplishing the stated project objectives on behalf of the client. Project managers tend to have multiple years' experience in their field.
Project managers
A project manager is a professional in the field of project management. Project managers are in charge of the people in a project. People are the key to any successful project: without the correct people in the right place and at the right time, a project cannot be successful. Project managers can have the responsibility of the planning, execution, controlling, and closing of any project, typically relating to the construction industry, engineering, architecture, computing, and telecommunications. Many other fields, such as production engineering, design engineering, and heavy industry, have project managers. A project manager needs to understand the order of execution of a project to schedule the project correctly, as well as the time necessary to accomplish each individual task within the project. A project manager is the person accountable for accomplishing the stated project objectives on behalf of the client. Project managers tend to have multiple years' experience in their field. A project manager is required to know the project inside and out while supervising the workers along with the project. Typically in most construction, engineering, architecture, and industrial projects, a project manager has another manager working alongside them who is typically responsible for the execution of tasks on a daily basis. This position is in some cases known as a superintendent. A superintendent and a project manager work hand in hand in completing daily project tasks. Key project management responsibilities include creating clear and attainable project objectives, building the project requirements, and managing the triple constraint of cost, time and scope (now often expanded, with quality and other factors, into a larger set of competing constraints in current project management). A typical project is composed of a team of workers who work under the project manager to complete the assignment within the time and budget targets. A project manager normally reports directly to someone of higher stature on the completion and success of the project. A project manager is often a client representative and has to determine and implement the exact needs of the client, based on knowledge of the firm they are representing. The ability to adapt to the various internal procedures of the contracting party, and to form close links with the nominated representatives, is essential in ensuring that the key issues of cost, time, quality and, above all, client satisfaction can be realized. A complete project manager, a term coined by Dr. Robert J. Graham in his simulation, has been expanded upon by Randall L. Englund and Alfonso Bucero. They describe a complete project manager as a person who embraces multiple disciplines, such as leadership, influence, negotiation, politics, change and conflict management, and humor. These are all "soft" people skills that enable project leaders to be more effective and achieve optimized, consistent results.

Multilevel success framework and criteria - project success vs. project performance
There is a tendency to confuse project success with project management success; they are two different things. "Project success" has two perspectives:
- the perspective of the process, i.e. delivering efficient outputs, typically called project management performance or project efficiency;
- the perspective of the result, i.e. delivering beneficial outcomes, typically called project performance (sometimes just project success).
Project management success criteria are different from project success criteria. Project management is said to be successful if the given project is completed within the agreed-upon time, meets the agreed-upon scope, and stays within the agreed-upon budget. Subsequent to the triple constraints, multiple constraints have been considered to ensure project success. However, the triple or multiple constraints indicate only the efficiency measures of the project, which are indeed the project management success criteria during the project lifecycle. These a priori criteria leave out the more important after-completion results of the project: the output (product) success, outcome (benefits) success and impact (strategic) success during the product lifecycle. These posterior success criteria indicate the effectiveness measures of the project product, service or result after project completion and handover.
This overarching multilevel success framework of projects, programs and portfolios was developed by Paul Bannerman in 2008. In other words, a project is said to be successful when it succeeds in achieving the expected business case, which needs to be clearly identified and defined during project inception and selection, before the development phase starts. This multilevel success framework conforms to the theory of the project as a transformation, depicted as input-process/activity-output-outcome-impact, in order to generate whatever value is intended. Emanuel Camilleri in 2011 classifies all the critical success and failure factors into groups and matches each of them with the multilevel success criteria in order to deliver business value.

Risk management
The United States Department of Defense states that "Cost, Schedule, Performance, and Risk" are the four elements through which Department of Defense acquisition professionals make trade-offs and track program status. There are also international standards. Risk management applies proactive identification (see tools) of future problems and understanding of their consequences, allowing predictive decisions about projects.

Work breakdown structure and other breakdown structures
The work breakdown structure (WBS) is a tree structure that shows a subdivision of the activities required to achieve an objective – for example a portfolio, program, project, or contract. The WBS may be hardware-, product-, service-, or process-oriented (see an example in a NASA reporting structure (2001)). Besides the WBS for project scope management, there are the organizational breakdown structure (chart), the cost breakdown structure and the risk breakdown structure. A WBS can be developed by starting with the end objective and successively subdividing it into manageable components in terms of size, duration, and responsibility (e.g., systems, subsystems, components, tasks, sub-tasks, and work packages), which include all steps necessary to achieve the objective. The work breakdown structure provides a common framework for the natural development of the overall planning and control of a contract, and is the basis for dividing work into definable increments from which the statement of work can be developed and technical, schedule, cost, and labor hour reporting can be established. The work breakdown structure can be displayed in two forms: as a table with subdivision of tasks, or as an organizational chart whose lowest nodes are referred to as "work packages". It is an essential element in assessing the quality of a plan, and an initial element used during the planning of the project. For example, a WBS is used when the project is scheduled, so that the use of work packages can be recorded and tracked. Similar to the work breakdown structure (WBS), other decomposition techniques and tools are the organization breakdown structure (OBS), product breakdown structure (PBS), cost breakdown structure (CBS), risk breakdown structure (RBS), and resource breakdown structure (ResBS).
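As an illustration of the decomposition and rollup just described, here is a minimal sketch of a WBS as a tree in Python. The class and field names are this example's own assumptions, not a standard: only the leaf "work packages" carry direct cost estimates, and parent elements roll up from their children.

from dataclasses import dataclass, field

@dataclass
class WBSNode:
    name: str
    cost: float = 0.0                      # direct estimate; used for work packages
    children: list = field(default_factory=list)

    def rollup(self) -> float:
        """Total cost of this element: its own estimate if it is a leaf
        work package, otherwise the sum of its children's rollups."""
        if not self.children:
            return self.cost
        return sum(child.rollup() for child in self.children)

# A toy decomposition: objective -> systems -> work packages.
project = WBSNode("New web portal", children=[
    WBSNode("Backend", children=[WBSNode("API", 40.0), WBSNode("Database", 25.0)]),
    WBSNode("Frontend", children=[WBSNode("UI", 30.0)]),
])
assert project.rollup() == 95.0            # 40 + 25 + 30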
International standards
There are several project management standards, including:
- The ISO standards: ISO 9000, a family of standards for quality management systems, and ISO 10006:2003, for quality management systems and guidelines for quality management in projects.
- ISO 21500:2012 – Guidance on project management. This is the first International Standard related to project management published by ISO. Other standards in the 21500 family include 21503:2017 Guidance on programme management; 21504:2015 Guidance on portfolio management; 21505:2017 Guidance on governance; 21506:2018 Vocabulary; 21508:2018 Earned value management in project and programme management; and 21511:2018 Work breakdown structures for project and programme management.
- ISO 31000:2009 – Risk management.
- ISO/IEC/IEEE 16326:2009 – Systems and Software Engineering – Life Cycle Processes – Project Management.
- Individual Competence Baseline (ICB) from the International Project Management Association (IPMA).
- Capability Maturity Model (CMM) from the Software Engineering Institute.
- GAPPS, Global Alliance for Project Performance Standards – an open-source standard describing competencies for project and program managers.
- HERMES method, a Swiss general project management method, selected for use in Luxembourg and international organizations.
- The logical framework approach (LFA), which is popular in international development organizations.
- PMBOK Guide from the Project Management Institute (PMI).
- PRINCE2 from AXELOS.
- PM²: the project management methodology developed by the European Commission.
- Procedures for Project Formulation and Management (PPFM) by the Indian Ministry of Defence.
- Team Software Process (TSP) from the Software Engineering Institute.
- Total Cost Management Framework, AACE International's methodology for integrated portfolio, program and project management.
- V-Model, an original systems development method.

Program management
Some projects, either identical or different, can be managed together under program management. Programs are collections of projects that support a common objective and set of goals. While individual projects have a clearly defined and specific scope and timeline, a program's objectives and duration are defined with a lower level of granularity.

Project networks
Besides programs and portfolios, additional structures that combine their different characteristics are project networks, mega-projects and mega-programs. A project network is a temporary project formed of several different distinct evolving phases, crossing organizational lines. Mega-projects and mega-programs are defined as exceptional in terms of size, cost, public and political attention, and the competencies required.

Project portfolio management
An increasing number of organizations are using what is referred to as project portfolio management (PPM) as a means of selecting the right projects and then using project management techniques as the means for delivering the outcomes in the form of benefits to the performing public, private or not-for-profit organization. Portfolios are collections of similar projects. Portfolio management supports efficiencies of scale, increasing success rates and reducing project risks, by applying similar standardized techniques to all projects in the portfolio, by a group of project management professionals sharing common tools and knowledge. Organizations often create project management offices as an organizational structure to support project portfolio management in a structured way. Thus, PPM is usually performed by a dedicated team of managers organized within an enterprise project management office (PMO), usually based within the organization, and headed by a PMO director or chief project officer. In cases where the strategic initiatives of an organization form the bulk of the PPM, the head of the PPM is sometimes titled the chief initiative officer.
Project management software
Project management software is software used to help plan, organize, and manage resource pools, develop resource estimates and implement plans. Depending on the sophistication of the software, functionality may include estimation and planning, scheduling, cost control and budget management, resource allocation, collaboration software, communication, decision-making, workflow, risk, quality, documentation and/or administration systems.

Virtual project management
Virtual program management (VPM) is the management of a project done by a virtual team, though it may rarely refer to a project implementing a virtual environment. Managing a virtual project is noted to be fundamentally different from managing traditional projects, combining concerns of telecommuting and global collaboration (culture, time zones, language).

See also
Related fields: Agile construction; Architectural engineering; Construction management; Cost engineering; Facilitation (business); Industrial engineering; Project Production Management; Project management software; Project portfolio management; Project workforce management; Software project management; Systems engineering
Related subjects: Collaborative project management; Decision-making; Game theory; Earned value management; Human factors; Kanban (development); Operations research; Process architecture; Program management; Project accounting; Project governance; Project management simulation; Small-scale project management; Software development process; Systems Development Life Cycle (SDLC)
Lists: Comparison of project management software; Glossary of project management; List of collaborative software; List of project management topics; Timeline of project management

References

External links
Guidelines for Managing Projects from the UK Department for Business, Enterprise and Regulatory Reform (BERR)
PM Foundation
PM BLOG

Management by type
IEEE standards
ISO/IEC standards
Product development
33453827
https://en.wikipedia.org/wiki/Microsoft%20Store
Microsoft Store
Microsoft Store (formerly known as Windows Store) is a digital distribution platform owned by Microsoft. It started as an app store for Windows 8 and Windows Server 2012 as the primary means of distributing Universal Windows Platform apps. With Windows 10, Microsoft merged its other distribution platforms (Windows Marketplace, Windows Phone Store, Xbox Music, Xbox Video, Xbox Store, and a web storefront also known as "Microsoft Store") into Microsoft Store, making it a unified distribution point for apps, console games, and digital videos. Digital music was included until the end of 2017, and e-books were included until 2019. In 2021, 669,000 apps were available in the store. The categories containing the largest number of apps are "Books and Reference", "Education", "Entertainment", and "Games". The majority of app developers have one app. As with other similar platforms, such as Google Play and the Mac App Store, Microsoft Store is curated, and apps must be certified for compatibility and content. In addition to the user-facing Microsoft Store client, the store has a developer portal with which developers can interact. Microsoft takes 5–15% of the sale price for apps and 30% on Xbox games. Prior to January 1, 2015, the cut was reduced to 20% once a developer's profits reached $25,000.

History
The Web-based storefront
Microsoft previously maintained a similar digital distribution system for software known as Windows Marketplace, which allowed customers to purchase software online. The marketplace tracked product keys and licenses, allowing users to retrieve their purchases when switching computers. Windows Marketplace was discontinued in November 2008. At this point, Microsoft opened a Web-based storefront called "Microsoft Store".

Windows 8
Microsoft first announced Windows Store, a digital distribution service for Windows, at its presentation during the Build developer conference on September 13, 2011. Further details announced during the conference revealed that the store would be able to hold listings for both certified traditional Windows apps and what were called "Metro-style apps" at the time: tightly sandboxed software based on Microsoft design guidelines that is constantly monitored for quality and compliance. For consumers, Windows Store was intended to be the only way to obtain Metro-style apps. While announced alongside the "Developer Preview" release of Windows 8, Windows Store itself did not become available until the "Consumer Preview", released in February 2012. Updates to apps published on the store after July 1, 2019, are not available to Windows 8 RTM users. Per Microsoft lifecycle policies, Windows 8 has been unsupported since 2016.

Windows 8.1
An updated version of Windows Store was introduced in Windows 8.1. Its home page was remodeled to display apps in focused categories (such as popular, recommended, top free and paid, and special offers) with expanded details, and the ability for apps to update automatically was also added. Windows 8.1 Update also introduced other notable presentation changes, including increasing the top app lists to return 1,000 apps instead of 100, a "picks for you" section, and changing the default sorting for reviews to "most popular". Updates to apps published on the Store after July 1, 2023, will not be available to Windows 8.1.
Windows 10
Windows 10 was released with an updated version of the Windows Store, which merged Microsoft's other distribution platforms (Windows Marketplace, Windows Phone Store, Xbox Video and Xbox Music) into a unified storefront for Windows 10 on all platforms, offering apps, games, music, film, TV series, themes, and ebooks. In June 2017, Spotify became available in the Windows Store. In September 2017, Microsoft began to re-brand Windows Store as Microsoft Store, with a new icon carrying the Microsoft logo. Xbox Store was merged into this new version of the platform. This is in line with Microsoft's platform convergence strategy on all Windows 10-based operating systems. Web apps and traditional desktop software can be packaged for distribution on Windows Store. Desktop software distributed through Windows Store is packaged using the App-V system to allow sandboxing. In February 2018, Microsoft announced that Progressive Web Apps would begin to be available in the Microsoft Store, and that it would automatically add selected quality Progressive Web Apps through the Bing crawler or allow developers to submit them to the Microsoft Store. Starting from Windows 10 version 1803, fonts can be downloaded and installed from the Microsoft Store. Updates to apps published on the Store after October 14, 2025, will not be available to Windows 10.

Windows 11
In Windows 11, Microsoft Store received an updated user interface and a new pop-up designed to handle installation links from websites. Microsoft also announced a number of changes to its policies for application submissions to improve flexibility and make the store more "open", including supporting "any kind of app, regardless of app framework and packaging technology", and the ability for developers to freely use first- or third-party payment platforms (in non-game software only) rather than those provided by Microsoft.

Windows Server
Windows Store is available in Windows Server 2012 but is not installed by default. It is unavailable in Windows Server 2016. However, UWP apps can be acquired from Microsoft Store for Business (formerly Windows Store for Business) and installed through sideloading.

Details
Microsoft Store is the primary means of distributing Windows Store apps to users. Although sideloading apps from outside the store is supported, out-of-box sideloading support on Windows 8 is only available on the Enterprise edition of Windows 8 running on computers that have joined a Windows domain. Sideloading on Windows RT and Windows 8 Pro, and on Windows 8 Enterprise computers without a domain affiliation, requires the purchase of additional licenses through volume licensing. Windows 10 removes this requirement, allowing users to freely enable or disable sideloading. Initially, Microsoft took a 30% cut of app sales until an app reached US$25,000 in revenue, after which the cut dropped to 20%. On January 1, 2015, the reduction in the cut at $25,000 was removed, and Microsoft took a 30% cut of all app purchases, regardless of overall sales. Third-party transactions are also allowed, of which Microsoft does not take a cut. In early 2019, Microsoft began letting app developers keep 95% of app revenue, with Microsoft taking only 5%, but only when the user downloads the app through a direct URL; Microsoft discontinued that option in 2020. Individual developers are able to register for US$19 and companies for US$99. As of August 1, 2021, Microsoft reduced its cut to 12% for app sales.
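As a worked example of the tiered split described above, the following Python sketch computes a developer's share of cumulative sales under the pre-2015 schedule. It is illustrative only, and assumes the 30%/20% tiers applied to cumulative sales against the US$25,000 threshold:

def developer_proceeds(total_sales: float, threshold: float = 25_000.0) -> float:
    """Developer's share of cumulative app sales under the pre-2015 tiers:
    a 30% store cut up to the threshold, dropping to 20% beyond it."""
    if total_sales <= threshold:
        return total_sales * 0.70
    below = threshold * 0.70                   # 70% of the first US$25,000
    above = (total_sales - threshold) * 0.80   # 80% of everything after
    return below + above

# Example: US$40,000 in sales -> 17,500 + 12,000 = US$29,500 to the developer.
assert developer_proceeds(40_000) == 29_500.0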
Windows apps
In 2015, over 669,000 apps were available on the store, including apps for Windows NT and Windows Phone, and UWP apps, which work on both platforms. The categories containing the largest number of apps are "Games", "Entertainment", "Books and Reference", and "Education". The majority of app developers have one app. Both free and paid apps can be distributed through Microsoft Store, with paid apps ranging in cost from US$0.99 to $999.99. Developers from 120 countries can submit apps to Microsoft Store. Apps may support any of 109 languages, as long as they support one of the 12 app certification languages.

Movies and TV shows
Movies and television shows are available for purchase or rental, depending on availability. Content can be played on the Microsoft Movies & TV app (available for Windows 10, Xbox One, Xbox 360 and Xbox Series X/S) or the Xbox Video app (available for Windows 8/RT PCs and tablets, and Windows Phone 8). In the United States, a Microsoft account can be linked to the Movies Anywhere digital locker service (separate registration required), which allows purchased content to be played on other platforms (e.g. macOS, Android, iOS). Microsoft Movies & TV is currently available in the following 21 countries: Australia, Austria, Belgium, Brazil, Canada, Denmark, Finland, France, Germany, Ireland, Italy, Japan, Mexico, Netherlands, New Zealand, Norway, Spain, Sweden, Switzerland, the United States, and the United Kingdom. The purchase of TV shows is not currently supported in Belgium.

Music (closed)
On October 2, 2017, Microsoft announced that the sale of digital music on the Microsoft Store would cease on December 31, following the discontinuation of Groove Music Pass. Users were able to transfer their music to Spotify until January 31, 2018.

Books (closed)
Books bought from the Microsoft Store were formerly accessible in the EdgeHTML-based Microsoft Edge. The ability to open ePub e-books was removed during the shift to the Chromium-based Microsoft Edge. On April 2, 2019, Microsoft announced that the sale of e-books on the Microsoft Store had ceased. Due to DRM licenses that would not be renewed, all books became inaccessible by July 2019, and Microsoft automatically refunded all users who had purchased books via the service.

Guidelines
Similar to Windows Phone Store, Microsoft Store is regulated by Microsoft. Applicants must obtain Microsoft's approval before their app becomes available on the store. These apps may not contain, support or approve of gratuitous profanity, obscenity, pornography, discrimination, defamation, or politically offensive content. They may also not contain content that is forbidden by or offensive to the jurisdiction, religion or norms of the target market. They may also not encourage, facilitate or glamorize violence, drugs, tobacco, alcohol or weapons. Video game console emulators that are "primarily gaming experiences or target Xbox One", and third-party web browsers that use their own layout engines, are prohibited on Microsoft Store. Microsoft has indicated that it can remotely disable or remove apps from end-user systems for security or legal reasons; in the case of paid apps, refunds may be issued when this is done. Microsoft initially banned PEGI "18"-rated content from the store in Europe. However, critics noted that this made the content policies stricter than intended, as some PEGI 18-rated games are rated "Mature" on the U.S. ESRB system, which is the next lowest before its highest rating, "Adults Only".
The guidelines were amended in December 2012 to remove the discrepancy. On October 8, 2020, Microsoft announced a commitment to ten "principles" of fairness to developers in the operation of the Microsoft Store. These include transparency over its rules, practices, and Windows' "interoperability interfaces"; not preventing competing application storefronts from running on Windows; charging developers "reasonable fees" and not "forc[ing]" them to include in-app purchases; allowing access to the store by any developer as long as their software meets "objective standards and requirements"; not blocking apps based on their business model, how they deliver their services, or how they process payments; not impeding developers from "communicating directly with their users through their apps for legitimate business purposes"; not using private data from the store to influence the development of competing software by Microsoft; and holding its own software to the same standards as others on the store. The announcement came in the wake of lawsuits against Apple Inc. and Google LLC by Epic Games over alleged anticompetitive practices conducted by their own application stores. With the release of Windows 11, Microsoft announced that it would not require software (excluding games) distributed via Microsoft Store to use its own payment platforms, and that it would also allow third-party storefronts (such as Amazon Appstore, which will be used for its upcoming Android app support, and Epic Games Store) to offer their clients for download via Microsoft Store.

Developer portal
In addition to the user-facing Microsoft Store client, the store also has a developer portal with which developers can interact. The Windows developer portal has the following sections for each app:
- App Summary – an overview page of a given app, including a downloads chart, quality chart, financial summary, and a sales chart.
- App Adoption – a page that shows adoption of the app, including conversions, referrers, and downloads.
- App Ratings – a ratings breakdown, as well as the ability to filter reviews by region.
- App Quality – an overview page showcasing exceptions that have occurred in the app.
- App Finance – a page where a developer can download all transactions related to their app.

Developer tools
Microsoft Store provides developer tools for tracking apps in the store. The dashboard also presents a detailed breakdown of users by market, age, and region, as well as charts on the number of downloads, purchases, and average time spent in an app.

See also
List of Microsoft software
Mac App Store, the equivalent platform on macOS

References

External links

Windows components
Software distribution platforms
Universal Windows Platform apps
Windows 8
Windows 10
Windows 11
Xbox One software
Online content distribution
Online-only retailers of video games
Video on demand
Mobile software distribution platforms
Online retailers of the United States
Xbox One
1006264
https://en.wikipedia.org/wiki/Zeus%20Web%20Server
Zeus Web Server
Zeus Web Server is a discontinued proprietary high-performance web server for Unix and Unix-like platforms (including Solaris, FreeBSD, HP-UX and Linux). It was developed by Zeus Technology, a software company located in Cambridge, England, that was founded in 1995 by University of Cambridge graduates Damian Reeves and Adam Twiss.

History
Zeus was designed from the beginning of its development in 1995 to be a high-performance web server, and from a few years later until the late 2000s it was known as one of the fastest and most scalable web servers available on the market. Despite its excellent performance, it never gained more than a few percentage points (less than 3%) of global usage among the most popular web servers. However, it was commonly used by hardware vendors submitting SPECweb99 benchmarks for their hardware. The SPECweb99 benchmark was retired in 2005 and replaced by SPECweb2005. While some SPECweb2005 submissions were made using Zeus, by 2008 it was no longer used by the top computer hardware performers. Support for AIX, Tru64, and Mac OS X was dropped on 10 June 2008. No new ZWS releases have been made since January 2010, and the company no longer offers a similar server solution. In July 2011, the company was acquired by Riverbed Technology, which ended support for ZWS on November 30, 2014.

Features
In addition to static content serving, Zeus supported dynamic content via CGI, FastCGI, Apache JServ, ISAPI, NSAPI, mod_perl, SSI and Zeus Distributed Authentication and Content (ZDAC), a proprietary FastCGI-like protocol. While Zeus mainly competed with other commercial web servers such as Oracle iPlanet Web Server, it also included a high degree of compatibility with Apache HTTP Server (including .htaccess support and a URL rewriting system comparable to Apache's mod_rewrite), with the expectation that Apache users would migrate to Zeus as their server load increased. NSAPI and ISAPI support were added to ease migrations from Microsoft IIS and Sun Java System Web Server.

See also
Comparison of web server software
ApacheBench, a program derived from the original ZeusBench, used to benchmark the performance of Zeus Web Server

References

External links
Zeus Technology's site NOTE: no content, redirects to Brocade Communications Systems site
ZWS Online Support (archived link 2008-02-11) newer content but not all available
ZWS Online Support (archived link 2000-08-18) historical documentation

Web server software
Unix network-related software
Discontinued software
15516115
https://en.wikipedia.org/wiki/Geo-replication
Geo-replication
Geo-replication systems are designed to provide improved availability and disaster tolerance by using geographically distributed data centers. This is intended to improve the response time for applications such as web portals. Geo-replication can be achieved using software, hardware or a combination of the two.

Software
Geo-replication software is a network performance-enhancing technology designed to provide improved access to portal or intranet content for users at the most remote parts of large organizations. It is based on the principle of storing complete replicas of portal content on local servers, and then keeping the content on those servers up to date using heavily compressed data updates.

Portal acceleration
Geo-replication technologies are used to replicate the content of portals, intranets, web applications, content and data between servers, across wide area networks (WANs), to allow users at remote sites to access central content at LAN speeds. Geo-replication software can improve the performance of data networks that suffer limited bandwidth, latency and periodic disconnection. Terabytes of data can be replicated over a wide area network, giving remote sites faster access to web applications. Geo-replication software uses a combination of data compression and content caching technologies. Differencing technologies can also be employed to reduce the volume of data that has to be transmitted to keep portal content accurate across all servers. This update compression can reduce the load that portal traffic places on networks, and improve the response time of a portal.

Portal replication
Remote users of web portals and collaboration environments frequently experience network bandwidth and latency problems, which slow down opening and closing files and otherwise interacting with the portal. Geo-replication technology is deployed to accelerate remote end-user portal performance to be equivalent to that experienced by users locally accessing the portal in the central office.

Differencing engine technologies
To deliver this reduction in the size of the required data updates across a portal, geo-replication systems often use differencing engine technologies. These systems are able to difference the content of each portal server right down to the byte level. This knowledge of the content that is already on each server enables the system to rebuild any changes to the content on one server, across each of the other servers in the deployment, from content already hosted on those other servers. This type of differencing system ensures that no content, at the byte level, is ever sent to a server twice.

Offline portal replication on laptops
Geo-replication systems are often extended to deliver local replication beyond the server and down to the laptop used by a single user. Server-to-laptop replication enables mobile users to have access to a local replica of their business portal on a standard laptop. This technology may be employed to provide in-the-field access to portal content by, for example, sales forces and combat forces.

Geo-replication systems
iOra
Syntergy
Colligo Contributor

See also
Load balancing
Round robin DNS

References

Business software
Geography
Cloud computing
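The differencing approach described in this article can be sketched as follows in Python. This is a deliberately simplified illustration using fixed-size blocks and SHA-256 digests, not any vendor's actual algorithm (a production differencing engine would typically add rolling hashes to handle insertions that shift block boundaries):

import hashlib

BLOCK = 4096  # bytes; an arbitrary block size for this sketch

def block_hashes(data: bytes) -> dict:
    """Map each block's digest to its offset in the copy a server already holds."""
    return {hashlib.sha256(data[i:i + BLOCK]).hexdigest(): i
            for i in range(0, len(data), BLOCK)}

def make_delta(old: bytes, new: bytes) -> list:
    """Encode `new` as references to blocks of `old` plus literal bytes,
    so only content the remote replica lacks needs to be transmitted."""
    known = block_hashes(old)
    delta = []
    for i in range(0, len(new), BLOCK):
        chunk = new[i:i + BLOCK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in known:
            delta.append(("copy", known[digest], len(chunk)))  # block already remote
        else:
            delta.append(("literal", chunk))                   # must be sent
    return delta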
60538693
https://en.wikipedia.org/wiki/Troy%20%28opera%29
Troy (opera)
Troy is an opera in two acts by Bujor Hoinic set to a Turkish-language libretto by Artun Hoinic, based on the ancient Greek epic poem the Iliad by Homer. It premiered in Ankara, Turkey, in 2018 and was restaged at the Bolshoi Theatre in Moscow, Russia, in 2019.

Overview
Troy was composed by Bujor Hoinic, the Romanian chief conductor of the Turkish State Opera and Ballet. The libretto was written by Artun Hoinic. It is the first Turkish opera written and composed by foreigners. Based on the 8th-century BC Greek epic poem the Iliad by Homer, the opera was completed in May 2018 after three and a half months' work. The eight scenes of the two-act opera deal with the war and life at Troy (modern Hisarlik, northwestern Turkey). According to Hoinic, "it blends an ancient civilization with modern music, where ancient modes, secret makams and Anatolian rhythms are used, too". The premiere took place at the ATO Congresium in Ankara, Turkey, performed by the Ankara State Opera and Ballet on 9 November 2018. The opera was commissioned by the Ministry of Culture and Tourism as part of the "2018 Tourism Year of Troy" in Turkey, declared in connection with the 20th anniversary of Troy's entry onto the UNESCO World Heritage Site list. On 8 April 2019, Troy was staged at the Bolshoi Theatre in Moscow for the opening of the "2019 Russia–Turkey Cultural Year".

Synopsis
Act 1
Scene 1: The palace of King Menelaus in Sparta
Menelaus, the king of Sparta, invites all the Greek kings, as well as the princes of Troy, Paris and Hector, to the wedding of Thetis and Peleus in Sparta. During the celebrations following the wedding, Paris is attracted to Queen Helen the Beautiful, the wife of Menelaus. She responds to Paris' feelings.

Scene 2: The temple of the goddess Artemis in Aulis
Menelaus realises that Paris has kidnapped Helen. He asks his brother Agamemnon, the king of Mycenae, for help. Agamemnon gathers all the Greek kings and heroes in the temple. He accuses Troy of treason and demands that the Trojans account for it. He orders the Greek heroes Achilles, Odysseus and Ajax to be ready for a war with Troy. His aim is to attack Troy as soon as possible, invade the country and avenge his brother. He consults the priest Chalcas to foretell the future. The priest advises him to make a sacrifice to the goddess Artemis for victory. Agamemnon's daughter, the young princess Ephigenia, is brought to the altar accompanied by rituals.

Scene 3: The Trojan palace; the sun shines on Troy
Paris and Hector return to Troy with Helen, and they meet King Priam. The Trojans are in a panic, and Helen is worried about a war over her kidnapping. Witnessing the love between Paris and Helen, Priam declares that Helen is now a Trojan princess. He asserts to the nobles and the people that the walls of Troy can never be breached in case of a war.

Scene 4: Outside the walls of Troy; dark clouds gather over Troy
Agamemnon and his heroes appear with the army of Sparta before the walls of Troy. Priam, the Trojan princes and the nobles, in front of the Trojan army, move out of the walls. As the two kings, Agamemnon and Priam, cannot reach a peaceful solution, the Greek army starts an attack, while the Trojan army, led by Hector, goes on the defensive after Priam is secured. When the battle begins, the two heroes, Hector and Achilles, come face to face in a duel, which ends with Achilles killing Hector. The Greek army attacks Troy but cannot breach the strong Trojan walls.
Act 2
Scene 5: The square inside the castle of Troy
Hector's body is brought to a square inside the castle for cremation in front of the gathered Trojan people. His father Priam puts two gold coins on Hector's eyes to let him travel easily to the "land of the dead" before the corpse is set to burn. Helen is grief-stricken and tries to escape to prevent further war. Paris persuades her to stay, saying that Agamemnon's real intention is to invade Troy, even if Helen were to go back.

Scene 6: The Greek army camp outside the walls of Troy
After preparations, the Greek army attacks Troy. A bloodbath takes place, but the Trojan army puts up an impenetrable defense. The Greeks suffer a great defeat, and the Greek hero Ajax and some soldiers are captured by Troy. Priam and his soldiers chase the Greek army as it flees to its ships.

Scene 7: The great square in front of King Priam's palace in Troy
Priam appears with Paris and Helen before the gathered people and announces their great victory. Ajax and the other Greek prisoners of war are brought to the square. The people want to lynch the prisoners; however, Priam forgives them and sets them free. Paris learns that the Greek army has left a gigantic wooden horse structure outside the city. It is taken inside the walls as a sign of victory, although the nobles recommend burning it. The celebrations around the horse structure continue until the late hours of the night.

Scene 8: Midnight at the big square of Troy
After the Trojan soldiers and the people fall asleep following the day-long celebrations, Achilles, Odysseus and the Myrmidons come out of the horse. They open the city gates for the hidden Greek army after neutralising the guards. The city suddenly turns into a battlefield: the Trojans are slaughtered one by one, and Troy burns in flames. Paris spots Achilles and shoots an arrow to avenge Hector. The arrow hits the tendon in Achilles' leg, leaving him vulnerable. A second arrow hits Achilles in the chest; he removes it but dies. During the massacre, Agamemnon brutally kills Priam, while Paris, Helen and a few remaining nobles flee from Troy for their lives.

References

Operas
Turkish-language operas
2018 operas
Operas set in Turkey
Operas based on classical mythology
Operas based on works by Homer
Works based on the Iliad
Agamemnon
2458839
https://en.wikipedia.org/wiki/Tuxissa
Tuxissa
Tuxissa is a fictional computer virus hoax made up by Humorix, a humor website about Linux. Although the website states that all articles there are fake, anti-virus software makers such as Symantec, Sophos and F-Secure had pages about the Tuxissa virus hoax.

Hoax
The virus is based on the Melissa virus, and its aim is to install Linux onto the victim's computer without the owner's knowledge. It is spread via e-mail, contained within a message titled "Important Message About Windows Security". It first spreads itself to other computers, then downloads a stripped-down version of Slackware and uncompresses it onto the hard disk. The Windows Registry is then deleted and the boot options changed. The virus finally destroys itself as it reboots the computer, leaving the user facing a Linux login prompt.

See also
List of computer virus hoaxes

Footnotes

External links
Symantec's security response to the virus
Sophos' virus info on Tuxissa
Humorix's article, where the joke first started
F-Secure anti-virus software program's page about the Tuxissa virus

Virus hoaxes
Fictional computer viruses
50189271
https://en.wikipedia.org/wiki/Shooty%20Skies
Shooty Skies
Shooty Skies is an arcade shooting game created by Hipster Whale and Mighty Games for iOS, Android, Windows, macOS, and Linux. It was released on iOS on September 30, 2015, and on Android on November 6, 2015; it was later released on Windows, macOS, and Linux on March 6, 2018. It is an endless arcade shoot 'em up that involves shooting the enemies that appear on the screen, including broken TVs, robots and arcade machines. It takes numerous elements from classic arcade games including Galaga, Space Invaders, 1942, Xevious and Raiden. The player can play as one of 33 characters, which can be unlocked through a random draw (bought for 500 coins) or an in-app purchase of $0.99. Another version, titled "Shooty Skies Overdrive", was announced on June 23, 2020, on the official Shooty Skies website. This version lets the player shoot enemies in a retro world in virtual reality; it was released on Oculus and Steam.

References

2015 video games
Android (operating system) games
iOS games
Mobile games
Windows games
MacOS games
Linux games
32055949
https://en.wikipedia.org/wiki/National%20Broadband%20Network
National Broadband Network
The National Broadband Network (NBN) is an Australian national wholesale open-access data network. It includes wired and radio communication components rolled out and operated by NBN Co, a government-owned corporation. Internet service providers, known under the NBN as retail service providers or RSPs, contract with NBN Co to access the data network and sell fixed Internet access to end users. Rationales for this national telecommunications infrastructure project included replacing the existing copper cable telephony network, which is approaching end of life, and meeting the rapidly growing demand for Internet access. As initially proposed by the Rudd Government in 2009, wired connections would have provided up to 100 Mbit/s (later increased to 1000 Mbit/s), decreased to a minimum of 25 Mbit/s in 2013 after the election of the Abbott Government. As the largest infrastructure project in Australia's history, the NBN was the subject of significant political contention and has been an issue in federal elections. The Liberal Party initially stated that the "Multi-Technology Mix" (MTM) would be completed by 2016; however, this was changed after the election to 2019 and then again to 2020. The project cost jumped from the Liberal Party's estimated $29.5 billion before the 2013 federal election to $46–56 billion afterwards. In 2016 NBN Co said it was on target for $49 billion, but by late 2018 the estimated final cost was $51 billion.

History
2007
A fast broadband initiative was announced in the run-up to the 2007 federal election by the Labor opposition, with an estimated cost including a government contribution of $4.7 billion that would be raised in part by selling the Federal Government's remaining shares in Telstra. The Labor Party Rudd government was elected on 24 November 2007 and initial planning commenced. The NBN was originally to deliver its wholesale service through fibre to the node (FTTN) and reach approximately 98% of premises in Australia. A new satellite network would be built to reach the rest of the country.

2008
An initial request for proposal (RFP) to build the NBN was issued but not executed. Organisations lodging compliant proposals were neither able to meet the requirements nor able to raise the necessary capital. A non-compliant proposal was received from Telstra, and Telstra was excluded from consideration.

2009
The Rudd Government announced it would bypass the existing copper network by constructing a new national network combining fibre to the premises (FTTP), fixed wireless and satellite technologies. The first Rudd government had proposed to develop a modern optical fibre telecommunications network to provide broadband access to 93% of the Australian population at 100 Mbit/s, with those areas and people outside the network footprint to be provided broadband access through fixed wireless and geosynchronous telecommunications satellite. The cost estimate rose to $43 billion and was later revised to $37.4 billion. The project was to be financed by a combination of a Federal Government investment of $30.4 billion and private investment for the remainder. Dividends were to be paid to the federal government after completion in 2021, with the government's contribution repaid by 2034. A return on investment of 7.1% was expected on revenue of $23.1 billion by 2021. Tasmania was selected for a trial deployment based on the Tasmanian Government's submission to the RFP. A forced structural separation of Telstra was threatened but not completed.
NBN Co was established on 9 April 2009, and Mike Quigley was appointed chief executive officer on 25 July.

2010
An implementation study was commissioned in April 2009 and released on 6 May 2010. In April, NBN Co issued a request for tender (RFT) for the major FTTP rollout. Fourteen vendors submitted a proposal; however, NBN Co suspended the process on 1 April 2011, as the prices were unacceptably high. The first FTTP customers were connected in July 2010. The Gillard Government was elected at the 2010 Australian federal election. As a minority government, it gave priority to regional and rural areas, areas from which supporting cross-bench MPs were elected. An increase in the peak speed to one gigabit per second was announced in response to Google Fiber developments in the USA. After the election, Opposition Leader Tony Abbott appointed Malcolm Turnbull as Shadow Minister for Communications and Broadband, stating that he believed the NBN to be a white elephant and that Turnbull had "the technical expertise and business experience to entirely demolish the government on this issue". NBN Co's business plan was released, including forecasts and a network design incorporating these priorities. Tasmania was selected as the first state for a three-stage trial FTTP rollout. Stage one was announced in July 2009. The first customers were connected a year later. Stages two and three were announced on 21 October 2009 and 1 March 2010, respectively.

Redesign
Originally, NBN Co planned for a centralised model with only 14 points of interconnect (POIs); however, that was overruled by the Federal Government on advice from the Australian Competition & Consumer Commission (ACCC). The ACCC considered the plan to be 'mission creep' that would have given NBN Co a monopoly over backhaul; however, NBN Co said the centralised model would have allowed smaller RSPs to connect without going through a wholesale aggregator. The ACCC recommended 121 POIs after public consultation. Internode criticised the 'insane' number of POIs and, after its pricing announcement, warned it might have to charge more in regional areas because of the increased costs. In response Turnbull said the 'government can't deliver on a crucial promise' of 'national uniform pricing'; however, the Minister for Communications, Stephen Conroy, said the government 'guaranteed uniform wholesale pricing', not retail pricing. Internode (in 2010) warned that increasing the number of POIs was likely to lead to consolidation in the ISP industry. Following this warning the industry consolidated, resulting in four major RSPs (Telstra, Optus, TPG Telecom and Vocus Communications) that accounted for the majority of the market share.

2011
The Parliament passed the National Broadband Network Companies Act 2011 and a related bill. The RFT of April 2010 was suspended, as the prices were unacceptably high. NBN Co contracted with Ericsson to design, build and operate the network, with options to extend the contract for up to 10 years at a total cost of $1.1 billion. Construction commenced in 2011, with the first five locations announced as the regional and rural communities surrounding Geraldton, Toowoomba, Tamworth, Ballarat and Darwin. NBN Co entered into an agreement worth up to $380 million with Silcar. The agreement covered the construction of the NBN in Queensland, New South Wales and the ACT by Silcar, a company jointly owned by Siemens and Thiess. The agreement included the option of a two-year extension with an additional value of $740 million.
NBN Co signed an agreement with Telstra estimated to be worth $9 billion post-tax net present value, building upon the signing of a financial heads of agreement a year beforehand. Telstra was not required to separate its retail and wholesale operations, instead agreeing to disconnect its Internet customers from the copper and hybrid fibre-coaxial networks in areas where FTTP had been installed, and to lease dark fibre, exchange space and ducts to NBN Co. Telstra would not be able to market its mobile network as an alternative to the NBN for a number of years. NBN Co also signed an agreement with Optus, estimated to be worth $800 million post-tax net present value, over its hybrid fibre-coaxial network. Following low take-up rates in Tasmania, the government adopted an opt-out model in which users are assumed to want the service unless they explicitly opt out. Fourteen second-release sites comprising 54,000 premises in all states and territories were announced, with construction commencing in August. Telstra allowed NBN Co to use its exchanges and ducts in the second-release sites before the agreement with Telstra was finalised.

Interim satellite service
NBN Co launched interim satellite services providing up to six megabits per second. Due to the limited satellite capacity, these services were given to customers who did not have access to alternative 'metro comparable' services, similar to the Federal Government's Australian Broadband Guarantee (ABG) program, which had ended. The criteria for alternative 'metro comparable' services were minimum data speeds of at least 512 kilobits per second, a 3 GB per month data allowance and a total price to the end customer of no more than $2,500 over three years. To provide these services NBN Co bought managed satellite services and satellite capacity from Optus, and additional satellite capacity from IPstar for $100 million. Five areas comprising around 14,000 premises were chosen as the 'first mainland sites', each representing rollout challenges the NBN expected to face during an Australia-wide rollout. The first services went live on 19 April 2011.

2012
Attorney-General Nicola Roxon blocked Huawei from seeking a supply contract for the National Broadband Network, on the advice of the Australian Security Intelligence Organisation. The Australian government feared Huawei would provide backdoor access for Chinese cyber espionage. Significant attacks were made by the Liberal/National Coalition opposition leading up to the 2013 election. These focused on the estimated cost and timeline for implementation. The build cost had been a key point of debate. Turnbull and Abbott stated that they would take an 'agnostic' approach. They argued that the demand for such a service was not significant, and thus that the estimated cost was too high and the timeline for implementation was too long.

2013
The Multi-Technology Mix (MTM) was selected as the approach to broadband provision by the Liberal–National Coalition in the lead-up to the 2013 Australian federal election. After the 2013 election, the Abbott Government announced immediate changes to the NBN: most of the NBN Co board was asked to resign; Ziggy Switkowski was appointed chairman; and the rollout was moved from FTTP to 'alternative technologies' such as fibre to the node. The government limited the rollout of FTTP to those areas already in development.
Implementation of the Multi-Technology Mix (MTM) then began, with the promise of earlier completion and significant cost savings compared to the earlier approach. The predominant change was the adoption of a mixed copper-optical technology with fibre to the node (FTTN). Studies and a strategic review were commissioned. As of 3 November 2013, construction of the network had passed 354,793 premises and 109,862 customer services were active. In areas where the FTTP network was rolled out, a similar agreement with Optus was in place. On 12 December, the NBN Co board appointed Bill Morrow as NBN Co's new CEO, replacing Quigley. Telstra asserted its intention to retain the $11bn of value it generates from the previous government's deal. Delays occurred when work was stopped for several weeks on sites where asbestos was found in Telstra pits. Turnbull announced the MTM approach, promising significant savings and earlier completion. The MTM added fibre to the node (FTTN) as the preferred technology, and kept hybrid fibre-coaxial (HFC), which had previously been planned to be shut down.

2014
In April 2014, The Australian newspaper judged the Tasmania rollout shambolic and abysmal. The MTM approach was finalised. The initial costs and timing for the Coalition NBN were $29.5 billion of public funding, with construction complete by 2019. In May, NBN Co announced that it would be targeting premises that were already serviced with fibre by rival TPG.

2015
Quigley publicly attacked the NBN and the MTM, noting cost blowouts and delays that he said were the fault of changes made by the Coalition government. As of 30 June, 1,011,973 premises were able to order services, 571,527 of them brownfields. 180,796 greenfields premises were able to order fixed-line services, 220,917 fixed wireless, and 38,743 the interim satellite service. 485,615 users were active. As of 31 March, 64,102 of the 722,031 premises passed were classed as being 'service class zero'—"the Service Class that applies to a Premises that is not NBN Serviceable for the purposes of the NFAS but is in the footprint of the NBN Co Fibre Network."

2016
Turnbull became Prime Minister and Mitch Fifield became the Minister for Communications. At 30 June, NBN Co had passed 2,893,474 premises across all technologies. Company annual revenue was $421 million, compared to $164 million in the 2015 financial year, with approximately 1,100,000 active users at 30 June. NBN Co found no significant demand for wired connections above 25 Mbit/s (despite public surveys indicating otherwise), and said upgrading the network would not be considered until demand for high-bandwidth services was proven.

2017
A 2017 report by the Joint Standing Committee on the NBN found significant issues with the technology choices and the company's performance. All but one of the Coalition members of the committee released a dissenting report strongly defending the NBN and NBN Co. In 2017, Morrow wrote a public blog post arguing that the New Zealand program Ultra-Fast Broadband operated in a different policy setting, with Telecom New Zealand separated into Chorus (wholesale) and Spark (retail). On 23 October, Turnbull said, "The NBN was a calamitous train wreck of a project when we came into government in 2013," and argued that the NBN might never make a profit. Turnbull commented on New Zealand's program: "They basically ensured the incumbent telco, the Telstra equivalent, split its network operations away from its retail operations. And then that network company in effect became the NBN.
The virtue of that was you actually had a business that knew what it was doing, that was up and running, that had 100 years of experience getting on with the job." Morrow admitted that 15% of end users received poor service through the NBN and were 'seriously dissatisfied'. In addition, Morrow indicated that in July, prices and performance for end users were being suppressed through a 'price war' between RSPs. Despite this comment, the Telecommunications Industry Ombudsman released its annual report showing a 159% increase in NBN complaints, with nearly 40% of NBN customers dissatisfied. In response to the imminent broadcast of a critical documentary, Turnbull stated that the NBN was a failure, blaming the earlier Rudd and Gillard governments. The documentary noted significant issues with the rollout and complaints regarding the performance of the NBN. Following the Prime Minister's acknowledgment of the NBN's failure, Rudd noted that, on assuming government in 2013, Turnbull, as Minister for Communications in the Abbott government, had radically changed the network's technical aspects. The ACCC began an inquiry into the NBN in November 2017, to investigate whether regulation was needed to improve outcomes for consumers. In November 2017 NBN Co temporarily suspended the rollout of its HFC network due to performance issues, costing Telstra close to A$600 million of its 2018 profits.

2018
In February 2018, Turnbull was criticised for connecting The Lodge to the NBN using FTTC technology while neighbours had FTTN, and for having a 100 Mbit/s plan, when he had previously stated that most Australians would not need those speeds. During the South Australian state election, Jay Weatherill promised funding for an alternative network for Adelaide if re-elected.

2019
NBN Co's chief executive Stephen Rue announced in August 2019 that the $51 billion project would be completed within budget by June 2020. At the time of the announcement, the network was 85% complete.

2020
In February 2020, Rue announced that the network was 90% complete. He also reaffirmed his August 2019 announcement that the project would be completed within budget by June 2020. In September 2020, the Coalition announced a $4.5 billion upgrade to the NBN, planning to bring faster internet speeds to up to 8 million people, a move seen as a backflip by the Liberal Party after campaigning for seven years against the Labor Party's original FTTP-based NBN plan.

Infrastructure
Point of interconnect (POI)
Networks managed by individual RSPs connect to the main NBN infrastructure through points of interconnect (POIs). There are 121 POIs across the country, housed inside Telstra-owned telephone exchanges. Not all exchanges contain POIs; data which reaches a non-POI exchange is transmitted to the closest POI through the NBN transit network. POIs provide Layer 2 network access to the backhaul networks managed by RSPs; thus they act as a "handover point" between NBN and RSP infrastructure.

Network termination device (NTD)
A network termination device (NTD) is a customer-side network interface device which provides multiple bridges for customers to access the NBN. There are different types of NTDs for the various technologies. All connection types but FTTN use NTDs on premises. Depending on the kind of link, NTDs typically provide two telephony and four data channels. An external power source is required, and an uninterruptible power supply (UPS) can be used to maintain the connection during power outages (battery backups are available for the FTTP NTD).
FTTC requires power to be provided from the premises to the kerb (distribution point). NTDs provide user–network interface (UNI) connections for connection of in-premises devices, typically through multiple modular jacks. The NTD cannot be used as a Layer 3 router for in-premises networking. NBN technologies The NBN network includes a range of connection technologies for both wired communication (copper, fibre optic, and hybrid fibre-coaxial) and radio communication (satellite and fixed wireless). RSPs connect to these networks at points of interconnect. Access to mobile telecommunication backhauls is also sold to mobile telecommunications providers. Detailed network design rules as required by the Special Access Undertaking agreed by NBN Co and the Australian Competition and Consumer Commission were released on 19 December 2011, with updates on 18 September 2012, 30 June 2016 and 30 June 2017. The NBN "Multi Technology Mix" (MTM) consists of the following network technologies: Wired: Fibre to the building (FTTB): Used for multi-dwelling units and apartment blocks. Equivalent to FTTN, with the "node" located inside the building's communications room. Also known as fibre to the basement. Fibre to the curb (FTTC): Previously called fibre to the distribution point (FTTdp). Fibre connection to a communications pit on the street, then copper to the premises. Replacing Optus HFC from 2017. Also known as fibre to the pit. Fibre to the node (FTTN): A mix of copper and fibre optic technology, providing most NBN connections. Minimum speed of 25 Mbit/s. Fibre to the premises (FTTP): Fibre optic connection using a gigabit passive optical network. Available for greenfield development. Hybrid fibre-coaxial (HFC): Uses a mix of fibre optic and CATV technology, largely built on Telstra's HFC network. Wireless: Fixed wireless. Satellite: Sky Muster telecommunications satellites. In all technologies, voice services may be provided through Voice over IP with a suitable modem. FTTN and FTTP connections support direct connection of an analog telephone. Fibre to the curb (FTTC) Previously known as fibre to the distribution point (FTTdp). Customers access the FTTC network through copper connections using existing phone plugs. An NBN-provided mains-powered FTTC connection device provides one Ethernet port for connection to a router. FTTC is typically used where an optical fibre cable is already close to each home or business. It is connected to the existing copper lines via a small distribution point unit (DPU) located near the property boundary inside a pit on the street. Each DPU can service up to four connections. The DPU is powered through an NBN-provided mains-powered FTTC connection device (a type of NTD) in each connected premises. Fibre to the node (FTTN) Existing copper connections are maintained, with existing plugs being used. The end-user accesses the network using a VDSL2 modem, typically purchased from an RSP. The FTTN network consists of optical fibre from the exchange to a node and a run of copper from the node to the existing distribution area (DA) pillars. A copper pair then runs to each premises. Each node can serve up to 384 homes. Fibre to the premises (FTTP) Fibre-to-the-premises (FTTP) connects customers using Ethernet over a gigabit passive optical network (GPON) from the POI to the premises, giving a peak speed of one gigabit per second. Initially the preferred technological solution, it is now an option for greenfield development, with limited use for new or replacement connections. 
In customer premises, the NTD provides user–network interface connections through registered jack sockets for voice (UNI-V, RJ11) and modular jacks for data (UNI-D, RJ45). Voice services can be provided either through the UNI-V jack (which supports plain old telephone service, or POTS) or through a UNI-D jack (using Voice over IP). Data networking in the premises requires a router and/or wireless access point. Network structure A "drop fibre" fibre-optic cable runs from the premises to a "local network" which links a number of premises to a splitter in a fibre distribution hub. A "distribution fibre" cable connects the splitter in the distribution hub to a fibre access node (FAN), which is connected to a POI. Only the fibre access nodes and the equipment on premises require a power supply. The FTTP network is divided into a number of geographical modules. A fibre distribution area includes up to 200 premises linked through a fibre distribution hub. A fibre serving area module comprises 16 fibre distribution areas, which services up to 3,200 premises. A fibre serving area comprises 12 fibre serving area modules connected to a fibre access node, which services up to 38,400 premises. NBN Co planned to build or lease approximately 980 fibre serving areas, servicing up to 37,632,000 premises. Hybrid fibre-coaxial (HFC) Hybrid fibre-coaxial is a legacy technology purchased by NBN Co from Telstra and Optus. The Telstra HFC network is being maintained; it was found that the Optus HFC network was uneconomic to bring up to an acceptable standard, with these connections now to be provided by FTTC. HFC connections use a DOCSIS cable modem as their NTD. The upgrade path for Telstra HFC-connected premises is DOCSIS 3.1. Fixed wireless 2,600 transmission towers connected by microwave and optical fibre to exchanges will use TD-LTE 4G mobile broadband technology to cover around 500,000 premises in rural areas. The premises in the fixed wireless area were to be fitted with a roof-mounted antenna allowing a connection to a wireless base station. NBN Co provides a modem with four UNI-D ports. Telephone connections are by VoIP. Where a copper connection is available, users requiring connections during electrical power outages are encouraged to keep it. Network structure A 4G LTE fixed wireless network was to link premises to a base station, in turn linked to a POI via a backhaul. Dedicated spectrum bands were to be used to deliver these fixed wireless services, covering approximately 4 per cent of the non-fibre population. Unlike mobile networks, only fixed premises can connect to NBN's fixed wireless network. The number of premises assigned to each base station was to be limited to ensure users received "good service" with adequate throughput. Users at the edge of the coverage for each base station were to receive a peak speed of 12 megabits per second. The speed increases 'considerably' closer to the base station. Satellite service Two Sky Muster satellites provide NBN services to locations outside the reach of other technologies, including Christmas Island, Lord Howe Island and Norfolk Island. A satellite dish is installed on the premises, with NBN Co providing an NTD with four UNI-D ports. Where a copper connection is available, users requiring connections during electrical power outages are encouraged to keep it. 
Sky Muster satellite network NBN Co contracted Space Systems/Loral to build two geostationary Ka-band satellites, each offering eighty gigabits per second of bandwidth. The satellites bounce signals from a satellite dish on the premises to an earth station, known as a 'gateway'; the gateway is then connected to a POI via a fibre backhaul. The satellite design was 'not easy', because the required coverage is about 'five per cent of the world's land mass' containing 'at least 200,000 premises' spread across 'over 7,000 kilometres' of area between the Cocos Islands and Norfolk Island. Sky Muster I (NBN-Co 1A) was launched on 1 October 2015 from the Guiana Space Centre in French Guiana, South America, alongside Argentina's ARSAT-2, on an Ariane 5ECA rocket. It became operational in April 2016. Sky Muster II (NBN-Co 1B) was launched on 5 October 2016 to operate in geostationary orbit at 145° East. Technology Choice Program The "Technology Choice Program" provides the option for areas and individual users to upgrade the technology of their connection (all except HFC connections). Individual premises may switch from any FTTx technology to FTTP. Areas are able to switch from: FTTB to FTTP; FTTN to FTTP; fixed wireless to FTTP; fixed wireless to FTTN; fixed wireless to FTTB; satellite to fixed wireless; satellite to FTTP; satellite to FTTN; and satellite to FTTB. The Technology Choice Program had limited take-up as of 23 March 2017, with 221 applications received; 113 proceeded to obtain a quote, and 30 upgraded their connection at an average cost of A$7,395. End-user issues A variety of issues have confronted NBN end users. Performance NBN end users have had difficulty identifying who is responsible for addressing performance issues. Voice over IP VoIP services on FTTN connections require a VDSL modem. Some RSPs allow customers to use their own VDSL modem, and publish the necessary settings to enable VoIP using third-party modems. Some modems support both VDSL and ADSL. While older ADSL-only modems cannot be used for VDSL (over an FTTN connection), they can be used for FTTP using the WAN port. FTTN FTTN performance is reliant on: the length of the copper cable. Where the length of copper exceeds 400 m for FTTN (or 150 m for FTTC), speeds drop off. An NBN spokesman said that homes up to 800 metres from a node may only get up to 50 Mbit/s instead of 100, and beyond that users can expect only a quarter of the maximum speed, 25 Mbit/s. the quality of the copper connection. Where the copper is degraded, service reliability and/or speed are affected. installation issues relating to existing telephony wiring compatibility with VDSL2. Craig Levy, chief operating officer at TPG, stated, '... with NBN FTTN we are not allowed to lodge a fault unless the line performs less than 12 Mbit/s sync speed.' FTTN has also had issues with VDSL2 modems that are incompatible with NBN FTTN. Attempting to connect an incompatible modem automatically locks a user's service, and they are required to contact their service provider to unlock it. NBN Co does not publish a list of compliant modems, rejecting a Freedom of Information request for such a list in 2016. HFC HFC is a shared service and subject to network congestion. Satellite Sky Muster satellite connections have issues regarding response times and limited data allowances. 
Extended satellite outages have occurred, with ongoing work to improve the service. Fixed wireless Fixed wireless connections to the NBN have severe issues regarding bandwidth due to massive oversubscription (NBN design documents are supposed to limit each 'sector' to 56–110 premises; however, this is routinely exceeded), as well as issues with latency and limited data allowances. Connection speeds The lack of accurate information on broadband speeds is a significant issue for end users, according to the Australian Competition & Consumer Commission (ACCC). The ACCC initiated a broadband performance monitoring program in May 2017. Retail service providers NBN Co wholesales Layer 2 network access to retail service providers, who retail Internet access and other services. Pricing to RSPs is uniform within each technology regardless of where the service is delivered, and across the technologies for the basic package. To provide this uniform pricing, regional and rural areas are cross-subsidised by the lower-cost metropolitan areas. However, as RSP costs may vary, retail prices may vary between RSPs. As of February 2017, over 50 retail service providers offered services, with three (Telstra, Optus and TPG) sharing 83% of the market. Fibre and fixed wireless plans start from $29.90 per month for the lowest tier of 12/1 megabits per second download/upload speeds, with the highest speed tier of 1000/400 costing $330 per month. Connectivity To prevent other potential providers from undercutting NBN Co in metropolitan areas, new fibre networks are required to be open access and charge similar prices; these rules are known as the 'anti-cherry picking' provisions, which were enacted with other NBN legislation. Political and industry responses Political responses When the Rudd Labor opposition first broached its NBN proposal, it was dismissed by the Howard government as unnecessary. It was argued that an upgrade to the current copper networks, where 'commercial solutions were not always viable', would address the need. In 2009, the Coalition described the NBN as a 'dangerous delusion'. National Party Senator Barnaby Joyce said the NBN mirrored a proposal released by the party's think tank, saying it 'delivers a strategic infrastructure outcome' and it is 'vitally important that the [NBN] gets to the corners of our country where the market has failed'. Independent MPs Rob Oakeshott, Tony Windsor, Bob Katter and Andrew Wilkie expressed support for the NBN. Katter said the NBN is 'a great thing for this country'. Windsor said the NBN is 'too good an opportunity to miss'. Family First Party leader Steve Fielding said the NBN will 'bring [Australia] up to speed'. The size of the government's investment was a key point of debate. The Coalition called for a cost-benefit analysis, describing the NBN as 'a white elephant on a massive scale'. The Economist Intelligence Unit (EIU) criticised the NBN as a 'huge cost to the public sector', instead advocating a policy focused on filling 'gaps'. The Greens said the NBN 'is a key piece of 21st century infrastructure'. Telecommunications stakeholders Telecommunication companies Optus, iiNet, Internode and Vodafone-Hutchison expressed support for the project, along with the Australian Internet Industry Association, which said optical fibre solutions are 'a critical part in the evolution of the internet'. Internode managing director Simon Hackett said he was 'glad [the NBN is] going to go ahead'. Vodafone-Hutchison CEO Nigel Dews said the NBN will 'transform the competitive landscape'. 
However, other telecommunications companies, including AAPT, PIPE Networks and Exetel, have expressed opposition to the NBN. AAPT chief executive Paul A. Broad said the NBN will 'stifle competition'. PIPE Networks founder Bevan Slattery said the NBN is 'economically irresponsible'. Exetel chief executive John Linton described the NBN as a 'political stunt'. Microsoft, Google and Intel expressed support. Google's head of engineering, Alan Noble, said the NBN will 'be the greatest enabler of innovation'. Intel managing director Philip Cronin said 'the NBN has the potential to deliver significant long term benefits'. The Swinburne University of Technology conducted a survey of Australian Internet usage for the World Internet Project. The survey of 1,000 people asked about Internet usage and how it influences daily life. A question was included asking if the NBN was a 'good idea'; 74.5 per cent agreed. In the survey, the NBN had stronger support among younger people and Internet users. Australian Competition and Consumer Commission (ACCC) Given the market failure found in attempting to develop and implement the replacement telecommunications network, a natural monopoly was set up, with the ACCC safeguarding the interests of end users, access seekers and NBN Co. This is achieved through NBN Co entering into a Special Access Undertaking, agreed in 2013, to govern aspects of NBN design, service and charging until 2040. Coverage Some NBN areas are 'brownfield' areas that were provided directly by NBN Co, but many were new 'greenfield' areas in subdivisions that did not have an existing service. Greenfield developments must provide fibre connections, at a cost to the developer (and thus the homeowner). Most brownfield areas are near existing exchanges, which presumably already had ADSL. The NBN Corporate Plan 2011–2013 estimated approximately 13 million premises would be covered by the NBN, 12 million using fibre (FTTP). However, this plan was abandoned by the incoming Liberal government in 2013. Blackspots In February 2014 the government produced a new MyBroadband website that provided information about access. It showed that 1.6 million premises across Australia either have no access to fixed broadband or very poor broadband connectivity. Customer uptake Premises are considered 'passed' when 'all design, construction, commissioning and quality assurance activities in a FSAM (Fibre Servicing Area Module) have been completed for the Local network and Distribution network'. Certain premises classed as 'service class zero' that require extra internal construction, such as apartments, town houses, shopping arcades and industrial complexes, may not be able to order services, even though their premises had been passed. NBN Co stated that its usage of 'premises passed' was an 'accepted industry definition'. However, its corporate plan defined 'premises passed' as places where 'NBN services may be ordered and purchased', causing NBN Co to be accused of 'creative accounting'. No aggregate data summarises the broadband speeds possible for connectable premises, which has been identified by the ACCC as a significant issue for end users. Total number of active NBN connections by access technology Speed tier percentage per NBN connection type Shows the speed tier as a percentage of total active connections of each NBN connection type (fixed line, satellite and wireless). A customer can choose any speed tier that is available for their service; the speed tier is the maximum download/upload limit of the service. 
Half-yearly data Tabulated half-yearly statistics for active NBN connections. An active NBN connection is one where construction has been completed and a customer has ordered and activated an NBN service. See also Internet in Australia National Broadband Plan in the United States Telecommunications in Australia Ultra-Fast Broadband programme in New Zealand References Notes Sources External links Rollout Map Infrastructure in Australia Abbott Government Rudd Government Turnbull Government
26438505
https://en.wikipedia.org/wiki/KolibriOS
KolibriOS
KolibriOS, or Kolibri, is a small, open-source x86 operating system written completely in assembly. It was forked from MenuetOS in 2004 and has been under independent development since. In a 2009 review piece on alternative operating systems, TechRadar called it "tremendously impressive", noting its performance and streamlined codebase. Features Pre-emptive multitasking, streams, and parallel execution of system calls Boots in a few seconds from various devices, with support for NTFS and Ext2/3/4; can also boot from Coreboot and Windows (Windows will shut down) Graphical user interface based on, and optimised for, VESA Development kit: code editor with an integrated macro assembler (FASM) Most distributions will fit on a single 1.44 MB floppy disk image Commands The following is a list of commands supported by the KolibriOS Shell: about alias cd clear cp date echo exit free help history kill ls mkdir more ps pwd reboot rm rmdir shutdown sleep touch uptime ver System requirements i586-compatible CPU 8 MB of RAM VESA-compatible graphics card 1.44 MB 3.5" floppy drive, hard disk drive, USB flash drive or CD-ROM drive Keyboard and mouse (COM, PS/2 or USB) Supported hardware USB 1.1 and 2.0 are supported (UHCI, OHCI and EHCI). USB hubs are also supported, although the only USB devices supported are HID keyboards and mice, and USB flash drives. Storage: internal hard disks with PATA/IDE and SATA/AHCI interfaces are supported natively. File systems: supported file systems include FAT12, FAT16, FAT32 (with long name support), ext2, ext3 and ext4 (partially), CDFS (including multisession, read-only), NTFS (read and write, but no extended functions such as encryption) and XFS (read-only) Audio: AC'97 audio codec support for Intel, nForce, nForce2, nForce3, nForce4, SIS7012, FM801, VT8233, VT8233C, VT8235, VT8237, VT8237R, VT8237R Plus and EMU10K1X chipsets. Intel High Definition Audio is supported on certain motherboards. Video: works on any card, and specific drivers are available for AMD and Intel chipsets Network: TCP/IP stack and support for certain Ethernet network cards Development branches KolibriACPI: extended ACPI support Kolibri-A: exokernel version of KolibriOS optimized for embedded applications and hardware engineering; only a few AMD APU-based platforms are currently supported. Reception Dedoimedo.com reviewed KolibriOS in 2012, and Jesse Smith of DistroWatch Weekly reviewed it in 2009. See also MenuetOS - the 32-bit version of MenuetOS is the base of KolibriOS References Further reading Kolibri-A: a lightweight 32-bit OS for AMD platforms—University of Exeter, PCCAT 2011 pp. 20–22 (2011) External links Floppy disk-based operating systems Free software operating systems Assembly language software X86 operating systems Lightweight Unix-like systems Hobbyist operating systems
60451242
https://en.wikipedia.org/wiki/Ophelestes
Ophelestes
In Greek mythology, Ophelestes (Ancient Greek: Ὀφελέστης) may refer to the following individuals: Ophelestes, one of the Trojan warriors who attacked the Greek fleet during the tenth year of the Trojan War. He was killed by an arrow shot by Teucer, one of the leaders of the Salaminian troops. Ophelestes, a Paeonian soldier killed by Achilles beside the Scamander River during the Trojan War. Along with his fellow Paeonians, he was killed by the Achaean hero with his spear. Notes References Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library. Parada, Carlos, Genealogical Guide to Greek Mythology, Jonsered, Paul Åströms Förlag, 1993. Characters in Greek mythology
62597108
https://en.wikipedia.org/wiki/Death%20Come%20True
Death Come True
Death Come True is a 2020 interactive film adventure game developed by Too Kyo Games and Esquadra and published by IzanagiGames for Android, iOS, macOS, Nintendo Switch, Windows, and PlayStation 4. The game was written and directed by Kazutaka Kodaka, better known as the creator of the Danganronpa series. Synopsis Death Come True is an interactive film adventure game in which the player is tasked with finding clues and uncovering the truth of the protagonist's actions. The player has the ability to "time leap" with Karaki, who can be sent to the past. Story The game's plot follows Makoto Karaki (Kanata Hongō) as he wakes up in a strange hotel room after having lost all of his memories. When turning on the television, he finds out that he is a serial killer. As Karaki investigates the hotel, he finds it impossible to leave, as all of the exits are sealed. He also meets two police investigators, Akane Sachimura (Chiaki Kuriyama) and Nozomu Kuji (Win Morisaki), who are investigating him on the suspicion that he is a serial killer wanted for strangling numerous women. However, Karaki finds himself inexplicably drawn to Sachimura. After dying several times in the course of his investigations, Karaki eventually begins to recover some of his memories and confronts the hotel Concierge (Yuki Kaji), questioning if the hotel is even real. The Concierge reveals that the hotel is actually a virtual reality space created by the "Come True" system, which is capable of recreating memories from living or deceased individuals that others can view through virtual reality reconstruction. While intended for the treatment of mental illnesses, the Come True system is secretly being used for police investigations. Karaki himself is actually another Investigator, who connected to the memories of the real Karaki and Sachimura, both deceased in the real world, to find the truth behind Sachimura's death. The Concierge then warns the Investigator that an unknown party is hacking Come True in an attempt to delete the hotel, which also caused the Investigator to lose his memories and mistakenly believe himself to be Karaki. The Concierge suggests that the Investigator escape the hotel, which he always had the choice to do whenever he dies. However, the Investigator continues his search for the truth. With more of his memories coming back, he and Sachimura confront Kuji, suspecting him to be Sachimura's real killer. Sachimura had suspected Kuji of corruption, so Kuji made a deal with Karaki to murder her in return for passage out of the country; Kuji then betrayed Karaki and killed him instead. The hacker, the real Kuji, then appears and admits that he is indeed guilty. However, he gives the Investigator a proposition. Rather than return to the real world and reveal Kuji's crimes, he can instead stay in the virtual world with Sachimura forever. If the Investigator chooses to stay in the virtual world, Kuji escapes, though he warns the pair not to leave the hotel since the virtual world outside is unknown. The Investigator and Sachimura decide to leave the hotel, and live happily together in an unstable virtual world created from Sachimura's memories. If the Investigator chooses to leave the virtual world, he follows clues left behind by Sachimura to find the evidence she gathered proving Kuji's corruption. As a result, Kuji is arrested and investigated for his involvement in Sachimura's murder and the killings performed by Karaki. 
Development and release Death Come True was developed by Too Kyo Games and Esquadra and published by IzanagiGames, written and directed by Kazutaka Kodaka with music by Masafumi Takada. Esquadra was responsible for the primary development work. It was released for Android, iOS, macOS, and Nintendo Switch on June 24 in Japan, June 25 in North America, June 26 in Europe, and June 27, 2020 in Australia. It was also released for Windows on July 17, 2020, and for PlayStation 4 on November 12, 2020. Reception Death Come True was generally well received by critics on Nintendo Switch, while the iOS version saw mixed reviews. Notes References External links 2020 video games 2020s interactive fiction Adventure games Android (operating system) games Full motion video based games Interactive movie video games IOS games MacOS games Nintendo Switch games PlayStation 4 games Single-player games Video games about amnesia Video games developed in Japan Video games scored by Masafumi Takada Windows games
457680
https://en.wikipedia.org/wiki/Spline%20%28mathematics%29
Spline (mathematics)
In mathematics, a spline is a special function defined piecewise by polynomials. In interpolating problems, spline interpolation is often preferred to polynomial interpolation because it yields similar results, even when using low-degree polynomials, while avoiding Runge's phenomenon for higher degrees. In the computer science subfields of computer-aided design and computer graphics, the term spline more frequently refers to a piecewise polynomial (parametric) curve. Splines are popular curves in these subfields because of the simplicity of their construction, their ease and accuracy of evaluation, and their capacity to approximate complex shapes through curve fitting and interactive curve design. The term spline comes from the flexible spline devices used by shipbuilders and draftsmen to draw smooth shapes. Introduction The term "spline" is used to refer to a wide class of functions that are used in applications requiring data interpolation and/or smoothing. The data may be either one-dimensional or multi-dimensional. Spline functions for interpolation are normally determined as the minimizers of suitable measures of roughness (for example integral squared curvature) subject to the interpolation constraints. Smoothing splines may be viewed as generalizations of interpolation splines where the functions are determined to minimize a weighted combination of the average squared approximation error over observed data and the roughness measure. For a number of meaningful definitions of the roughness measure, the spline functions are found to be finite dimensional in nature, which is the primary reason for their utility in computations and representation. For the rest of this section, we focus entirely on one-dimensional, polynomial splines and use the term "spline" in this restricted sense. Definition We begin by limiting our discussion to the univariate polynomial case. In this case, a spline is a piecewise polynomial function. This function, call it S, takes values from an interval [a,b] and maps them to R, the set of real numbers. We want S to be piecewise defined. To accomplish this, let the interval [a,b] be covered by k ordered, disjoint subintervals [ti-1, ti], where a = t0 ≤ t1 ≤ … ≤ tk = b. On each of these k "pieces" of [a,b], we want to define a polynomial, call it Pi. On the ith subinterval of [a,b], S is defined by Pi: S(t) = Pi(t) for ti-1 ≤ t < ti. The given k+1 points ti are called knots. The vector t = (t0, …, tk) is called a knot vector for the spline. If the knots are equidistantly distributed in the interval [a,b] we say the spline is uniform, otherwise we say it is non-uniform. If the polynomial pieces Pi each have degree at most n, then the spline is said to be of degree n (or of order n+1). If the pieces Pi-1 and Pi share derivative values up to order ri in a neighborhood of ti, then the spline is said to be of smoothness (at least) Cri at ti. That is, at ti the two pieces Pi-1 and Pi share common derivative values from the derivative of order 0 (the function value) up through the derivative of order ri (in other words, the two adjacent polynomial pieces connect with loss of smoothness of at most n - ri). A vector r = (r1, …, rk-1) such that the spline has smoothness Cri at ti for 1 ≤ i ≤ k-1 is called a smoothness vector for the spline. Given a knot vector t, a degree n, and a smoothness vector r for t, one can consider the set of all splines of degree at most n having knot vector t and smoothness vector r. Equipped with the operation of adding two functions (pointwise addition) and taking real multiples of functions, this set becomes a real vector space. This spline space is commonly denoted by Srn(t). 
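The definition above can be illustrated with a short program. The following Python sketch (an illustrative aid only; the function name and the example spline are chosen for this exposition, not drawn from the literature) evaluates a piecewise polynomial S given its knot vector and one coefficient list per piece:

import bisect

def eval_spline(knots, pieces, t):
    # knots:  [t0, ..., tk], non-decreasing
    # pieces: k lists; pieces[i] = [c0, c1, ...] defines the polynomial
    #         Pi(t) = c0 + c1*t + c2*t**2 + ... on [knots[i], knots[i+1]]
    if not knots[0] <= t <= knots[-1]:
        raise ValueError("t lies outside [a, b]")
    # locate the subinterval containing t; the last piece is right-closed
    i = min(bisect.bisect_right(knots, t) - 1, len(pieces) - 1)
    # evaluate the chosen piece by Horner's method
    value = 0.0
    for c in reversed(pieces[i]):
        value = value * t + c
    return value

# A quadratic spline on [0, 3] with knots (0, 1, 2, 3): the pieces
# t**2 + 1, 2*t and 6 - t agree in value at the interior knots.
S = [[1.0, 0.0, 1.0], [0.0, 2.0], [6.0, -1.0]]
print(eval_spline([0.0, 1.0, 2.0, 3.0], S, 1.5))  # prints 3.0

The pieces here meet in value (and, at t = 1, also in first derivative), so the assembled function satisfies the smoothness conditions in the definition.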
In the mathematical study of polynomial splines the question of what happens when two knots, say ti and ti+1, are moved together has an easy answer. The polynomial piece Pi(t) disappears, and the pieces Pi−1(t) and Pi+1(t) join with the sum of the continuity losses for ti and ti+1. That is, the spline has smoothness Cn−ji−ji+1 at the merged knot, where ji = n − ri and ji+1 = n − ri+1 are the continuity losses at ti and ti+1. This leads to a more general understanding of a knot vector. The continuity loss at any point can be considered to be the result of multiple knots located at that point, and a spline type can be completely characterized by its degree n and its extended knot vector, in which each knot ti is repeated ji times for 0 ≤ i ≤ k. A parametric curve (X(t), Y(t)) on the interval [a,b] is a spline curve if both X and Y are spline functions of the same degree with the same extended knot vectors on that interval. Examples Suppose the interval [a,b] is [0,3] and the subintervals are [0,1], [1,2], and [2,3]. Suppose the polynomial pieces are to be of degree 2, and the pieces on [0,1] and [1,2] must join in value and first derivative (at t=1) while the pieces on [1,2] and [2,3] join simply in value (at t = 2). This would define a type of spline S(t) for which, for instance, the function with pieces t^2 + 1 on [0,1], 2t on [1,2] and 6 - t on [2,3] would be a member of that type, and also the function with pieces t^2 on [0,1], 2t - 1 on [1,2] and (t - 2)^2 + 3 on [2,3] would be a member of that type. (Note: while the polynomial piece 2t is not quadratic, the result is still called a quadratic spline. This demonstrates that the degree of a spline is the maximum degree of its polynomial parts.) The extended knot vector for this type of spline would be (0, 1, 2, 2, 3). The simplest spline has degree 0. It is also called a step function. The next most simple spline has degree 1. It is also called a linear spline. A closed linear spline (i.e., one whose first knot and last knot are the same) in the plane is just a polygon. A common spline is the natural cubic spline of degree 3 with continuity C2. The word "natural" means that the second derivatives of the spline polynomials are set equal to zero at the endpoints of the interval of interpolation. This forces the spline to be a straight line outside of the interval, while not disrupting its smoothness. Algorithm for computing natural cubic splines Cubic splines are of the form Sj(x) = aj + bj(x - xj) + cj(x - xj)^2 + dj(x - xj)^3. Given a set of n+1 coordinates (x0, y0), …, (xn, yn), we wish to find a set of n splines Sj(x) for j = 0, …, n-1. These must satisfy Sj(xj) = yj and Sj(xj+1) = yj+1, match in first and second derivative at the interior points, and have zero second derivative at the endpoints (the natural condition). Let us define one cubic spline as a 5-tuple (a, b, c, d, x) where a, b, c and d correspond to the coefficients in the form shown earlier and x is equal to xj. Algorithm for computing Natural Cubic Splines: Input: set of coordinates (x0, y0), …, (xn, yn), with n+1 points Output: a set of splines composed of n 5-tuples.
1. Create new array a of size n + 1 and for i = 0, …, n set ai = yi
2. Create new arrays b and d each of size n.
3. Create new array h of size n and for i = 0, …, n-1 set hi = xi+1 - xi
4. Create new array α of size n and for i = 1, …, n-1 set αi = 3(ai+1 - ai)/hi - 3(ai - ai-1)/hi-1.
5. Create new arrays c, l, μ, and z each of size n + 1.
6. Set l0 = 1 and μ0 = z0 = 0.
7. For i = 1, …, n-1:
Set li = 2(xi+1 - xi-1) - hi-1 μi-1.
Set μi = hi / li.
Set zi = (αi - hi-1 zi-1) / li.
8. Set ln = 1 and zn = cn = 0.
9. For j = n-1, n-2, …, 0:
Set cj = zj - μj cj+1
Set bj = (aj+1 - aj)/hj - hj(cj+1 + 2cj)/3
Set dj = (cj+1 - cj)/(3hj)
10. Create new set Splines and call it output_set. Populate it with n splines S. For i = 0, …, n-1:
Set Si,a = ai
Set Si,b = bi
Set Si,c = ci
Set Si,d = di
Set Si,x = xi
Output output_set
(A Python implementation of this algorithm is shown below.) Notes It might be asked what meaning more than n multiple knots in a knot vector have, since this would lead to continuities like C−1 and below at the location of this high multiplicity. By convention, any such situation indicates a simple discontinuity between the two adjacent polynomial pieces. This means that if a knot ti appears more than n + 1 times in an extended knot vector, all instances of it in excess of the (n + 1)th can be removed without changing the character of the spline, since all multiplicities n + 1, n + 2, n + 3, etc. have the same meaning. It is commonly assumed that any knot vector defining any type of spline has been culled in this fashion. 
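The natural cubic spline algorithm above can be implemented directly. The following Python sketch (the function name and the closing example are illustrative choices) computes the n 5-tuples:

def natural_cubic_spline(xs, ys):
    # Compute the natural cubic spline interpolating (xs[i], ys[i]).
    # Returns n 5-tuples (a, b, c, d, x) so that on [xs[j], xs[j+1]]
    # S_j(t) = a + b*(t - x) + c*(t - x)**2 + d*(t - x)**3.
    n = len(xs) - 1
    a = list(ys)
    h = [xs[i + 1] - xs[i] for i in range(n)]
    alpha = [0.0] * n
    for i in range(1, n):
        alpha[i] = 3 * (a[i + 1] - a[i]) / h[i] - 3 * (a[i] - a[i - 1]) / h[i - 1]
    # forward sweep of the tridiagonal system for the c coefficients
    l = [1.0] * (n + 1)
    mu = [0.0] * (n + 1)
    z = [0.0] * (n + 1)
    for i in range(1, n):
        l[i] = 2 * (xs[i + 1] - xs[i - 1]) - h[i - 1] * mu[i - 1]
        mu[i] = h[i] / l[i]
        z[i] = (alpha[i] - h[i - 1] * z[i - 1]) / l[i]
    # back-substitution; the natural condition gives c[n] = 0
    c = [0.0] * (n + 1)
    b = [0.0] * n
    d = [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = z[j] - mu[j] * c[j + 1]
        b[j] = (a[j + 1] - a[j]) / h[j] - h[j] * (c[j + 1] + 2 * c[j]) / 3
        d[j] = (c[j + 1] - c[j]) / (3 * h[j])
    return [(a[j], b[j], c[j], d[j], xs[j]) for j in range(n)]

# Example: interpolate four points and evaluate the middle piece at t = 1.5.
splines = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [0.0, 0.5, 2.0, 1.5])
aj, bj, cj, dj, xj = splines[1]
t = 1.5
print(aj + bj * (t - xj) + cj * (t - xj) ** 2 + dj * (t - xj) ** 3)

The forward sweep and back-substitution solve a tridiagonal linear system for the cj coefficients, which is why the algorithm runs in time linear in the number of knots.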
The classical spline type of degree n used in numerical analysis has continuity Cn-1, which means that every two adjacent polynomial pieces meet in their value and first n - 1 derivatives at each knot. The mathematical spline that most closely models the flat spline is a cubic (n = 3), twice continuously differentiable (C2), natural spline, which is a spline of this classical type with additional conditions imposed at endpoints a and b. Another type of spline that is much used in graphics, for example in drawing programs such as Adobe Illustrator from Adobe Systems, has pieces that are cubic but has only limited continuity where the pieces join. This spline type is also used in PostScript as well as in the definition of some computer typographic fonts. Many computer-aided design systems that are designed for high-end graphics and animation use extended knot vectors, for example Autodesk Maya. Computer-aided design systems often use an extended concept of a spline known as a nonuniform rational B-spline (NURBS). If sampled data from a function or a physical object is available, spline interpolation is an approach to creating a spline that approximates that data. General Expression For a C2 Interpolating Cubic Spline The general expression for the ith C2 interpolating cubic spline at a point x with the natural condition can be found using the formula Si(x) = Mi(ti+1 - x)^3/(6hi) + Mi+1(x - ti)^3/(6hi) + (yi+1/hi - Mi+1hi/6)(x - ti) + (yi/hi - Mihi/6)(ti+1 - x), where hi = ti+1 - ti, the Mi are the values of the second derivative at the ith knot and the yi are the values of the function at the ith knot. Representations and Names For a given interval [a,b] and a given extended knot vector on that interval, the splines of degree n form a vector space. Briefly this means that adding any two splines of a given type produces a spline of that given type, and multiplying a spline of a given type by any constant produces a spline of that given type. The dimension of the space containing all splines of a certain type can be counted from the extended knot vector: the dimension is equal to the degree plus one, plus the sum of the multiplicities of the interior knots. If a type of spline has additional linear conditions imposed upon it, then the resulting spline will lie in a subspace. The space of all natural cubic splines, for instance, is a subspace of the space of all cubic C2 splines. The literature of splines is replete with names for special types of splines. These names have been associated with: The choices made for representing the spline, for example: using basis functions for the entire spline (giving us the name B-splines) using Bernstein polynomials as employed by Pierre Bézier to represent each polynomial piece (giving us the name Bézier splines) The choices made in forming the extended knot vector, for example: using single knots for Cn-1 continuity and spacing these knots evenly on [a,b] (giving us uniform splines) using knots with no restriction on spacing (giving us nonuniform splines) Any special conditions imposed on the spline, for example: enforcing zero second derivatives at a and b (giving us natural splines) requiring that given data values be on the spline (giving us interpolating splines) Often a special name was chosen for a type of spline satisfying two or more of the main items above. For example, the Hermite spline is a spline that is expressed using Hermite polynomials to represent each of the individual polynomial pieces. These are most often used with n = 3; that is, as cubic Hermite splines. In this degree they may additionally be chosen to be only tangent-continuous (C1), which implies that all interior knots are double. 
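As a concrete illustration of the cubic Hermite form just described (a minimal sketch under stated assumptions: the tangent rule shown is one common choice, and the function names are invented here), each piece on a unit interval is a combination of four Hermite basis polynomials; Catmull-Rom splines, one of the fitting methods mentioned in the next paragraph, estimate the tangents from neighbouring data points:

def hermite_piece(p0, m0, p1, m1, t):
    # Cubic Hermite basis on the unit interval, 0 <= t <= 1, combining
    # the end values p0, p1 and the end tangents m0, m1.
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1

def catmull_rom(points, i, t):
    # Segment between points[i] and points[i+1] (valid for 1 <= i <= len-3),
    # with tangents taken as central differences of the neighbours,
    # assuming uniformly spaced knots.
    m0 = (points[i + 1] - points[i - 1]) / 2
    m1 = (points[i + 2] - points[i]) / 2
    return hermite_piece(points[i], m0, points[i + 1], m1, t)

pts = [0.0, 1.0, 3.0, 2.0, 0.5]
print(catmull_rom(pts, 1, 0.5))  # a point midway between pts[1] and pts[2]

Because adjacent segments share the tangent at their common point, the assembled curve is tangent-continuous (C1), matching the double-knot case noted above.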
Several methods have been invented to fit such splines to given data points; that is, to make them into interpolating splines, and to do so by estimating plausible tangent values where each two polynomial pieces meet (giving us Cardinal splines, Catmull-Rom splines, and Kochanek-Bartels splines, depending on the method used). For each of the representations, some means of evaluation must be found so that values of the spline can be produced on demand. For those representations that express each individual polynomial piece Pi(t) in terms of some basis for the degree n polynomials, this is conceptually straightforward:
For a given value of the argument t, find the interval in which it lies.
Look up the polynomial basis chosen for that interval.
Find the value of each basis polynomial at t.
Look up the coefficients of the linear combination of those basis polynomials that give the spline on that interval, c0, ..., ck-2.
Add up that linear combination of basis polynomial values to get the value of the spline at t.
However, the evaluation and summation steps are often combined in clever ways. For example, Bernstein polynomials are a basis for polynomials that can be evaluated in linear combinations efficiently using special recurrence relations. This is the essence of De Casteljau's algorithm, which features in Bézier curves and Bézier splines. For a representation that defines a spline as a linear combination of basis splines, however, something more sophisticated is needed. The de Boor algorithm is an efficient method for evaluating B-splines. History Before computers were used, numerical calculations were done by hand. Although piecewise-defined functions like the sign function or step function were used, polynomials were generally preferred because they were easier to work with. With the advent of computers, splines gained importance. They were first used as a replacement for polynomials in interpolation, then as a tool to construct smooth and flexible shapes in computer graphics. It is commonly accepted that the first mathematical reference to splines is the 1946 paper by Schoenberg, which is probably the first place that the word "spline" is used in connection with smooth, piecewise polynomial approximation. However, the ideas have their roots in the aircraft and shipbuilding industries. In the foreword to (Bartels et al., 1987), Robin Forrest describes "lofting", a technique used in the British aircraft industry during World War II to construct templates for airplanes by passing thin wooden strips (called "splines") through points laid out on the floor of a large design loft, a technique borrowed from ship-hull design. For years the practice of ship design had employed models to design in the small. The successful design was then plotted on graph paper and the key points of the plot were re-plotted on larger graph paper to full size. The thin wooden strips provided an interpolation of the key points into smooth curves. The strips would be held in place at discrete points (called "ducks" by Forrest; Schoenberg used "dogs" or "rats") and between these points would assume shapes of minimum strain energy. According to Forrest, one possible impetus for a mathematical model for this process was the potential loss of the critical design components for an entire aircraft should the loft be hit by an enemy bomb. This gave rise to "conic lofting", which used conic sections to model the position of the curve between the ducks. 
Conic lofting was replaced by what we would call splines in the early 1960s based on work by J. C. Ferguson at Boeing and (somewhat later) by M. A. Sabin at British Aircraft Corporation. The word "spline" was originally an East Anglian dialect word. The use of splines for modeling automobile bodies seems to have several independent beginnings. Credit is claimed on behalf of de Casteljau at Citroën, Pierre Bézier at Renault, and Birkhoff, Garabedian, and de Boor at General Motors (see Birkhoff and de Boor, 1965), all for work occurring in the very early 1960s or late 1950s. At least one of de Casteljau's papers was published, but not widely, in 1959. De Boor's work at General Motors resulted in a number of papers being published in the early 1960s, including some of the fundamental work on B-splines. Work was also being done at Pratt & Whitney Aircraft, where two of the authors of (Ahlberg et al., 1967) — the first book-length treatment of splines — were employed, and the David Taylor Model Basin, by Feodor Theilheimer. The work at General Motors is detailed nicely in (Birkhoff, 1990) and (Young, 1997). Davis (1997) summarizes some of this material. References Ferguson, James C., Multi-variable curve interpolation, J. ACM, vol. 11, no. 2, pp. 221–228, Apr. 1964. Ahlberg, Nielson, and Walsh, The Theory of Splines and Their Applications, 1967. Birkhoff, Fluid dynamics, reactor computations, and surface representation, in: Steve Nash (ed.), A History of Scientific Computation, 1990. Bartels, Beatty, and Barsky, An Introduction to Splines for Use in Computer Graphics and Geometric Modeling, 1987. Birkhoff and de Boor, Piecewise polynomial interpolation and approximation, in: H. L. Garabedian (ed.), Proc. General Motors Symposium of 1964, pp. 164–190. Elsevier, New York and Amsterdam, 1965. Davis, B-splines and Geometric design, SIAM News, vol. 29, no. 5, 1997. Epperson, History of Splines, NA Digest, vol. 98, no. 26, 1998. Stoer & Bulirsch, Introduction to Numerical Analysis. Springer-Verlag. pp. 93–106. ISBN 0387904204 Schoenberg, Contributions to the problem of approximation of equidistant data by analytic functions, Quart. Appl. Math., vol. 4, pp. 45–99 and 112–141, 1946. Young, Garrett Birkhoff and applied mathematics, Notices of the AMS, vol. 44, no. 11, pp. 1446–1449, 1997. Chapra, Canale, "Numerical Methods for Engineers" 5th edition. External links Theory Cubic Splines Module Prof. John H. Mathews California State University, Fullerton An Interactive Introduction to Splines, ibiblio.org Excel Function XLL Excel Addin Function Implementation of cubic spline Online utilities Online Cubic Spline Interpolation Utility Learning by Simulations Interactive simulation of various cubic splines Symmetrical Spline Curves, an animation by Theodore Gray, The Wolfram Demonstrations Project, 2007. Computer Code Notes, PPT, Mathcad, Maple, Mathematica, Matlab, Holistic Numerical Methods Institute various routines, NTCC Sisl: Opensource C-library for NURBS, SINTEF VBA Spline Interpolation, vbnumericalmethods.com Interpolation
33029530
https://en.wikipedia.org/wiki/Stephanie%20Thacker
Stephanie Thacker
Stephanie Dawn Thacker (born August 22, 1965) is a United States Circuit Judge of the United States Court of Appeals for the Fourth Circuit. Early life and education Born Stephanie Dawn Young in Huntington, West Virginia, she was raised in Hamlin, West Virginia. Thacker earned a Bachelor of Arts degree in marketing, magna cum laude, from Marshall University in 1987 and her Juris Doctor, with honors, from West Virginia University in 1990. Professional career After graduating from law school, Thacker spent two years working in the Pittsburgh office of the law firm Kirkpatrick & Lockhart (now K&L Gates). She then worked briefly for the West Virginia Office of the Attorney General before joining the law firm King, Betts & Allen. In 1994, Thacker took a job in the United States Attorney's office for the Southern District of West Virginia, serving as an Assistant United States Attorney in the Criminal Division and handling a wide range of criminal prosecutions. In 1999, Thacker moved to Washington, D.C. to work as a trial attorney in the United States Department of Justice's Child Exploitation and Obscenity Section. She worked there for seven years, serving as Deputy Chief of Litigation for two years and then as Principal Deputy Chief of Litigation for five years. She was also part of the team that prosecuted the first case the United States ever brought involving the Violence Against Women Act. In 2006, Thacker joined the Charleston, West Virginia, law firm Guthrie & Thomas as a partner. Federal judicial service In July 2011, the West Virginia Record reported that President Obama would select Thacker for the judicial vacancy on the United States Court of Appeals for the Fourth Circuit that had been created by the death of Judge M. Blane Michael. Michael had died in March 2011. On September 8, 2011, Obama formally nominated Thacker to be a judge on the Fourth Circuit. The Judiciary Committee reported her nomination to the floor of the Senate on November 3, 2011. The United States Senate confirmed her nomination by a vote of 91–3 on April 16, 2012. She received her commission on April 17, 2012. In October 2017, Thacker wrote for the panel majority when it found that the Bladensburg Peace Cross memorial from World War I now violated the Constitution's Establishment Clause, ordering that either its arms be removed or the entire monument be razed. Her judgment was ultimately reversed by the U.S. Supreme Court in American Legion v. American Humanist Association (2019). In April 2018, Thacker wrote for the majority when it found that a Maryland law prohibiting price gouging in prescription drug prices violated the Constitution's Dormant Commerce Clause. See also List of first women lawyers and judges in West Virginia References External links 1965 births 21st-century American judges American women judges Assistant United States Attorneys Judges of the United States Court of Appeals for the Fourth Circuit Living people Marshall University alumni Pennsylvania lawyers People from Hamlin, West Virginia Lawyers from Huntington, West Virginia United States court of appeals judges appointed by Barack Obama United States Department of Justice lawyers West Virginia University College of Law alumni 21st-century women judges 21st-century American women
67798642
https://en.wikipedia.org/wiki/Olivetti%20computers
Olivetti computers
The Olivetti company, an Italian manufacturer of computers, tablets, smartphones, printers and such other business products as calculators and fax machines, was founded as a typewriter manufacturer by Camillo Olivetti in 1908 in the Turin commune of Ivrea, Italy. Olivetti was a pioneer in computer development, starting with mainframe systems in the 1950s and continuing into the 1990s with PC-compatible laptops and desktops. History 1950-60s Between 1955 and 1964 Olivetti developed some of the first transistorized mainframe computer systems, such as the Elea 9003. Although 40 large commercial 9003 and over 100 smaller 6001 scientific machines were completed and leased to customers by 1964, low sales, the loss of two key managers and financial instability caused Olivetti to withdraw from the field in 1964. In 1965, Olivetti released the Programma 101, considered one of the first commercial desktop programmable calculators. It was saved from the sale of the computer division to GE thanks to an employee, Gastone Garziera, who spent successive nights changing the internal categorization of the product from "computer" to "calculator", thereby keeping the small team in Olivetti and creating some awkward situations in the office, since that space was now owned by GE. 1970s In 1974, the firm released the TC800, an intelligent terminal designed to be attached to a mainframe and used in the finance sector. It was followed in 1977 by the TC1800. 1980s Olivetti's first modern personal computer, the M20, featuring a Zilog Z8000 CPU, was released in 1982. The M20 was followed in 1983 by the M24, a clone of the IBM PC using DOS and the Intel 8086 processor (at 8 MHz) instead of the Intel 8088 used by IBM (at 4.77 MHz). The M24 was sold in North America as the AT&T 6300. Olivetti also manufactured the AT&T 6300 Plus, which could run both DOS and Unix. The M24 was also sold in the US as the Xerox 6060. The Olivetti M28 was the firm's first PC to have the Intel 80286 processor. The same year Olivetti produced its M10 laptop computer, an 8085-based workalike of the successful Radio Shack TRS-80 Model 100, which it marketed in Europe. These were the first laptops to sell in million-unit quantities, though the M10 itself only attained sales figures in the tens of thousands and went out of production within two years. In 1985, the company acquired a controlling share in the British computer manufacturer Acorn Computers Ltd; a third partner was Thomson SA. Olivetti sold the Thomson MO6 and Acorn BBC Master Compact under the brand names Olivetti Prodest PC128 and PC128S respectively. In 1987, Olivetti introduced the LSX line of computers, which was based on the Motorola 68k processor family. They could run either MOS or Olivetti's Unix, X/OS. In 1988, Olivetti released the M380/C, part of the Pandora project - an experimental system for multimedia applications. They also released the PC1 Prodest and PC1 HD (XT clones, similar to the Schneider Euro PC). In 1989, the Olivetti M290S was released, featuring an Intel 80286 at 12 MHz, 1 MB of RAM and a 30 MB hard drive. That same year the company presented its 80486-based next generation of LSX workstations, with the Olivetti LSX 5020 (with EISA bus) being shown at CeBIT as the Computing Platform CP486. 1990s In 1990, Olivetti had its own distribution network in New Zealand through Essentially Software Ltd. (owned by Gary McNabb) located at Mt. 
Eden in Auckland and Wellington, where Olivetti M300-100 16 MHz PCs with 80386SX CPUs were sold for NZ$7,395 and used as graphical workstations by design houses running CorelDRAW. The New Zealand distribution stopped in 1991 when Olivetti could not supply its PCs. In 1991, Olivetti introduced the D33, a laptop in a carry case, and the PCS 286S, a PC with VGA monitor and keyboard. Olivetti also sold quasi-portable 8086/8088-based PCs with an integrated keyboard and one or two integrated 3.5" floppy disk drives, running DOS 3.27, an Olivetti OEM version of PC DOS 3.20 with minor improvements, such as the M21 portable (based on the M24) and the M15. Olivetti later produced further notable laptops, including the M111, M211, S20 and D33, and the Philos and Echos series. A notable subnotebook was the Quaderno, about the size of a sheet of A5 paper – it anticipated the netbooks introduced some 15 years later. Olivetti did attempt to recover its position by introducing the Envision in 1995, a full multimedia PC to be used in the living room; this project was a failure. Packard Bell managed to successfully introduce a similar product in the U.S., but only some years later. The company continued to develop personal computers until it sold its PC business in 1997. Models Peripherals PR40, PR2, PR2-e, PR2+, PR2-10: scanner-printers used in the banking sector PG-series and PGL-series: black-and-white digital printers d-Color P-series: color digital printers A3 and A4 series MFPs See also Macchine per scrivere della Olivetti: the list of every model of Olivetti typewriter, and the related article on the Italian Wikipedia Olivetti S.p.A. Olivetti typewriters References Olivetti S.p.A. Olivetti computers Olivetti personal computers
42967104
https://en.wikipedia.org/wiki/OpenMandriva%20Lx
OpenMandriva Lx
OpenMandriva Lx is a Linux distribution forked from Mandriva Linux. It is maintained by the OpenMandriva Association. History Origin of the distribution OpenMandriva Lx is a community Linux distribution. Originally an offering of Mandriva Linux, the OpenMandriva product was created in May 2012, when Mandriva S.A. avoided bankruptcy by abandoning the development of its consumer product to the Mandriva community. The first stable version (OpenMandriva Lx 2013 "Oxygen") was released in late 2013. OpenMandriva Association The OpenMandriva Association was established on December 12, 2012 under 1901 French law, to represent the OpenMandriva Community. It manages free software projects including OpenMandriva Lx. OpenMandriva Lx development environment OpenMandriva Lx's development environment is ABF (Automated Build Farm), which manages source code and compiles it into binary packages. ABF also creates package repositories and ISO images. Versions In late 2013, the first version of OpenMandriva Lx was released. It was based on Mandriva Linux 2011, which was itself an amalgamation of ROSA Linux and Mandriva SA. OpenMandriva Lx 2014 "Phosphorus" was released on 1 May 2014. The release had a very positive review from one of the founders of the initial Mandrake Linux distribution, Gaël Duval. OpenMandriva Lx 2014.2, codenamed "The Scion", a bugfix release for 2014.1, was released on 29 June 2015. During 2015, OpenMandriva released an alpha version of OpenMandriva Lx 2015. Development continued throughout 2015, and in 2016 the version was released as the OpenMandriva Lx 3.0 beta. This new release came with significant changes to the core system — among other things, it was the first desktop Linux distribution that was built completely with the Clang compiler instead of GCC. A stable and final release of OpenMandriva Lx 3.0 was released in August 2016, followed by 3.01 in December 2016 and 3.02 in June 2017. This was followed by OpenMandriva Lx 3.03, which was released in November 2017. After releasing OpenMandriva Lx 3.03, the developers dropped support for the i586 processor architecture in OpenMandriva Lx 4.0. The development of OpenMandriva Lx 4.0 continued for two years after Lx 3.03 was released. The OpenMandriva Lx 4.0 stable release arrived in June 2019. Among many other changes, this release is notable for switching to the DNF package manager for software management, including Mageia's dnfdragora to replace rpmdrake. Version history Development versions OpenMandriva Lx 2013.0 OpenMandriva Lx 2013.0 was released on November 22, 2013 with the codename "Oxygen". This version was the first release of OpenMandriva Lx, and was a fork of Mandriva Linux 2011.0. The version includes KDE 4.11.2. The menu in OpenMandriva Lx 2013.0 was SimpleWelcome, and the version also included the media player ROSA Media Player 1.6, Mozilla Firefox 25.0, LibreOffice 4.1.3 and the Linux kernel 3.11.6. OpenMandriva Lx 4.0 OpenMandriva Lx 4.0 supports the ARM64 (aarch64) and ARMv7 (armv7hnl) architectures. Once RISC-V boards become available, OpenMandriva plans to begin porting the distribution to open-source CPUs. The first alpha version of OpenMandriva Lx 4.0 was released on September 6, 2018. The OpenMandriva Lx 4.0 Major Release Alpha 1 was released on December 25, 2018, the beta was released on February 9, 2019 and the release candidate was released on May 12, 2019. The OpenMandriva Lx 4.0 final release was announced on 16 June 2019. 
With this version, the OpenMandriva Association announced that OpenMandriva Lx 4.0 had switched to RPMv4 and that dnf had replaced urpmi; accordingly, rpmdrake was replaced by Dnfdragora. Screenshots More screenshots OpenMandriva Gallery OpenMandriva Lx 4.0 screenshots See also ROSA Linux Mageia PCLinuxOS References External links OpenMandriva Lx Forum OpenMandriva Lx Wiki OpenMandriva Lx Bugzilla OpenMandriva Matrix Room OpenMandriva on OpenSourceFeed Gallery KDE Mandriva Linux RPM-based Linux distributions X86-64 Linux distributions Linux distributions
63614130
https://en.wikipedia.org/wiki/Wallpaper%20Engine
Wallpaper Engine
Wallpaper Engine is an application for Windows, with a companion app on Android, which allows users to use and create animated and interactive wallpapers, similar to the defunct Windows DreamScene. Wallpapers are shared through the Steam Workshop functionality as user-created downloadable content. It features its own rendering engine and provides a wallpaper editor, allowing for the creation of 2D and 3D wallpapers, including a particle system editor and a fork of JavaScript called SceneScript for additional wallpaper logic. It also supports using video files, audio files, web pages and some 3D applications as wallpapers. History A proposal outlining the general idea of the software was added to Steam Greenlight in December 2015. The application was subsequently released as a paid product on Steam in October 2016 as an early access title. After three years of development, the software left its early access stage in November 2018. In August 2019, Wallpaper Engine was announced to be one of the release titles for Steam China. Despite not being a game, Wallpaper Engine is one of the most-used applications on Steam, appearing in Steam's top 25 most-played titles in July 2019 and its top 10 in November 2021. References External links Wallpaper Engine on Steam Windows software C++ software Windows multimedia software 2016 software
67312457
https://en.wikipedia.org/wiki/Andreas%20Stefik
Andreas Stefik
Andreas Stefik is an associate professor of computer science at the University of Nevada, Las Vegas and the creator of Quorum, a computer programming language designed with features that improve access for people with visual impairments. Stefik conducts research in the areas of software engineering, accessibility, and computer science education. He is an advocate for increasing access to computer science in K–12 education. Education Stefik began his education pursuing a bachelor's degree in music at Central Washington University, but graduated with bachelor's degrees in both mathematics and music. He went on to receive his master's degree and PhD in computer science from Washington State University. Career and research As a computer science graduate student, Stefik became interested in the resources available to people who are blind or have low vision and want to pursue a degree in computer science. He found that no existing language was accessible to blind and visually impaired programmers, and decided to create his own. The work began as a project called Sodbeans; over the course of ten years, he and his wife developed the language Quorum, which provides auditory feedback and is therefore more accessible to people with visual impairments. In 2016, Stefik received the White House Champion of Change award for Computer Science Education for his efforts. Stefik has also created a model for computer science education for blind or visually impaired students that, as of 2016, had been deployed in almost 20 schools. Stefik's research has been supported by many grants, most notably from the National Science Foundation to help build Quorum. Notable work An Empirical Investigation into Programming Language Syntax How do API documentation and static typing affect API usability? An empirical study on the impact of static typing on software maintainability Awards Java Innovation Award, Oracle Corporation, 2011 White House Champion of Change for Computer Science Education, 2016 Code.org Champions of Computer Science, 2018 References External links The Quorum Programming Language Why Aren't Computer Programming Languages Designed Better? (Fast Company, January 3, 2012) American computer scientists Living people Year of birth missing (living people) University of Nevada, Las Vegas faculty Central Washington University alumni Washington State University alumni
43020568
https://en.wikipedia.org/wiki/Tandy%20Graphics%20Adapter
Tandy Graphics Adapter
Tandy Graphics Adapter (TGA, also Tandy graphics) is a computer display standard for the Tandy 1000 series of IBM PC compatibles, which is compatible with the video subsystem of the IBM PCjr but became a standard in its own right. PCjr graphics The Tandy 1000 series began in 1984 as a clone of the IBM PCjr, offering support for existing PCjr software. As a result, its graphics subsystem is largely compatible. The PCjr, released in 1983, has a graphics subsystem built around IBM's Video Gate Array (not to be confused with the later Video Graphics Array) and an MC6845 CRTC, and extends the capabilities of the Color Graphics Adapter (CGA), increasing the number of colors in each screen mode. CGA's 2-color mode can be displayed with four colors, and its 4-color mode can be displayed with all 16 colors. Since the Tandy 1000 was much more successful than the PCjr, their shared hardware capabilities became more associated with the Tandy brand than with IBM. While there is no specific name for the Tandy graphics subsystem (Tandy's documentation calls it the "Video System Logic"), common parlance referred to it as TGA. Where not otherwise stated, information in this article that describes the TGA also applies to the PCjr video subsystem. While EGA would eventually deliver a superset of TGA graphics on IBM compatibles, software written for TGA is not compatible with EGA cards. Hardware design TGA graphics are built into the motherboards of Tandy computers. The PCjr uses a custom monitor with a unique 18-pin plug, but an adapter (with the same DE-9 connector and pinout as IBM's CGA/EGA) can connect it to the IBM Color Display. The Tandy 1000 provides the DE-9 connector directly. The later Tandy 1000 SL and TL models offer an enhanced version of the TGA, still capable of displaying 16 colors but at an improved resolution of 640×200. Output capabilities Tandy Video I / PCjr Tandy 1000 systems before the Tandy 1000 SL, and the PCjr, have this type of video. It offers several CGA-compatible modes and enhanced modes. CGA compatible modes: 320×200 in 4 colors from a 16-color (RGBI) hardware palette. Pixel aspect ratio of 1:1.2. 640×200 in 2 colors from 16. Pixel aspect ratio of 1:2.4. 40×25 with 8×8 pixel font text mode (effective resolution of 320×200) 80×25 with 8×8 pixel font text mode (effective resolution of 640×200) Both text modes can themselves be set to display in monochrome, or in 16 colors. In addition to the CGA modes, it offers: 160×200 with 16 colors (matching the graphical quality of many contemporary 8-bit home computers and game consoles while using the same 16 KB memory size and machine bandwidth as the original CGA modes; it is analogous to, and can partly share graphics assets with, CGA's "composite color" mode while remaining displayable on RGB monitors) 320×200 with 16 colors 640×200 with 4 colors (from 16) Some games detect the Tandy hardware and display enhanced graphics in Tandy mode even when their CGA display mode is selected, while others offer the option to select "Tandy" graphics. Tandy Video II or ETGA Tandy 1000 SL-series, TL-series, and RL-series models have this type of video. It offers the same modes as Tandy Video I, plus one more non-CGA mode: 640×200 with 16 colors. The sketch after this list works through the memory arithmetic behind these modes.
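As a rough illustration of the video-memory requirements implied by the mode listings above, the following Python sketch computes the packed-pixel buffer size for the Tandy Video I / PCjr graphics modes. The helper name and framing are mine, for illustration only, and are not drawn from any Tandy documentation.

```python
def mode_bytes(width, height, colors):
    """Bytes of packed-pixel video RAM for a given mode.

    The color count fixes the bits per pixel: 2 colors -> 1 bpp,
    4 colors -> 2 bpp, 16 colors -> 4 bpp.
    """
    bits_per_pixel = {2: 1, 4: 2, 16: 4}[colors]
    return width * height * bits_per_pixel // 8

# Tandy Video I / PCjr graphics modes from the list above.
for w, h, c in [(320, 200, 4), (640, 200, 2),
                (160, 200, 16), (320, 200, 16), (640, 200, 4)]:
    print(f"{w}x{h}, {c:>2} colors: {mode_bytes(w, h, c) / 1024:.2f} KiB")
# The first three entries fit in a single 16 KB bank; the last two
# (16-color 320x200 and 4-color 640x200) need a 32 KB double bank.
```

This arithmetic is why, as described below, each mode occupies either a 16 KB bank or a concatenated pair of banks.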
Popularity With built-in joystick ports, 16-color graphics and multichannel sound, the Tandy 1000 was considered the best platform for IBM PC-compatible games before the VGA era, and the combination of its graphics and sound became a de facto standard, "Tandy compatible". 28 of 66 games that Computer Gaming World tested in 1989 supported Tandy graphics. Incompatibilities The PCjr video and Tandy 1000 graphics subsystems are not identical. One difference is in the size of the video memory aperture at address 0xB8000. While the PCjr video hardware can use up to 32 KB of RAM for the video buffer, it emulates the CGA precisely by making only 16 KB of this available at address 0xB8000. Like the true CGA, the 16 KB of RAM at 0xB8000 is aliased at address 0xBC000. The Tandy hardware, in contrast, makes the full 32 KB of selected video RAM available at 0xB8000. This difference causes some software written for Tandy graphics not to work correctly on a PCjr, displaying images in 320×200 16-color or 640×200 with periodic black horizontal lines: a "venetian-blinds" effect. Conversely, software for the PCjr that relies on the memory wrap-around at address 0xBC000 may not work correctly on a Tandy 1000. Technical details Shared RAM Unlike every other IBM-designed PC video standard, TGA uses some of the main system RAM as video RAM. The PCjr had 64 KB of built-in RAM on the mainboard, and an additional 64 KB could be installed via a special card that plugs into a dedicated slot on the PCjr mainboard. This 64 KB or 128 KB of base RAM is special in that it is shared with the PCjr video subsystem. TGA video modes use either 16 KB or 32 KB of RAM each. Text modes use 16 KB divided into 4 or 8 pages, for the 80×25 and 40×25 text formats respectively. In graphical modes, the base 128 KB of RAM is divided into eight 16 KB banks. The PCjr can use any bank for video generation in a video mode that uses 16 KB. In a mode that uses 32 KB, it can use any even bank concatenated with the next higher odd bank. The PCjr also can independently map any 16 KB bank of base RAM to address 0xB8000 for CPU access, for CGA compatibility. Apart from address 0xB8000, the CPU can access any bank at any time via its native address in the first 128 KB of the address space. The first bank overlaps the interrupt vector table of the x86 CPU and the data area used by the BIOS, so it is generally not usable for graphics. Using system memory has advantages: it saves the cost of dedicated video RAM, and the dynamic RAM is refreshed by the 6845 CRT controller as long as the video is running, so there is no need for separate DRAM refresh circuitry. In the IBM PC XT, upon which the PCjr is based, DRAM refresh is performed by one channel of the 8237 DMA controller, triggered by one channel of the 8253 programmable timer, while in the PCjr the 8237 is eliminated and the timer channel is repurposed (to work around a complication of other cost-cutting in the keyboard interface). Up to almost 128 KB of RAM can be used for video (if software is mostly in ROM—e.g. on PCjr cartridges—or in RAM above the first 128 KB), and the displayed video banks can be switched instantaneously to implement double-buffering (or triple-buffering, or up to 7-fold buffering in 16 KB video modes) for smooth full-screen animation, something the CGA cannot do. The Tandy 1000 computers do not incorporate the PCjr's cost-cutting measures (most of them have an 8237 DMA controller), but for compatibility with PCjr video, they use the same RAM-sharing scheme. Programmable palette When operating in the CGA video modes which use 1 or 2 bits per pixel, TGA allows remapping of the 2 or 4 palette entries to any of the 16 colors in the CGA gamut via programmable palette control registers; the sketch below models this indirection in software.
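The following minimal Python sketch models the palette indirection just described: pixel values stored in video RAM are small indices, and a handful of palette registers decide which of the 16 CGA colors each index displays. This is a software model for illustration only, not register-level code; the variable names are invented.

```python
# The 16 fixed colors of the CGA (RGBI) gamut, in hardware order.
CGA_COLORS = ["black", "blue", "green", "cyan", "red", "magenta", "brown",
              "light gray", "dark gray", "light blue", "light green",
              "light cyan", "light red", "light magenta", "yellow", "white"]

palette = [0, 3, 5, 15]      # 4 palette registers for a 2-bpp (4-color) mode
pixels = [0, 1, 2, 3, 1]     # raw 2-bit pixel values as stored in video RAM

def displayed(pixels, palette):
    """Resolve raw pixel values through the palette registers."""
    return [CGA_COLORS[palette[p]] for p in pixels]

print(displayed(pixels, palette))
palette[1] = 14                     # one register write ...
print(displayed(pixels, palette))   # ... recolors every '1' pixel to yellow,
                                    # without touching video RAM at all
```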
This remapping allows software to use the CGA modes without being constrained to the three hardwired palettes of the actual CGA. The following improvements in color choice are available in the CGA graphics modes: 320×200 in 4 colors: the three foreground colors can be freely chosen, in addition to the background color, which could already be set on the CGA. 640×200 in 2 colors: the background color can be freely chosen, rather than always being black, in addition to the foreground color, which could already be set on the CGA. The palette mapping logic is always active, even in text modes, so it is possible to cause certain text to change in appearance (appear, disappear, cycle colors, etc.) just by changing the palette, without making any changes to the character attribute bytes in RAM. The PCjr/TGA programmable palette was carried over to the IBM EGA, where it was extended to 6-bit entries for 64 colors, and so on for the expanded palette of VGA. See also Plantronics Colorplus, a graphic board with similar capabilities List of 8-bit computer hardware palettes List of defunct graphics chips and card companies References Computer display standards Graphics cards RadioShack Computer-related introductions in 1984
59732521
https://en.wikipedia.org/wiki/List%20of%20Dutch%20inventions%20and%20innovations
List of Dutch inventions and innovations
The Netherlands and its people have made numerous contributions to the world's civilization in art, science, technology and engineering, economics and finance, cartography and geography, exploration and navigation, law and jurisprudence, thought and philosophy, medicine and agriculture. The following list is composed of objects, (largely) unknown lands, breakthrough ideas/concepts, principles, phenomena, processes, methods, techniques, styles etc., that were discovered or invented (or pioneered) by people from the Netherlands and Dutch-speaking people from the former Southern Netherlands (Zuid-Nederlanders in Dutch). Until the fall of Antwerp (1585), the Dutch and Flemish were generally seen as one people. Inventions and innovations Arts and architecture Movements and styles De Stijl (Neo-Plasticism) (1917) The De Stijl school proposed simplicity and abstraction, both in architecture and painting, by using only straight horizontal and vertical lines and rectangular forms. Furthermore, its formal vocabulary was limited to the primary colours, red, yellow, and blue, and the three primary values, black, white and grey. De Stijl's principal members were the painters Theo van Doesburg (1883–1931), Piet Mondrian (1872–1944), Vilmos Huszár (1884–1960) and Bart van der Leck (1876–1958) and the architects Gerrit Rietveld (1888–1964), Robert van 't Hoff (1888–1979) and J.J.P. Oud (1890–1963). Architecture Brabantine Gothic architecture (14th century) Brabantine Gothic, occasionally called Brabantian Gothic, is a significant variant of Gothic architecture that is typical of the Low Countries. It surfaced in the first half of the 14th century at Saint Rumbold's Cathedral in the City of Mechelen. The Brabantine Gothic style originated with the advent of the Duchy of Brabant and spread across the Burgundian Netherlands. Netherlandish gabled architecture (15th–17th centuries) The Dutch gable was a notable feature of Dutch-Flemish Renaissance (or Northern Mannerist) architecture, which spread to northern Europe from the Low Countries, arriving in Britain during the latter part of the 16th century. Notable castles and buildings, including Frederiksborg Castle, Rosenborg Castle, Kronborg Castle, Børsen, Riga's House of the Blackheads and Gdańsk's Green Gate, were built in the Dutch-Flemish Renaissance style, with sweeping gables, sandstone decorations and copper-covered roofs. Later Dutch gables with flowing curves became absorbed into Baroque architecture. Examples of Dutch-gabled buildings can be found in historic cities across Europe, such as Potsdam (Dutch Quarter), Friedrichstadt, Gdańsk and Gothenburg. The style spread beyond Europe; for example, Barbados is well known for the Dutch gables on its historic buildings. Dutch settlers in South Africa brought with them building styles from the Netherlands: Dutch gables, then adjusted to the Western Cape region, where the style became known as Cape Dutch architecture. In the Americas and Northern Europe, the West End Collegiate Church (New York City, 1892), the Chicago Varnish Company Building (Chicago, 1895), Pont Street Dutch-style buildings (London, 1800s), Helsingør Station (Helsingør, 1891), and Gdańsk University of Technology's Main Building (Gdańsk, 1904) are typical examples of Dutch Renaissance Revival (Neo-Renaissance) architecture of the late 19th century.
Netherlandish Mannerist architecture (Antwerp Mannerism) (16th century) Antwerp Mannerism is the name given to the style of a largely anonymous group of painters from Antwerp in the beginning of the 16th century. The style bore no direct relation to Renaissance or Italian Mannerism, but the name suggests a peculiarity that was a reaction to the classic style of early Netherlandish painting. Antwerp Mannerism may also be used to describe the loosely Mannerist style of architecture developed in Antwerp by about 1540, which was then influential all over Northern Europe. The Green Gate (Brama Zielona) in Gdańsk, Poland, is a building inspired by the Antwerp City Hall. It was built between 1568 and 1571 by Regnier van Amsterdam and Hans Kramer to serve as the formal residence of the Polish monarchs when visiting Gdańsk. Cape Dutch architecture (1650s) Cape Dutch architecture is an architectural style found in the Western Cape of South Africa. The style was prominent in the early days (17th century) of the Cape Colony, and the name derives from the fact that the initial settlers of the Cape were primarily Dutch. The style has roots in medieval Netherlands, Germany, France and Indonesia. Houses in this style have a distinctive and recognisable design, a prominent feature being the grand, ornately rounded gables, reminiscent of features in townhouses of Amsterdam built in the Dutch style. Amsterdam School (Dutch Expressionist architecture) (1910s) The Amsterdam School (Dutch: Amsterdamse School) flourished from 1910 through about 1930 in the Netherlands. The Amsterdam School movement is part of international Expressionist architecture, sometimes linked to German Brick Expressionism. Rietveld Schröder House (De Stijl architecture) (1924) The Rietveld Schröder House or Schröder House (Rietveld Schröderhuis in Dutch) in Utrecht was built in 1924 by Dutch architect Gerrit Rietveld. It became a listed monument in 1976 and a UNESCO World Heritage Site in 2000. The Rietveld Schröder House constitutes, both inside and outside, a radical break with tradition, offering little distinction between interior and exterior space. The rectilinear lines and planes flow from outside to inside, with the same colour palette and surfaces. Inside is a dynamic, changeable open zone rather than a static accumulation of rooms. The house is one of the best known examples of De Stijl architecture and arguably the only true De Stijl building. Van Nelle Factory (1925–1931) The Van Nelle factory was built between 1925 and 1931. Its most striking feature is its huge glass façades. The factory was designed on the premise that a modern, transparent and healthy working environment in green surroundings would be good both for production and for workers' welfare. The factory had a huge impact on the development of modern architecture in Europe and elsewhere. The Van Nelle Factory is a Dutch national monument (Rijksmonument) and since 2014 has had the status of UNESCO World Heritage Site. The Justification of Outstanding Universal Value was presented in 2013 to the UNESCO World Heritage Committee. Super Dutch (1990–present) An architectural movement started by a generation of new architects during the 1990s; among them were OMA, MVRDV, UNStudio, Mecanoo, Meyer en Van Schooten and many more. Their buildings became internationally known for a new and refreshing style, after which Super Dutch architecture spread across the globe.
Furniture Dutch door (17th century) The Dutch door (also known as stable door or half door) is a type of door divided horizontally in such a fashion that the bottom half may remain shut while the top half opens. The initial purpose of this door was to keep animals out of farmhouses while keeping children inside, yet allowing light and air to filter through the open top. This type of door was common in the Netherlands in the seventeenth century and appears in Dutch paintings of the period. They were commonly found in Dutch areas of New York and New Jersey (before the American Revolution) and in South Africa. Red and Blue Chair (1917) The Red and Blue Chair was designed in 1917 by Gerrit Rietveld. It represents one of the first explorations by the De Stijl art movement in three dimensions. It features several Rietveld joints. Zig-Zag Chair (1934) The Zig-Zag Chair was designed by Rietveld in 1934. It is a minimalist design without legs, made of four flat wooden boards merged in a Z-shape using dovetail joints. It was designed for the Rietveld Schröder House in Utrecht. Visual arts Foundations of modern oil painting (15th century) Although oil paint was first used for Buddhist paintings by Indian and Chinese painters sometime between the fifth and tenth centuries, it did not gain prominence until the 15th century. Its practice may have migrated westward during the Middle Ages. Oil paint eventually became the principal medium used for creating artworks as its advantages became widely known. The transition began with Early Netherlandish painting in northern Europe, and by the height of the Renaissance oil painting techniques had almost completely replaced tempera paints in the majority of Europe. Early Netherlandish painting (Jan van Eyck in particular) in the 15th century was the first to make oil the default painting medium, and to explore the use of layers and glazes, followed by the rest of Northern Europe, and only then Italy. Glaze (painting technique) (15th century) Glazing is a technique employed by painters since the invention of modern oil painting. Early Netherlandish painters in the 15th century were the first to make oil the usual painting medium and to explore the use of layers and glazes, followed by the rest of Northern Europe, and only then Italy. Proto-Realism (15th–17th centuries) Two aspects of realism were rooted in at least two centuries of Dutch tradition: conspicuous textural imitation and a penchant for ordinary and exaggeratedly comic scenes. Two hundred years before the rise of literary realism, Dutch painters had already made an art of the everyday – pictures that served as a compelling model for the later novelists. By the mid-1800s, 17th-century Dutch painting figured virtually everywhere in the British and French fiction we esteem today as the vanguard of realism. Proto-Surrealism (1470s–1510s) Hieronymus Bosch is considered one of the prime examples of pre-Surrealism, and the Surrealists drew heavily on his visions. In the 20th century, Bosch's paintings (e.g. The Garden of Earthly Delights, The Haywain, The Temptation of St. Anthony and The Seven Deadly Sins and the Four Last Things) were cited by the Surrealists as precursors to their own work.
Modern still-life painting (16th–17th century) Still-life painting as an independent genre or specialty first flourished in the Netherlands in the last quarter of the 16th century, and the English term derives from stilleven: still life, which is a calque, while Romance languages (as well as Greek, Polish, Russian and Turkish) tend to use terms meaning dead nature. Naturalistic landscape painting (16th–17th century) The term "landscape" derives from the Dutch word landschap (and the German Landschaft), which originally meant "region, tract of land" but acquired the artistic connotation "a picture depicting scenery on land" in the early 16th century. After the fall of the Roman Empire, the tradition of depicting pure landscapes declined, and the landscape was seen only as a setting for religious and figural scenes. This tradition continued until the 16th century, when artists began to view the landscape as a subject in its own right. The Dutch Golden Age painting of the 17th century saw the dramatic growth of landscape painting, in which many artists specialized, and the development of extremely subtle realist techniques for depicting light and weather. Genre painting (16th–17th century) The Flemish Renaissance painter Pieter Brueghel the Elder chose peasants and their activities as the subject of many paintings, and genre painting flourished in Northern Europe in his wake. Adriaen van Ostade, David Teniers, Aelbert Cuyp, Jan Steen, Johannes Vermeer and Pieter de Hooch were among many painters specializing in genre subjects in the Netherlands during the 17th century. The generally small scale of these artists' paintings was appropriate for their display in the homes of middle-class purchasers. Marine painting (17th century) Marine painting began in keeping with the medieval Christian art tradition. Such works portrayed the sea only from a bird's eye view, and everything, even the waves, was organized and symmetrical. The viewpoint, symmetry and overall order of these early paintings underlined the organization of the heavenly cosmos from which the earth was viewed. Later Dutch artists such as Hendrick Cornelisz Vroom, Cornelius Claesz, Abraham Storck, Jan Porcellis, Simon de Vlieger, Willem van de Velde the Elder, Willem van de Velde the Younger and Ludolf Bakhuizen developed new methods for painting, often from a horizontal point of view, with a lower horizon and more focus on realism than symmetry. Vanitas (17th century) The term vanitas is most often associated with still life paintings that were popular in seventeenth-century Dutch art, produced by artists such as Pieter Claesz. Common vanitas symbols included skulls (a reminder of the certainty of death); rotten fruit (decay); bubbles (the brevity of life and suddenness of death); smoke, watches and hourglasses (the brevity of life); and musical instruments (the brevity and ephemeral nature of life). Fruit, flowers and butterflies can be interpreted in the same way, while a peeled lemon, like the typical accompanying seafood, was, like life, visually attractive but with a bitter flavor. Civil group portraiture (17th century) Group portraits were produced in great numbers during the Baroque period, particularly in the Netherlands. Unlike in the rest of Europe, Dutch artists received no commissions from the Calvinist Church, which had forbidden such images, or from the aristocracy, which was virtually non-existent. Instead, commissions came from civic and business associations.
Dutch painter Frans Hals used fluid brush strokes of vivid color to enliven his group portraits, including those of the civil guard to which he belonged. Rembrandt benefitted greatly from such commissions and from the general appreciation of art by bourgeois clients, who supported portraiture as well as still-life and landscape painting. Notably, the world's first significant art and dealer markets flourished in Holland at that time. Tronie (17th century) In the 17th century, Dutch painters (especially Frans Hals, Rembrandt, Jan Lievens and Johannes Vermeer) began to create uncommissioned paintings called tronies that focused on the features and/or expressions of people who were not intended to be identifiable. They were conceived more for art's sake than to satisfy conventions. The tronie was a distinctive type of painting, combining elements of the portrait, history, and genre painting. It was usually a half-length image of a single figure which concentrated on capturing an unusual mood or expression. The actual identity of the model was not supposed to be important, but they might represent a historical figure and be in exotic or historic costume. In contrast to portraits, tronies were painted for the open market. They differ from figurative paintings and religious figures in that they are not restricted to a moral or narrative context. The tronie is, rather, an exploration of the spectrum of human physiognomy and expression, and a reflection of conceptions of character belonging to the pre-history of psychology. Rembrandt lighting (17th century) Rembrandt lighting is a lighting technique that is used in studio portrait photography. It can be achieved using one light and a reflector, or two lights, and is popular because it is capable of producing images which appear both natural and compelling with a minimum of equipment. Rembrandt lighting is characterized by an illuminated triangle under the eye of the subject, on the less illuminated side of the face. It is named for the Dutch painter Rembrandt, who often used this type of lighting in his portrait paintings. Mezzotint (1642) The first known mezzotint was made in Amsterdam in 1642 by the Utrecht-born German artist Ludwig von Siegen. He lived in Amsterdam from 1641 to about 1644, when he was supposedly influenced by Rembrandt. Aquatint (1650s) The painter and printmaker Jan van de Velde is often credited as the inventor of the aquatint technique, in Amsterdam around 1650. Pronkstilleven (1650s) Pronkstilleven (pronk still life or ostentatious still life) is a type of banquet piece whose distinguishing feature is a quality of ostentation and splendor. These still lifes usually depict one or more especially precious objects. Although the term is a post-17th-century invention, this type is characteristic of the second half of the seventeenth century. It was developed in the 1640s in Antwerp, from where it spread quickly to the Dutch Republic. Flemish artists such as Frans Snyders and Adriaen van Utrecht started to paint still lifes that emphasized abundance by depicting a diversity of objects, fruits, flowers and dead game, often together with living people and animals. The style was soon adopted by artists from the Dutch Republic. A leading Dutch representative was Jan Davidsz. de Heem, who spent a long period of his active career in Antwerp and was one of the founders of the style in Holland. Other leading representatives in the Dutch Republic were Abraham van Beyeren, Willem Claeszoon Heda and Willem Kalf.
Proto-Expressionism (1880s) Vincent van Gogh's work is most often associated with Post-Impressionism, but his innovative style had a vast influence on 20th-century art and established what would later be known as Expressionism, also greatly influencing Fauvism and early abstractionism. His impact on the German and Austrian Expressionists was especially profound. "Van Gogh was father to us all," the German Expressionist painter Max Pechstein proclaimed in 1901, when Van Gogh's vibrant oils were first shown in Germany and triggered an artistic reformation, a decade after his suicide in obscurity in France. In his final letter to Theo, Van Gogh stated that, as he had no children, he viewed his paintings as his progeny. Reflecting on this, the British art historian Simon Schama concluded that he "did have a child of course, Expressionism, and many, many heirs." M. C. Escher's graphic arts (1920s–1960s) Dutch graphic artist Maurits Cornelis Escher, usually referred to as M. C. Escher, is known for his often mathematically inspired woodcuts, lithographs, and mezzotints. These feature impossible constructions, explorations of infinity, architecture and tessellations. His special way of thinking and rich graphic work have had a continuous influence in science and art, as well as permeating popular culture. His ideas have been used in fields as diverse as psychology, philosophy, logic, crystallography and topology. His art is based on mathematical principles like tessellations, spherical geometry, the Möbius strip, unusual perspectives, visual paradoxes and illusions, different kinds of symmetries and impossible objects. Gödel, Escher, Bach by Douglas Hofstadter discusses the ideas of self-reference and strange loops, drawing on a wide range of artistic and scientific work, including Escher's art and the music of J. S. Bach, to illustrate ideas behind Gödel's incompleteness theorems. Miffy (Nijntje) (1955) Miffy (Nijntje) is a small female rabbit in a series of picture books drawn and written by Dutch artist Dick Bruna. Music Franco-Flemish School (Netherlandish School) (15th–16th century) In music, the Franco-Flemish School or, more precisely, the Netherlandish school refers to the style of polyphonic vocal music composition in the Burgundian Netherlands in the 15th and early 16th centuries, and to the composers who wrote it. Venetian School (Venetian polychoral style) (16th century) The Venetian School of polychoral music was founded by the Netherlandish composer Adrian Willaert. Hardcore (electronic dance music genre) (1990s) Hardcore or hardcore techno is a subgenre of electronic dance music that originated in Europe from the emergent rave scene of the 1990s. It first took shape in Rotterdam in the Netherlands, derived from techno. Hardstyle (electronic dance music genre) (1990s–2000s) Hardstyle is an electronic dance genre mixing influences from hard techno and hardcore. Hardstyle was influenced by gabber. It has its origins in the Netherlands, where artists like DJ Zany, Lady Dana, DJ Isaac, DJ Pavo, DJ Luna and The Prophet, who produced hardcore, started experimenting while playing their hardcore records. Agriculture Holstein Friesian cattle (2nd century BC) Holsteins or Holstein-Friesians are a breed of cattle known today as the world's highest-production dairy animals. Originating in Europe, Holstein-Friesians were bred in the two northern provinces of North Holland and Friesland, and in Schleswig-Holstein in what became Germany. The animals were the regional cattle of the Frisians and the Saxons.
The origins of the breed can be traced to the black cows and white cows of the Batavians and Frisians – migrant tribes who settled the coastal Rhine region more than two thousand years ago. Brussels sprout (13th century) Forerunners to modern Brussels sprouts were likely cultivated in ancient Rome. Brussels sprouts as we now know them were grown possibly as early as the 13th century in the Low Countries (and may have originated in Brussels). The first written reference dates to 1587. During the 16th century, they enjoyed a popularity in the Southern Netherlands that eventually spread throughout the cooler parts of Northern Europe. Orange-coloured carrot (16th century) Historically, carrots were not always orange. They were black, purple, white, brown, red and yellow – probably orange too, but this was not the dominant colour. Orange-coloured carrots appeared in the Netherlands in the 16th century, where farmers in Hoorn are said to have bred the colour by cross-breeding pale yellow with red carrots. It is more likely that Dutch horticulturists actually found an orange-rooted mutant variety and then worked on its development through selective breeding to make the plant consistent. Through successive hybridisation the orange colour intensified, and this sweet, orange carrot was developed into the dominant variety grown across the world. Belle de Boskoop (apple) (1856) Belle de Boskoop is an apple cultivar which, as its name suggests, originated in Boskoop, where it began as a chance seedling in 1856. There are many variants: Boskoop red, yellow or green. This rustic apple is firm, tart and fragrant. Greenish-gray tinged with red, the apple stands up well to cooking. Generally, Boskoop varieties are very high in acid content and can contain more than four times the vitamin C of 'Granny Smith' or 'Golden Delicious'. Karmijn de Sonnaville (apple) (1949) Karmijn de Sonnaville is a variety of apple bred by Piet de Sonnaville, working in Wageningen in 1949. It is a cross of Cox's Orange Pippin and Jonathan, and was first grown commercially beginning in 1971. It is high both in sugars (including some sucrose) and acidity. It is a triploid, and hence needs good pollination, and can be difficult to grow. It also suffers from fruit russet, which can be severe. In Manhart's book Apples for the 21st Century, Karmijn de Sonnaville is tipped as a possible success for the future. Karmijn de Sonnaville is not widely grown in large quantities, but in Ireland, at The Apple Farm, it is grown for fresh sale and juice-making, for which the variety is well suited. Elstar (apple) (1950s) Elstar is an apple cultivar that was first developed in the Netherlands in the 1950s by crossing Golden Delicious and Ingrid Marie apples. It quickly became popular, especially in Europe, and was first introduced to America in 1972. It remains popular in Continental Europe. The Elstar is a medium-sized apple whose skin is mostly red with yellow showing. The flesh is white, and has a soft, crispy texture. It may be used for cooking and is especially good for making apple sauce. In general, however, it is used in desserts due to its sweet flavour. Groasis Waterboxx (2010) The Groasis Waterboxx is a device designed to help grow trees in dry areas. It was developed by former flower exporter Pieter Hoff, and won Popular Science's "Green Tech Best of What's New" Innovation of the Year award for 2010.
Cartography and geography Method for determining longitude using a clock (1530) The Dutch-Frisian geographer Gemma Frisius was the first to propose the use of a chronometer to determine longitude, in 1530. In his book On the Principles of Astronomy and Cosmography (1530), Frisius explains for the first time how to use a very accurate clock to determine longitude. The problem was that in Frisius' day, no clock was sufficiently precise to use his method. In 1761, the British clockmaker John Harrison constructed the first marine chronometer, which finally made the method proposed by Frisius practical. Triangulation and the modern systematic use of triangulation networks (1533 and 1615) Triangulation first emerged as a map-making method in the mid sixteenth century, when the Dutch-Frisian mathematician Gemma Frisius set out the idea in his Libellus de locorum describendorum ratione (Booklet concerning a way of describing places). The Dutch cartographer Jacob van Deventer was among the first to make systematic use of triangulation, the technique whose theory was described by Gemma Frisius in his 1533 book. The modern systematic use of triangulation networks stems from the work of the Dutch mathematician Willebrord Snell (born Willebrord Snel van Royen), who in 1615 surveyed the distance from Alkmaar to Bergen op Zoom, approximately 70 miles (110 kilometres), using a chain of quadrangles containing 33 triangles in all – a feat celebrated in the title of his book Eratosthenes Batavus (The Dutch Eratosthenes), published in 1617. Mercator projection (1569) The Mercator projection is a cylindrical map projection presented by the Flemish geographer and cartographer Gerardus Mercator in 1569. It became the standard map projection for nautical purposes because of its ability to represent lines of constant course, known as rhumb lines or loxodromes, as straight segments which conserve the angles with the meridians; a short sketch of the projection follows.
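As a compact illustration of how the projection works, here is a minimal Python sketch of the standard spherical (forward) Mercator formulas; the sample coordinates and the Earth-radius value are illustrative assumptions, not anything taken from Mercator's own construction.

```python
import math

def mercator(lat_deg, lon_deg, R=6371.0):
    """Forward spherical Mercator: geographic coordinates to map x, y (km).

    Meridians map to equally spaced vertical lines, and rhumb lines
    (loxodromes) become straight. y diverges toward the poles, so the
    latitude must stay well below +/- 90 degrees.
    """
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = R * lon
    y = R * math.log(math.tan(math.pi / 4 + lat / 2))
    return x, y

# Amsterdam (approx. 52.37 N, 4.90 E) on a sphere of Earth's mean radius.
print(mercator(52.37, 4.90))
```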
First modern world atlas (1570) The Flemish geographer and cartographer Abraham Ortelius is generally recognized as the creator of the world's first modern atlas, the Theatrum Orbis Terrarum (Theatre of the World). Ortelius's Theatrum Orbis Terrarum is considered the first true atlas in the modern sense: a collection of uniform map sheets and sustaining text bound to form a book, for which copper printing plates were specifically engraved. It is sometimes referred to as the summary of sixteenth-century cartography. First printed atlas of nautical charts (1584) The first printed atlas of nautical charts (De Spieghel der Zeevaerdt, or The Mirror of Navigation / The Mariner's Mirror) was produced by Lucas Janszoon Waghenaer in Leiden. This atlas was the first attempt to systematically codify nautical maps. The chart-book combined an atlas of nautical charts and sailing directions with instructions for navigation on the western and north-western coastal waters of Europe. It was the first of its kind in the history of maritime cartography, and was an immediate success. The English translation of Waghenaer's work was published in 1588 and became so popular that any volume of sea charts soon became known as a "waggoner", the Anglicized form of Waghenaer's surname. Concept of atlas (1595) Gerardus Mercator was the first to coin the word atlas to describe a bound collection of maps, through his own collection entitled Atlas sive Cosmographicae meditationes de fabrica mvndi et fabricati figvra. He coined this name after the Greek god who held up the sky, later changed to holding up the Earth. First systematic charting of the far southern skies (southern constellations) (1595–97) The Dutch Republic's explorers and cartographers, such as Pieter Dirkszoon Keyser, Frederick de Houtman, Petrus Plancius and Jodocus Hondius, pioneered the first systematic charting and mapping of the largely unknown southern-hemisphere skies in the late 16th century. The constellations around the South Pole were not observable from north of the equator by Babylonians, Greeks, Chinese or Arabs. The modern constellations in this region were defined during the Age of Exploration, notably by the Dutch navigators Pieter Dirkszoon Keyser and Frederick de Houtman at the end of the sixteenth century. These twelve Dutch-created southern constellations represented the flora and fauna of the East Indies and Madagascar. They were depicted by Johann Bayer in his star atlas Uranometria of 1603. Several more were created by Nicolas Louis de Lacaille in his star catalogue, published in 1756. By the end of the Ming dynasty, Xu Guangqi introduced 23 asterisms of the southern sky based on knowledge of western star charts. These asterisms have since been incorporated into the traditional Chinese star maps. Among the IAU's 88 modern constellations, there are 15 Dutch-created constellations (including Apus, Camelopardalis, Chamaeleon, Columba, Dorado, Grus, Hydrus, Indus, Monoceros, Musca, Pavo, Phoenix, Triangulum Australe, Tucana and Volans). Continental drift hypothesis (1596) The speculation that continents might have 'drifted' was first put forward by Abraham Ortelius in 1596. The concept was independently and more fully developed by Alfred Wegener in 1912. Because Wegener's publications were widely available in German and English and because he adduced geological support for the idea, he is credited by most geologists as the first to recognize the possibility of continental drift. During the 1960s, geophysical and geological evidence for seafloor spreading at mid-oceanic ridges established continental drift as the standard theory of continental origin, driven by an ongoing global mechanism. Chemicals and materials Bow dye (1630) While making a coloured liquid for a thermometer, Cornelis Drebbel dropped a flask of aqua regia on a tin window sill and discovered that stannous chloride makes the color of carmine much brighter and more durable. Though Drebbel himself never made much from his work, his daughters Anna and Catharina and his sons-in-law Abraham and Johannes Sibertus Kuffler set up a successful dye works. One was set up in 1643 in Bow, London, and the resulting color was called bow dye. Dyneema (1979) The Dutch chemical company DSM invented and patented Dyneema in 1979. Dyneema fibres have been in commercial production since 1990 at DSM's plant in Heerlen. The fibers are manufactured by means of a gel-spinning process that combines extreme strength with incredible softness. Dyneema fibre, based on ultra-high-molecular-weight polyethylene (UHMWPE), is used in many applications in markets such as life protection, shipping, fishing, offshore, sailing, medical and textiles. Communication and multimedia Compact cassette (1962) In 1962, Philips invented the compact audio cassette medium for audio storage, introducing it in Europe in August 1963 (at the Berlin Radio Show) and in the United States (under the Norelco brand) in November 1964, with the trademark name Compact Cassette. Laserdisc (1969) Laserdisc technology, using a transparent disc, was invented by David Paul Gregg in 1958 (and patented in 1961 and 1990).
By 1969, Philips had developed a videodisc in reflective mode, which has great advantages over the transparent mode. MCA and Philips decided to join forces, and first publicly demonstrated the videodisc in 1972. Laserdisc entered the market in Atlanta on 15 December 1978, two years after the VHS VCR and four years before the CD, which is based on Laserdisc technology. Philips produced the players and MCA made the discs. Compact disc (1979) The compact disc was jointly developed by Philips (Joop Sinjou) and Sony (Toshitada Doi). In the early 1970s, Philips' researchers started experiments with "audio-only" optical discs, and at the end of the 1970s, Philips, Sony, and other companies presented prototypes of digital audio discs. Bluetooth (1990s) Bluetooth, a low-energy, peer-to-peer wireless technology, was originally developed by the Dutch electrical engineer Jaap Haartsen and the Swedish engineer Sven Mattisson in the 1990s, working at Ericsson in Lund, Sweden. It became a global standard for short-distance wireless connections. Wi-Fi (1990s) In 1991, NCR Corporation/AT&T Corporation invented the precursor to 802.11 in Nieuwegein. The Dutch electrical engineer Vic Hayes chaired the IEEE 802.11 committee for 10 years; the committee was set up in 1990 to establish a wireless networking standard. He has been called the father of Wi-Fi (the brand name for products using IEEE 802.11 standards) for his work on the IEEE 802.11 (802.11a and 802.11b) standards in 1997. DVD (1995) The DVD optical disc storage format was invented and developed by Philips and Sony in 1995. Ambilight (2002) Ambilight, short for "ambient lighting", is a lighting system for televisions developed by Philips in 2002. Blu-ray (2006) Philips and Sony launched the Blu-ray video recording/playback standard in 2006. Computer science and information technology Dijkstra's algorithm (1956) Dijkstra's algorithm, conceived by the Dutch computer scientist Edsger Dijkstra in 1956 and published in 1959, is a graph search algorithm that solves the single-source shortest path problem for a graph with non-negative edge path costs, producing a shortest-path tree. Dijkstra's algorithm is so general that it not only finds the shortest path from a chosen source to a given destination; it finds the shortest paths from the source to all destinations. The algorithm is often used in routing and as a subroutine in other graph algorithms, and is considered one of the most popular algorithms in computer science. It is also widely used in the fields of artificial intelligence, operations research, network routing, network analysis, and transportation engineering. A minimal sketch of the algorithm follows.
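To make the description concrete, here is a minimal Python sketch of Dijkstra's algorithm using a binary heap; the graph encoding and node names are illustrative choices, not part of Dijkstra's original formulation.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for a graph with non-negative edge weights.

    graph: dict mapping each node to a list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source to every reachable node.
    """
    dist = {source: 0}
    visited = set()
    heap = [(0, source)]              # priority queue of (distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if u in visited:
            continue                  # stale entry; a shorter path was found
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Example: shortest distances from 'a' in a small directed graph.
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 6)], "c": [("d", 3)]}
print(dijkstra(g, "a"))               # {'a': 0, 'b': 1, 'c': 3, 'd': 6}
```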
Foundations of distributed computing (1960s) Through his fundamental contributions, Edsger Dijkstra helped shape the field of computer science. His groundbreaking contributions ranged from the engineering side of computer science to the theoretical one and covered several areas including compiler construction, operating systems, distributed systems, sequential and concurrent programming, software engineering, and graph algorithms. Many of his papers, often just a few pages long, are the source of whole new research areas. Several concepts that are now completely standard in computer science were first identified by Dijkstra and/or bear names coined by him. Edsger Dijkstra's foundational work on concurrency, semaphores, mutual exclusion, deadlock, finding shortest paths in graphs, fault-tolerance and self-stabilization, among many other contributions, comprises many of the pillars upon which the field of distributed computing is built. The Edsger W. Dijkstra Prize in Distributed Computing (sponsored jointly by the ACM Symposium on Principles of Distributed Computing and the EATCS International Symposium on Distributed Computing) is given for outstanding papers on the principles of distributed computing, whose significance and impact on the theory and/or practice of distributed computing has been evident for at least a decade. Foundations of concurrent programming (1960s) The academic study of concurrent programming (concurrent algorithms in particular) started in the 1960s, with Dijkstra (1965) credited as the first paper in this field, identifying and solving mutual exclusion. A pioneer in the field of concurrent computing, Per Brinch Hansen considers Dijkstra's Cooperating Sequential Processes (1965) to be the first classic paper in concurrent programming. As Brinch Hansen notes: 'Here Dijkstra lays the conceptual foundation for abstract concurrent programming.' Foundations of software engineering (1960s) Computer programming in the 1950s to 1960s was not recognized as an academic discipline, and unlike physics there were no theoretical concepts or coding systems. Dijkstra was one of the moving forces behind the acceptance of computer programming as a scientific discipline. In 1968, computer programming was in a state of crisis. Dijkstra was one of a small group of academics and industrial programmers who advocated a new programming style to improve the quality of programs. Dijkstra coined the phrase "structured programming" and during the 1970s this became the new programming orthodoxy. As Bertrand Meyer remarked: "The revolution in views of programming started by Dijkstra's iconoclasm led to a movement known as structured programming, which advocated a systematic, rational approach to program construction. Structured programming is the basis for all that has been done since in programming methodology, including object-oriented programming." Dijkstra's ideas about structured programming helped lay the foundations for the birth and development of the professional discipline of software engineering, enabling programmers to organize and manage increasingly complex software projects. Shunting-yard algorithm (1960) In computer science, the shunting-yard algorithm is a method for parsing mathematical expressions specified in infix notation. It can be used to produce output in Reverse Polish notation (RPN) or as an abstract syntax tree (AST). The algorithm was invented by Edsger Dijkstra and named the "shunting yard" algorithm because its operation resembles that of a railroad shunting yard. Dijkstra first described the shunting-yard algorithm in a Mathematisch Centrum report; a small sketch of it follows.
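Below is a minimal Python rendering of the shunting-yard idea, handling only left-associative binary operators and parentheses; the token set and precedence table are illustrative, not a full treatment of the algorithm.

```python
def shunting_yard(tokens):
    """Convert an infix token list to Reverse Polish notation (RPN)."""
    prec = {"+": 1, "-": 1, "*": 2, "/": 2}
    output, ops = [], []
    for tok in tokens:
        if tok in prec:
            # Pop operators of greater or equal precedence (left-associative).
            while ops and ops[-1] in prec and prec[ops[-1]] >= prec[tok]:
                output.append(ops.pop())
            ops.append(tok)
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":
                output.append(ops.pop())
            ops.pop()                      # discard the "("
        else:
            output.append(tok)             # operand goes straight to output
    while ops:
        output.append(ops.pop())
    return output

print(shunting_yard(["3", "+", "4", "*", "(", "2", "-", "1", ")"]))
# ['3', '4', '2', '1', '-', '*', '+']
```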
Schoonschip (early computer algebra system) (1963) In 1963/64, during an extended stay at SLAC, the Dutch theoretical physicist Martinus Veltman designed the computer program Schoonschip for the symbolic manipulation of mathematical equations; it is now considered the very first computer algebra system. Mutual exclusion (mutex) (1965) In computer science, mutual exclusion refers to the requirement of ensuring that no two concurrent processes are in their critical section at the same time; it is a basic requirement in concurrency control, to prevent race conditions. The requirement of mutual exclusion was first identified and solved by Edsger W. Dijkstra in his seminal 1965 paper titled Solution of a problem in concurrent programming control, and is credited as the first topic in the study of concurrent algorithms. Semaphore (programming) (1965) The semaphore concept was invented by Dijkstra in 1965, and the concept has found widespread use in a variety of operating systems. Sleeping barber problem (1965) In computer science, the sleeping barber problem is a classic inter-process communication and synchronization problem between multiple operating system processes. The problem is analogous to that of keeping a barber working when there are customers, resting when there are none, and doing so in an orderly manner. The sleeping barber problem was introduced by Edsger Dijkstra in 1965. Banker's algorithm (deadlock prevention algorithm) (1965) The Banker's algorithm is a resource allocation and deadlock avoidance algorithm developed by Edsger Dijkstra that tests for safety by simulating the allocation of predetermined maximum possible amounts of all resources, and then makes an "s-state" check to test for possible deadlock conditions for all other pending activities before deciding whether allocation should be allowed to continue. The algorithm was developed in the design process for the THE multiprogramming system and originally described (in Dutch) in EWD108. The name is by analogy with the way that bankers account for liquidity constraints. Dining philosophers problem (1965) In computer science, the dining philosophers problem is an example problem often used in concurrent algorithm design to illustrate synchronization issues and techniques for resolving them. It was originally formulated in 1965 by Edsger Dijkstra as a student exam exercise, presented in terms of computers competing for access to tape drive peripherals. Soon after, Tony Hoare gave the problem its present formulation. Dekker's algorithm (1965) Dekker's algorithm is the first known correct solution to the mutual exclusion problem in concurrent programming. Dijkstra attributed the solution to the Dutch mathematician Theodorus Dekker in his manuscript on cooperating sequential processes. It allows two threads to share a single-use resource without conflict, using only shared memory for communication. It is also the first published software-only, two-process mutual exclusion algorithm; a didactic sketch follows.
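The following Python sketch renders Dekker's two-process scheme purely for illustration. It assumes a sequentially consistent memory model; on real hardware the flag and turn variables would need memory barriers, and in CPython the global interpreter lock masks most of the hazards the algorithm guards against. All names are invented for the example.

```python
import threading

wants = [False, False]   # wants[i]: process i wants to enter its critical section
turn = 0                 # whose turn it is to insist when both want in
counter = 0              # shared state that the critical section protects

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(50_000):
        wants[me] = True
        while wants[other]:              # the other process also wants in
            if turn == other:
                wants[me] = False        # back off while it is their turn
                while turn == other:
                    pass                 # busy-wait
                wants[me] = True
        counter += 1                     # critical section
        turn = other                     # hand the turn over on exit
        wants[me] = False

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 100000: no increment was lost
```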
THE multiprogramming system (1968) The THE multiprogramming system was a computer operating system designed by a team led by Edsger W. Dijkstra, described in monographs in 1965–66 and published in 1968. Van Wijngaarden grammar (1968) A Van Wijngaarden grammar (also vW-grammar or W-grammar) is a two-level grammar that provides a technique to define potentially infinite context-free grammars in a finite number of rules. The formalism was invented by Adriaan van Wijngaarden to rigorously define some syntactic restrictions that previously had to be formulated in natural language, despite their formal content. Typical applications are the treatment of gender and number in natural-language syntax and the well-definedness of identifiers in programming languages. The technique was used and developed in the definition of the programming language ALGOL 68. It is an example of the larger class of affix grammars. Structured programming (1968) In 1968, computer programming was in a state of crisis. Dijkstra was one of a small group of academics and industrial programmers who advocated a new programming style to improve the quality of programs. Dijkstra coined the phrase "structured programming" and during the 1970s this became the new programming orthodoxy. Structured programming is often regarded as "goto-less programming". But as Bertrand Meyer notes, "As the first book on the topic [Structured Programming by Dijkstra, Dahl, and Hoare] shows, structured programming is about much more than control structures and the goto. Its principal message is that programming should be considered a scientific discipline based on mathematical rigor." As a programming paradigm, structured programming – especially in the 1970s and 1980s – significantly influenced the birth of many modern programming languages such as Pascal, C, Modula-2, and Ada. Fortran 77, which incorporates the concepts of structured programming, was released in 1978. The C++ language is a considerably extended and enhanced version of the popular structured programming language C (see also: list of C-based programming languages). Since C++ was developed from a more traditional structured language, it is a 'hybrid language', rather than a pure object-oriented programming language. EPROM (1971) An EPROM, or erasable programmable read-only memory, is a type of memory chip that retains its data when its power supply is switched off. Development of the EPROM memory cell started with the investigation of faulty integrated circuits in which the gate connections of transistors had broken. Stored charge on these isolated gates changed their properties. The EPROM was invented by the Amsterdam-born Israeli electrical engineer Dov Frohman in 1971, who was awarded US patent 3660819 in 1972. Self-stabilization (1974) Self-stabilization is a concept of fault-tolerance in distributed computing. A distributed system that is self-stabilizing will end up in a correct state no matter what state it is initialized with, and that correct state is reached after a finite number of execution steps. Many years after Edsger Dijkstra's seminal 1974 paper, this concept remains important, as it presents an important foundation for self-managing computer systems and fault-tolerant systems. Self-stabilization became its own area of study in distributed systems research, and Dijkstra set the stage for the next generation of computer scientists such as Leslie Lamport, Nancy Lynch, and Shlomi Dolev. As a result, Dijkstra's paper received the 2002 ACM PODC Influential-Paper Award (renamed the Dijkstra Prize, or Edsger W. Dijkstra Prize in Distributed Computing, in 2003). Predicate transformer semantics (1975) Predicate transformer semantics were introduced by Dijkstra in his seminal paper "Guarded commands, nondeterminacy and formal derivation of programs". Guarded Command Language (1975) The Guarded Command Language (GCL) is a language defined by Edsger Dijkstra for predicate transformer semantics. It combines programming concepts in a compact way, before the program is written in some practical programming language; a small worked weakest-precondition example follows.
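As a small worked instance of predicate transformer semantics (using the standard weakest-precondition assignment rule, stated here from general knowledge rather than quoted from Dijkstra's paper): for an assignment x := E and postcondition R, wp(x := E, R) is R with E substituted for x. For example:

```latex
\begin{align*}
wp(x := x + 1,\ x > 0) &= (x > 0)[x \mapsto x + 1] \\
                       &= (x + 1 > 0) \\
                       &\equiv (x \ge 0) \qquad \text{for integer } x.
\end{align*}
```

That is, executing x := x + 1 is guaranteed to establish x > 0 exactly when x ≥ 0 held beforehand.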
Van Emde Boas tree (vEB tree) (1975) A Van Emde Boas tree (or Van Emde Boas priority queue), also known as a vEB tree, is a tree data structure which implements an associative array with m-bit integer keys. The vEB tree was invented by a team led by the Dutch computer scientist Peter van Emde Boas in 1975. ABC (programming language) (1980s) ABC is an imperative general-purpose programming language and programming environment developed at CWI, Netherlands, by Leo Geurts, Lambert Meertens, and Steven Pemberton. It is interactive, structured, high-level, and intended to be used instead of BASIC, Pascal, or AWK. It is not meant to be a systems-programming language but is intended for teaching or prototyping. The language had a major influence on the design of the Python programming language (as a counterexample); Guido van Rossum, who developed Python, previously worked for several years on the ABC system in the early 1980s. Dijkstra–Scholten algorithm (1980) The Dijkstra–Scholten algorithm (named after Edsger W. Dijkstra and Carel S. Scholten) is an algorithm for detecting termination in a distributed system. The algorithm was proposed by Dijkstra and Scholten in 1980. Smoothsort (1981) Smoothsort is a comparison-based sorting algorithm. It is a variation of heapsort developed by Edsger Dijkstra in 1981. Like heapsort, smoothsort's upper bound is O(n log n). The advantage of smoothsort is that it comes closer to O(n) time if the input is already sorted to some degree, whereas heapsort averages O(n log n) regardless of the initial sorted state. Amsterdam Compiler Kit (1983) The Amsterdam Compiler Kit (ACK) is a fast, lightweight and retargetable compiler suite and toolchain developed by Andrew Tanenbaum and Ceriel Jacobs at the Vrije Universiteit in Amsterdam. It is MINIX's native toolchain. The ACK was originally closed-source software (which allowed binaries to be distributed for MINIX as a special case), but in April 2003 it was released under an open-source BSD license. It has frontends for the programming languages C, Pascal, Modula-2, Occam, and BASIC. The ACK's notability stems from the fact that in the early 1980s it was one of the first portable compilation systems designed to support multiple source languages and target platforms. Eight-to-fourteen modulation (around 1980) EFM (eight-to-fourteen modulation) was devised by the Dutch electrical engineer Kees A. Schouhamer Immink around 1980. EFM is a data encoding technique – formally, a channel code – used by CDs, laserdiscs and pre-Hi-MD MiniDiscs; a small sketch of its run-length constraint follows.
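EFM maps each 8-bit byte to a 14-bit codeword chosen so that successive 1-bits are separated by at least two and at most ten 0-bits (boundary effects between adjacent codewords are handled by extra merging bits, which are ignored here). The following Python sketch is an illustration rather than anything from the format specification: it counts how many 14-bit words satisfy the constraint, confirming there are enough to encode all 256 byte values.

```python
def valid(word, bits=14):
    """Check the run-length constraint: between any two 1-bits there are
    at least 2 and at most 10 zeros, and edge runs of zeros are at most 10."""
    positions = [i for i in range(bits) if (word >> i) & 1]
    if not positions:
        return bits <= 10                     # all-zeros: run of 14 is too long
    leading = positions[0]
    trailing = bits - 1 - positions[-1]
    between = [b - a - 1 for a, b in zip(positions, positions[1:])]
    return (all(2 <= r <= 10 for r in between)
            and leading <= 10 and trailing <= 10)

count = sum(valid(w) for w in range(1 << 14))
print(count)   # 267 -- more than the 256 needed to encode every byte value
```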
Python (programming language) (1989)

Python is a widely used general-purpose, high-level programming language. Its design philosophy emphasizes code readability, and its syntax allows programmers to express concepts in fewer lines of code than would be possible in languages such as C++ or Java. The language provides constructs intended to enable clear programs on both a small and large scale. Python supports multiple programming paradigms, including object-oriented, imperative, functional and procedural styles. It features a dynamic type system and automatic memory management and has a large and comprehensive standard library. Python was conceived in the late 1980s and its implementation was started in December 1989 by Guido van Rossum at CWI in the Netherlands as a successor to the ABC language (itself inspired by SETL) capable of exception handling and interfacing with the Amoeba operating system. Van Rossum is Python's principal author, and his continuing central role in deciding the direction of Python is reflected in the title given to him by the Python community, benevolent dictator for life (BDFL).

Vim (text editor) (1991)

Vim is a text editor written by the Dutch free software programmer Bram Moolenaar and first released publicly in 1991. Based on the Vi editor common to Unix-like systems, Vim carefully separated the user interface from editing functions. This allowed it to be used both from a command line interface and as a standalone application in a graphical user interface.

Blender (1995)

Blender is a professional free and open-source 3D computer graphics software product used for creating animated films, visual effects, art, 3D printed models, interactive 3D applications and video games. Blender's features include 3D modeling, UV unwrapping, texturing, raster graphics editing, rigging and skinning, fluid and smoke simulation, particle simulation, soft body simulation, sculpting, animating, match moving, camera tracking, rendering, video editing and compositing. Alongside the modelling features it also has an integrated game engine. Blender has been successfully used in the media industry in several parts of the world, including Argentina, Australia, Belgium, Brazil, Russia, Sweden, and the United States. The Dutch animation studio Neo Geo and Not a Number Technologies (NaN) developed Blender as an in-house application, with the primary author being Ton Roosendaal. The name Blender was inspired by a song by Yello, from the album Baby.

EFMPlus (1995)

EFMPlus is the channel code used in DVDs and SACDs, a more efficient successor to the EFM code used in CDs. It was created by Dutch electrical engineer Kees A. Schouhamer Immink, who also designed EFM. It is 6% less efficient than Toshiba's SD code, which resulted in a capacity of 4.7 gigabytes instead of SD's original 5 GB. The advantage of EFMPlus is its superior resilience against disc damage such as scratches and fingerprints.

Economics

Dutch East India Company

The Dutch East India Company (Verenigde Oostindische Compagnie, or VOC), founded in 1602, was the world's first multinational, joint-stock, limited liability corporation – as well as its first government-backed trading cartel. It was the first company to issue shares of stock and what evolved into corporate bonds. The VOC was also the first company to actually issue stocks and bonds through a stock exchange. In 1602, the VOC issued shares that were made tradable on the Amsterdam Stock Exchange.
This invention enhanced the ability of joint-stock companies to attract capital from investors, as they could now easily dispose of their shares. The company was known throughout the world as the VOC thanks to its logo featuring those initials, which became the first global corporate brand. The company's monogram also became the first global logo.

First megacorporation (1602)

The Dutch East India Company was arguably the first megacorporation, possessing quasi-governmental powers, including the ability to wage war, imprison and execute convicts, negotiate treaties, coin money and establish colonies. Many economic and political historians consider the Dutch East India Company the most valuable, powerful and influential corporation in world history. The VOC existed for almost 200 years, from its founding in 1602, when the States-General of the Netherlands granted it a 21-year monopoly over Dutch operations in Asia, until its demise in 1796. During those two centuries (between 1602 and 1796), the VOC sent almost a million Europeans to work in the Asia trade on 4,785 ships, and netted for their efforts more than 2.5 million tons of Asian trade goods. By contrast, the rest of Europe combined sent only 882,412 people from 1500 to 1795, and the fleet of the English (later British) East India Company, the VOC's nearest competitor, was a distant second to its total traffic, with 2,690 ships and a mere one-fifth the tonnage of goods carried by the VOC. The VOC enjoyed huge profits from its spice monopoly through most of the 17th century.

Dutch auction (17th century)

A Dutch auction is also known as an open descending-price auction. Named after the famous auctions of Dutch tulip bulbs in the 17th century, it is based on a pricing system later analysed by Nobel Prize–winning economist William Vickrey. In the traditional Dutch auction, the auctioneer begins with a high asking price, which is lowered until some participant is willing to accept it; the winning participant pays the last announced price (the descending-price mechanics are sketched below). Dutch auction is also sometimes used to describe online auctions where several identical goods are sold simultaneously to an equal number of high bidders. In addition to cut flower sales in the Netherlands, Dutch auctions have also been used for perishable commodities such as fish and tobacco.

First modern art market (17th century)

The Dutch Republic was the birthplace of the first modern art market (open art market or free art market). The seventeenth-century Dutch were the pioneering arts marketers, successfully combining art and commerce as we would recognise it today. Until the 17th century, commissioning works of art was largely the preserve of the church, monarchs and aristocrats. The emergence of a powerful and wealthy middle class in Holland, though, produced a radical change in patronage as the new Dutch bourgeoisie bought art. For the first time, the direction of art was shaped by relatively broadly based demand rather than religious dogma or royal whim, and the result was a market which today's dealers and collectors would find familiar. With the creation of the first large-scale open art market, prosperous Dutch merchants, artisans, and civil servants bought paintings and prints in unprecedented numbers. Foreign visitors were astonished that even modest members of Dutch society such as farmers and bakers owned multiple works of art.
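The descending-price mechanism of the Dutch auction described above is simple enough to state as code. The following is a minimal Python sketch; the starting price, decrement, reserve and bidder valuations are invented for illustration.

def dutch_auction(start_price, decrement, reserve, valuations):
    """Lower the asking price until a bidder accepts; return (winner, price)."""
    price = start_price
    while price >= reserve:
        for bidder, value in valuations.items():
            if value >= price:       # the first bidder willing to pay wins
                return bidder, price
        price -= decrement
    return None, None                # no sale at or above the reserve price

winner, price = dutch_auction(100, 5, 50, {"anna": 72, "bram": 64})
print(winner, price)                 # anna 70: the first price anna will accept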
Concept of corporate governance (17th century)

The seventeenth-century Dutch businessmen were the pioneers in laying the basis for modern corporate governance. Isaac Le Maire, an Amsterdam businessman and a sizeable shareholder of the VOC, became the first recorded investor to actually consider the problems of corporate governance. In 1609, he complained of the VOC's shoddy corporate governance. On 24 January 1609, Le Maire filed a petition against the VOC, marking the first recorded expression of shareholder activism. In what is the first recorded corporate governance dispute, Le Maire formally charged that the directors (the VOC's board of directors – the Heeren XVII) sought to "retain another's money for longer or use it in ways other than the latter wishes" and petitioned for the liquidation of the VOC in accordance with standard business practice. The first shareholder revolt happened in 1622, among Dutch East India Company (VOC) investors who complained that the company account books had been "smeared with bacon" so that they might be "eaten by dogs." The investors demanded a "reeckeninge," a proper financial audit. The 1622 campaign by the shareholders of the VOC, who staged protests by distributing pamphlets and complaining about management self-enrichment and secrecy, is a testimony to the genesis of corporate social responsibility (CSR).

Modern concept of foreign direct investment (17th century)

The construction in 1619 of a train-oil factory on Smeerenburg in the Spitsbergen islands by the Noordsche Compagnie, and the acquisition in 1626 of Manhattan Island by the Dutch West India Company, are referred to as the earliest cases of outward foreign direct investment (FDI) in Dutch and world history. Throughout the seventeenth century, the Dutch East India Company (VOC) and the Dutch West India Company (GWIC/WIC) also began to create trading settlements around the globe. Their trading activities generated enormous wealth, making the Dutch Republic one of the most prosperous countries of that time. The Dutch Republic's extensive arms trade occasioned an episode in the industrial development of early-modern Sweden, where arms merchants like Louis de Geer and the Trip brothers invested in iron mines and iron works, another early example of outward foreign direct investment.

First modern market-oriented economy (17th century)

It was in the Dutch Republic that some important industries (economic sectors) such as shipbuilding, shipping, printing and publishing were developed on a large-scale, export-driven model for the first time in history. The shipbuilding district of Zaan, near Amsterdam, became the first industrialized area in the world, with around 900 industrial windmills at the end of the 17th century, but there were industrialized towns and cities on a smaller scale as well. Other industries that saw significant growth were papermaking, sugar refining, printing, the linen industry (with spin-offs in vegetable oils, like flax and rape oil), and industries that used the cheap peat fuel, like brewing and ceramics (brickworks, pottery and clay-pipe making). The Dutch shipbuilding industry was of modern dimensions, inclining strongly toward standardised, repetitive methods. It was highly mechanized and used many labor-saving devices: wind-powered sawmills, powered feeders for saws, block and tackle, and great cranes to move heavy timbers, all of which increased productivity. Dutch shipbuilding benefited from various design innovations which increased carrying capacity and cut costs.
First capitalist nation-state (foundations of modern capitalism) (17th century)

Economic historians consider the Netherlands the first predominantly capitalist nation. The development of European capitalism began among the city-states of Italy, Flanders, and the Baltic. It spread to the European interstate system, eventually resulting in the world's first capitalist nation-state, the Dutch Republic of the seventeenth century. The Dutch were the first to develop capitalism on a nationwide scale (as opposed to the earlier city-states). They also played a pioneering role in the emergence of the capitalist world-system. Simon Schama aptly titled his work The Embarrassment of Riches, capturing the astonishing novelty and success of the commercial revolution in the Dutch Republic. World-systems theorists (including Immanuel Wallerstein and Giovanni Arrighi) often consider the economic primacy of the Dutch Republic in the 17th century the first capitalist hegemony in world history (followed by the hegemonies of the United Kingdom in the 19th century and the United States in the 20th century).

First modern economic miracle (1585–1714)

The Dutch economic transition from a possession of the Holy Roman Empire in the 1590s to the foremost maritime and economic power in the world has been called the "Dutch Miracle" (or "Dutch Tiger") by many economic historians, including K. W. Swart. Until the 18th century, the economy of the Dutch Republic was the most advanced and sophisticated ever seen in history. During their Golden Age, the provinces of the Northern Netherlands rose from almost total obscurity as the poor cousins of the industrious and heavily urbanised southern regions (Southern Netherlands) to become the world leader in economic success. The Netherlands introduced many financial innovations that made it a major economic force – and Amsterdam became the world center for international finance. Its manufacturing towns grew so quickly that by the middle of the century the Netherlands had supplanted France as the leading industrial nation of the world.

Dynamic macroeconomic model (1936)

Dutch economist Jan Tinbergen developed the first comprehensive national macroeconomic model, which he first built for the Netherlands and, after World War II, later applied to the United States and the United Kingdom.

Fairtrade certification (1988)

The concept of fair trade has been around for over 40 years, but a formal labelling scheme emerged only in the 1980s. At the initiative of Mexican coffee farmers, the world's first Fairtrade labelling organisation, Stichting Max Havelaar, was launched in the Netherlands on 15 November 1988 by Nico Roozen, Frans van der Hoff and the Dutch ecumenical development agency Solidaridad. It was branded "Max Havelaar" after a fictional Dutch character who opposed the exploitation of coffee pickers in Dutch colonies.

Finance

Concept of bourse (13th century)

An exchange, or bourse, is a highly organized market where (especially) tradable securities, commodities, foreign exchange, futures, and options contracts are sold and bought. The term bourse is derived from the 13th-century inn named Huis ter Beurze in Bruges, Low Countries, where traders and foreign merchants from across Europe conducted business in the late medieval period. The building, which was established by Robert van der Buerze as a hostelry, had operated from 1285. Its managers became famous for offering judicious financial advice to the traders and merchants who frequented the building.
This service became known as the "Beurze Purse", which is the basis of bourse, meaning an organised place of exchange.

Foundations of stock market (1602)

The seventeenth-century Dutch merchants laid the foundations for the modern stock market, which now greatly influences the global economy. It was in the Dutch Republic that a fully fledged stock market was established and developed for the first time in history. The Dutch merchants were also the pioneers in developing the basic techniques of stock trading. Although bond sales by municipalities and states can be traced to the thirteenth century, the origin of modern stock exchanges that specialize in creating and sustaining secondary markets in corporate securities goes back to the formation of the Dutch East India Company in the year 1602. Dutch investors were the first to trade their shares at a regular stock exchange. The Amsterdam Stock Exchange is considered the oldest in the world. It was established in 1602 by the Dutch East India Company for dealings in its printed stocks and bonds. Here, the Dutch also pioneered stock futures, stock options, short selling, debt-equity swaps, merchant banking, bonds, unit trusts and other speculative instruments. Unlike competing companies, the VOC allowed anyone (including housemaids) to purchase its stock at the fully operational Amsterdam Bourse. The practice of naked short selling was also invented in the Dutch Republic. In 1609, Isaac Le Maire, an Amsterdam merchant and a sizeable shareholder of the Dutch East India Company (VOC), became the first recorded short seller in history. The first recorded ban on short selling also took place in the Dutch Republic in the same year. In the early 17th century, Dutch merchants invented the common stock – that of the VOC. The Dutch also experienced the first recorded speculative bubble and crash in history, the Tulip Mania of 1636–1637. Stock market trading has come a long way since 1602, but its basic concepts and principles are still upheld today.

First fully functioning (fully-fledged) financial market (17th century)

The Dutch Republic (Amsterdam in particular) was the birthplace of the world's first fully functioning financial market, with the birth of a fully fledged capital market. Capital markets for debt and equity shares are used to raise long-term funds. New stocks and bonds are sold in primary markets (including initial public offerings) and secondary markets (including stock exchanges). While the Italian city-states produced the first transferable municipal bonds, they did not develop the other ingredient necessary to produce a fully fledged capital market: corporate shareholders. The Dutch East India Company (VOC) became the first company to offer shares of stock to the general public. Dutch investors were the first to trade their shares at a regular stock exchange. In 1602 the Dutch East India Company (VOC) established an exchange in Amsterdam where the VOC stocks and bonds could be traded in a secondary market. The buying and selling of the VOC's securities (including shares and bonds) became the basis of the first official stock market. The Dutch were also the first to use a fully fledged capital market (including the bond market and stock market) to finance companies (such as the VOC and the WIC). It was in seventeenth-century Amsterdam that the global securities market began to take on its modern form.
Foundations of corporate finance (17th century)

What is now known as corporate finance has its modern roots in the financial management policies of the Dutch East India Company (VOC) in the 17th century; basic aspects of modern corporate finance began to appear in the financial activities of Dutch businessmen in the early 17th century.

Initial public offering (1602)

The earliest form of a company which issued public shares was the publicani during the Roman Republic. In 1602, the Dutch East India Company (Vereenigde Oost-Indische Compagnie or VOC) became the first modern company to issue shares to the public, thus launching the first modern initial public offering (IPO). The VOC held the first public offering of shares in history shortly after its founding. With this first recorded IPO, the VOC brought in 6,424,588 guilders, and the company subsequently grew to become the first true transnational corporation in the world.

Institutional foundations of investment banking (17th century)

The Dutch were the pioneers in laying the basis for investment banking, allowing the risk of loans to be distributed among thousands of investors in the early seventeenth century.

Institutional foundations of central banking (first central bank) (1609)

Prior to the 17th century most money was commodity money, typically gold or silver. However, promises to pay were widely circulated and accepted as value at least five hundred years earlier in both Europe and Asia. The Song dynasty was the first to issue generally circulating paper currency, while the Yuan dynasty was the first to use notes as the predominant circulating medium. In 1455, in an effort to control inflation, the succeeding Ming dynasty ended the use of paper money and closed much of Chinese trade. The medieval European Knights Templar ran an early prototype of a central banking system, as their promises to pay were widely respected, and many regard their activities as having laid the basis for the modern banking system. As the first public bank to "offer accounts not directly convertible to coin", the Bank of Amsterdam (Amsterdamsche Wisselbank, literally Amsterdam Exchange Bank), established in 1609, is considered to be the precursor to modern central banks, if not the first true central bank. The Wisselbank's innovations helped lay the foundations for the birth and development of modern central banking systems. There were earlier banks, especially in the Italian city-states, but the Wisselbank, with its public backing, provided for a scale of operations and stability hitherto unmatched. Along with a number of subsidiary local banks, it performed many modern-day central banking functions. The model of the Wisselbank as a state bank was adapted throughout Europe, including by the Bank of Sweden (1668) and the Bank of England (1694). It occupied a central position in the financial world of its day, providing an effective, efficient and trusted system for national and international payments. The establishment of the Wisselbank led to the introduction of the concept of bank money – the bank guilder. Lucien Gillard (2004) calls it the European guilder (le florin européen), and Adam Smith devotes many pages to explaining how the bank guilder works (Smith 1776: 446–55). Considered by many experts to be the first internationally dominant reserve currency of modern times, the Dutch guilder was the dominant currency during the 17th and 18th centuries.
It was replaced by the British pound sterling in the 19th century, and the US dollar took the lead just after World War II and has held it to this day.

Short selling (1609)

Financial innovation in Amsterdam took many forms. In 1609, investors led by Isaac Le Maire formed history's first bear syndicate to engage in short selling, but their coordinated trading had only a modest impact in driving down share prices, which tended to be robust throughout the 17th century.

Concept of dividend policy (1610)

In the first decades of the 17th century, the VOC was the first recorded company ever to pay regular dividends. To encourage investors to buy shares, a promise of an annual payment (called a dividend) was made. An investor would receive dividends instead of interest, and the investment was permanent in the form of shares in the company. Between 1600 and 1800 the Dutch East India Company (VOC) paid annual dividends worth around 18 percent of the value of the shares.

First European banknote (1661)

In 1656, King Charles X Gustav of Sweden signed two charters creating two private banks under the directorship of Johan Palmstruch (before being ennobled he was called Johan Wittmacher or Hans Wittmacher), a Riga-born merchant of Dutch origin. Palmstruch modeled the banks on those of Amsterdam, where he had become a burgher. The first real European banknote was issued in 1661 by the Stockholms Banco of Johan Palmstruch, a private bank under state charter (precursor to the Sveriges Riksbank, the central bank of Sweden).

First book ever on stock trading (1688)

Joseph de la Vega, also known as Joseph Penso de la Vega, was an Amsterdam trader from a Spanish Jewish family and a prolific writer as well as a successful businessman. His 1688 book Confusion de Confusiones (Confusion of Confusions) explained the workings of the city's stock market. It was the earliest book about stock trading, taking the form of a dialogue between a merchant, a shareholder and a philosopher. The book described a market that was sophisticated but also prone to excesses, and de la Vega offered advice to his readers on such topics as the unpredictability of market shifts and the importance of patience in investment. The book has been described as the first precursor of modern behavioural finance, with its descriptions of investor decision-making still reflected in the way some investors operate today, and in 2001 it was still rated by the Financial Times as one of the ten best investment books ever written.

Concept of technical analysis (1688)

The principles of technical analysis are derived from hundreds of years of financial market data. These principles, in raw form, have been studied since the seventeenth century. Some aspects of technical analysis began to appear in Joseph de la Vega's accounts of the Dutch markets in the late 17th century. In Asia, a method of technical analysis is said to have been developed by Homma Munehisa during the early 18th century; it evolved into the candlestick techniques that remain a technical analysis charting tool today.

Concept of behavioral finance (1688)

Joseph de la Vega was, in 1688, the first person to give an account of irrational behaviour in financial markets. His 1688 book Confusion of Confusions has been described as the first precursor of modern behavioural finance, with its descriptions of investor decision-making still reflected in the way some investors operate today.
First modern model of a financial centre (17th century)

By the first decades of the 18th century, Amsterdam had been the world's leading financial centre for roughly a century, having developed a sophisticated financial system with central banking, fully fledged capital markets, certain kinds of financial derivatives, and publicly traded multinational corporations. Amsterdam was the first modern model of the international (global) financial centres that now operate in cities around the world.

Foundations of modern financial system (17th century)

In the early 17th century, the Dutch revolutionized domestic and international finance by inventing common stock – that of the Dutch East India Company – and founding a proto-central bank, the Wisselbank or Bank of Amsterdam. By 1609, the Dutch had already had a government bond market for some decades. Shortly thereafter, the Dutch Republic had in place, in one form or another, all of the key components of a modern financial system: formalized public credit, stable money, elements of a banking system, a central bank of sorts and securities markets. The Dutch Republic went on to become that century's leading economy.

Concept of investment fund (1774)

The first investment fund has its roots in 1774. A Dutch merchant named Adriaan van Ketwich formed a trust named Eendragt Maakt Magt, which translates to "unity creates strength". In response to the financial crisis of 1772–1773, Ketwich's aim was to provide small investors with an opportunity to diversify (Rouwenhorst & Goetzman, 2005). This investment scheme can be seen as the first near-mutual fund. In the years following, near-mutual funds evolved and became more diverse and complex.

Mutual fund (1774)

The first mutual funds were established in 1774 in the Netherlands. Amsterdam-based businessman Abraham van Ketwich (a.k.a. Adriaan van Ketwich) is often credited as the originator of the world's first mutual fund. The first mutual fund outside the Netherlands was the Foreign & Colonial Government Trust, established in London in 1868.

Foods and drinks

Gibbing (14th century)

Gibbing is the process of preparing salt herring (or soused herring), in which the gills and part of the gullet are removed from the fish, eliminating any bitter taste. The liver and pancreas are left in the fish during the salt-curing process because they release enzymes essential for flavor. The fish is then cured in a barrel with one part salt to twenty parts herring. Today many variations and local preferences exist on this process. The process of gibbing was invented by Willem Beuckelszoon (aka Willem Beuckelsz, William Buckels or William Buckelsson), a 14th-century Zeeland fisherman. The invention of this fish preservation technique led to the Dutch becoming a seafaring power. This invention created an export industry for salt herring that was monopolized by the Dutch.

Doughnut (17th century)

Many people believe it was the Dutch who invented doughnuts. A Dutch snack made from potatoes had a round shape like a ball but, like Gregory's dough balls, needed a little more time in the fryer to cook the inside thoroughly. These potato balls developed into doughnuts when the Dutch finally made them into ring shapes to reduce frying time.

Gin (jenever) (1650)

Gin is a spirit which derives its predominant flavour from juniper berries (Juniperus communis).
From its earliest origins in the Middle Ages, gin has evolved over the course of a millennium from a herbal medicine to an object of commerce in the spirits industry. Gin was developed on the basis of the older jenever, and became widely popular in Great Britain when William III of Orange, leader of the Dutch Republic, occupied the British throne with his wife Mary. Today, the gin category is one of the most popular and widely distributed ranges of spirits, and is represented by products of various origins, styles, and flavour profiles that all revolve around juniper as a common ingredient. The Dutch physician Franciscus Sylvius is often credited with the invention of gin in the mid-17th century, although the existence of genever is confirmed in Massinger's play The Duke of Milan (1623), when Dr. Sylvius would have been but nine years of age. It is further claimed that British soldiers who provided support in Antwerp against the Spanish in 1585, during the Eighty Years' War, were already drinking genever (jenever) for its calming effects before battle, from which the term Dutch courage is believed to have originated. The earliest known written reference to genever appears in the 13th-century encyclopaedic work Der Naturen Bloeme (Bruges), and the earliest printed genever recipe dates from the 16th-century work Een Constelijck Distileerboec (Antwerp).

Stroopwafel (1780s)

A stroopwafel (also known as syrup waffle, treacle waffle or caramel waffle) is a waffle made from two thin layers of baked batter with a caramel-like syrup filling the middle. They were first made in Gouda in the 1780s. The traditional way to eat a stroopwafel is to place it atop a drinking vessel containing a hot beverage (coffee, tea or chocolate), with the mouth of the cup fitting the diameter of the waffle. The heat from the rising steam warms the waffle, slightly softening the inside and making the waffle soft on one side while still crispy on the other.

Cocoa powder (foundations of modern chocolate industry) (1828)

In 1815, Dutch chemist Coenraad van Houten introduced alkaline salts to chocolate, which reduced its bitterness. In the 1820s, Casparus van Houten, Sr. patented an inexpensive method for pressing the fat from roasted cocoa beans. He created a press to remove about half the natural fat (cacao butter) from chocolate liquor, which made chocolate both cheaper to produce and more consistent in quality. This innovation introduced the modern era of chocolate. Van Houten developed the first cocoa powder producing machine in the Netherlands. Van Houten's machine – a hydraulic press – reduced the cocoa butter content by nearly half. This created a "cake" that could be pulverized into cocoa powder, which was to become the basis of all chocolate products. The press separated the greasy cocoa butter from the cacao seeds, leaving a purer chocolate powder behind. This powder, much like the instant cocoa powder used today, was easier to stir into milk and water. As a result, another very important discovery was made: solid chocolate. By using cocoa powder and low amounts of cocoa butter, it was then possible to manufacture chocolate bars. The term "chocolate" then came to mean solid chocolate, rather than hot chocolate.

Dutch-process chocolate (1828)

Dutch-processed chocolate, or Dutched chocolate, is chocolate that has been treated with an alkalizing agent to modify its color and give it a milder taste compared to "natural cocoa" extracted with the Broma process.
It forms the basis for much of modern chocolate, and is used in ice cream, hot cocoa, and baking. The Dutch process was developed in the early 19th century by Dutch chocolate maker Coenraad Johannes van Houten, whose father Casparus was responsible for the development of the method of removing fat from cacao beans by hydraulic press around 1828, forming the basis for cocoa powder.

Law and jurisprudence

Doctrine of the Freedom of the Seas (foundations of the Law of the Sea/UNCLOS) (1609)

In 1609, Hugo Grotius, the Dutch jurist who is generally known as the father of modern international law, published his book Mare Liberum (The Free Sea), which first formulated the notion of freedom of the seas. He developed this idea into a legal principle. It is said to be 'the first, and classic, exposition of the doctrine of the freedom of the seas', which has been the essence and backbone of the modern law of the sea. It is generally assumed that Grotius first propounded the principle of freedom of the seas, although all countries in the Indian Ocean and other Asian seas had accepted the right of unobstructed navigation long before Grotius wrote his De Jure Praedae (On the Law of Spoils) in 1604. His work sparked a debate in the seventeenth century over whether states could exclude the vessels of other states from certain waters. Grotius won this debate, as freedom of the seas became a universally recognized legal principle, associated with concepts such as communication, trade and peace. Grotius's notion of the freedom of the seas persisted largely intact until the mid-twentieth century, and it continues to be applied even to this day for much of the high seas, though the application of the concept and the scope of its reach is changing.

Secularized natural law (foundations of modern international law) (1625)

The publication of De jure belli ac pacis (On the Laws of War and Peace) by Hugo Grotius in 1625 marked the emergence of international law as an 'autonomous legal science'. Grotius's On the Law of War and Peace, published in 1625, is best known as the first systematic treatise on international law, but to thinkers of the seventeenth and eighteenth centuries, it seemed to set a new agenda in moral and political philosophy across the board. Grotius developed pivotal treatises on the freedom of the seas, the law of spoils, and the laws of war and peace, and he created an autonomous place for international law as its own discipline. Jean Barbeyrac's Historical and Critical Account of the Science of Morality, attached to his translation of Samuel von Pufendorf's Law of Nature and Nations in 1706, praised Grotius as "the first who broke the ice" of "the Scholastic Philosophy; which [had] spread itself all over Europe" (1749: 67, 66). Grotius's truly distinctive contribution to jurisprudence and the philosophy of law (public international law, or the law of nations, in particular) was that he secularized natural law. Grotius divorced natural law from theology and religion by grounding it solely in the social nature and natural reason of man. When Grotius, considered by many to be the founder of modern natural law theory (or secular natural law), said that natural law would retain its validity 'even if God did not exist' (etiamsi daremus non esse Deum), he was making a clear break with the classical tradition of natural law.
Adam Smith, in lectures delivered in 1762 on the subject of moral philosophy and the law of nations, said that: "Jurisprudence is that science which inquires into the general principles which ought to be the foundation of laws of all nations. Grotius seems to have been the first who attempted to give the world anything like a regular system of natural jurisprudence, and his treatise, 'On the Laws of War and Peace,' with all its imperfections, is perhaps at this day the most complete work on this subject."

Grotian conception of international society (1625)

The Grotian conception of international society became the most distinctive characteristic of the internationalist (or rationalist) tradition in international relations, which is why it is also called the Grotian tradition. According to it, international politics takes place within an international society in which states are bound not only by rules of prudence or expediency but also by those of morality and law. Grotius was arguably not the first to formulate such a doctrine. However, he was the first to clearly define the idea of one society of states, governed not by force or warfare but by laws and mutual agreement to enforce those laws. As many international law scholars have noted, the spirit of the Peace of Westphalia (1648) was preceded by the thoughts and ideas of Grotius. Thomas Franck observed: 'Since the Reformation, the Peace of Westphalia, and the writings of Hugo Grotius, there has been an explicit assumption that the international system is an association of sovereign states.' As Hedley Bull declared: 'The idea of international society which Grotius propounded was given concrete expression in the Peace of Westphalia', affirming that 'Grotius must be considered the intellectual father of this first general peace settlement of modern times'.

Cannon shot rule (1702)

By the end of the seventeenth century, support was growing for some limitation on the seaward extent of territorial waters. What emerged was the so-called "cannon shot rule", which tied the breadth of a state's territorial waters to what it could physically occupy and control: in practice, the effective range of shore-based cannon, about three nautical miles. The rule was long associated with Cornelis van Bijnkershoek, a Dutch jurist who, especially in his De Dominio Maris Dissertatio (1702), advocated a middle ground between the extremes of Mare Liberum and John Selden's Mare Clausum, accepting both the freedom of states to navigate and exploit the resources of the high seas and a right of coastal states to assert wide-ranging rights in a limited marine territory.

Permanent Court of Arbitration (1899)

The Permanent Court of Arbitration (PCA) is an international organization based in The Hague in the Netherlands. The court was established in 1899 as one of the acts of the first Hague Peace Conference, which makes it the oldest global institution for international dispute resolution. Its creation is set out under Articles 20 to 29 of the 1899 Hague Convention for the pacific settlement of international disputes, which was a result of the first Hague Peace Conference. The most concrete achievement of the Conference was the establishment of the PCA as the first institutionalized global mechanism for the settlement of disputes between states. The PCA encourages the resolution of disputes that involve states, state entities, intergovernmental organizations, and private parties by assisting in the establishment of arbitration tribunals and facilitating their work.
The court offers a wide range of services for the resolution of international disputes which the parties concerned have expressly agreed to submit for resolution under its auspices. Dutch-Jewish legal scholar Tobias Asser's role in the creation of the PCA at the first Hague Peace Conference (1899) earned him the Nobel Peace Prize in 1911.

International Opium Convention (1912)

The International Opium Convention, sometimes referred to as the Hague Convention of 1912, signed on 23 January 1912 at The Hague, was the first international drug control treaty and is the core of the international drug control system. The adoption of the Convention was a turning point in multilateralism, based on the recognition of the transnational nature of the drug problem and the principle of shared responsibility.

Marriage equality (legalization of same-sex marriage) (2001)

Denmark was the first state to recognize a legal relationship for same-sex couples, establishing "registered partnerships" very much like marriage in 1989. In 2001, the Netherlands became the first nation in the world to grant same-sex marriages. The first laws enabling same-sex marriage in modern times were enacted during the first decade of the 21st century. By 2014, sixteen countries (Argentina, Belgium, Brazil, Canada, Denmark, France, Iceland, Netherlands, New Zealand, Norway, Portugal, Spain, South Africa, Sweden, United Kingdom, Uruguay) and several sub-national jurisdictions (parts of Mexico and the United States) allowed same-sex couples to marry. Polls in various countries show that there is rising support for legally recognizing same-sex marriage across race, ethnicity, age, religion, political affiliation, and socioeconomic status.

Measurement

Pendulum clock (first high-precision clock) (1656)

The first mechanical clocks, employing the verge escapement mechanism with a foliot or balance wheel timekeeper, were invented in Europe at around the start of the 14th century, and became the standard timekeeping device until the pendulum clock was invented in 1656. The pendulum clock remained the most accurate timekeeper until the 1930s, when quartz oscillators were invented, followed by atomic clocks after World War II. A pendulum clock uses a pendulum's arc to mark intervals of time. From their invention until about 1930, the most accurate clocks were pendulum clocks. Pendulum clocks cannot operate on vehicles or ships at sea, because the accelerations disrupt the pendulum's motion, causing inaccuracies. The pendulum clock was invented by Christiaan Huygens, based on the pendulum introduced by Galileo Galilei. Although Galileo studied the pendulum as early as 1582, he never actually constructed a clock based on that design. Christiaan Huygens invented the pendulum clock in 1656 and patented it the following year. He contracted the construction of his clock designs to clockmaker Salomon Coster, who actually built the clock.

Concept of the standardization of the temperature scale (1665)

Various authors have credited the invention of the thermometer to Cornelis Drebbel, Robert Fludd, Galileo Galilei or Santorio Santorio. The thermometer was not a single invention, however, but a development. However, each inventor and each thermometer was unique – there was no standard scale. In 1665 Christiaan Huygens suggested using the melting and boiling points of water as standards.
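Two such fixed points suffice to pin down a linear temperature scale, and they also determine the conversion between any two scales defined this way. A minimal Python sketch of that arithmetic (the function names are illustrative):

def make_scale_converter(freeze_from, boil_from, freeze_to, boil_to):
    """Return a function mapping a reading on one two-fixed-point scale to another."""
    span_from = boil_from - freeze_from     # degrees between the fixed points
    span_to = boil_to - freeze_to
    return lambda t: freeze_to + (t - freeze_from) * span_to / span_from

celsius_to_fahrenheit = make_scale_converter(0, 100, 32, 212)
print(celsius_to_fahrenheit(100))   # 212.0, the boiling point of water
print(celsius_to_fahrenheit(37))    # 98.6, roughly normal body temperature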
The Fahrenheit scale is now usually defined by two fixed points: the temperature at which water freezes into ice is defined as 32 degrees Fahrenheit (°F), and the boiling point of water is defined to be 212 °F, a 180-degree separation, both at sea level and standard atmospheric pressure. In 1742, Swedish astronomer Anders Celsius created a temperature scale which was the reverse of the scale now known by the name "Celsius": 0 represented the boiling point of water, while 100 represented the freezing point of water. From 1744 until 1954, 0 °C was defined as the freezing point of water and 100 °C was defined as the boiling point of water, both at a pressure of one standard atmosphere with mercury being the working material.

Spiral-hairspring watch (first high-precision watch) (1675)

The invention of the mainspring in the early 15th century allowed portable clocks to be built, evolving into the first pocketwatches by the 17th century, but these were not very accurate until the balance spring was added to the balance wheel in the mid-17th century. Some dispute remains as to whether British scientist Robert Hooke (his was a straight spring) or Dutch scientist Christiaan Huygens was the actual inventor of the balance spring. Huygens was clearly the first to successfully implement a spiral balance spring in a portable timekeeper. This is significant because up to that point the pendulum was the most reliable regulator, and a pendulum cannot be used in a portable timekeeper. This innovation increased watches' accuracy enormously, reducing error from perhaps several hours per day to perhaps 10 minutes per day, resulting in the addition of the minute hand to the face from around 1680 in Britain and 1700 in France. Like the pendulum clock, Huygens's spiral hairspring (balance spring) system for portable timekeepers helped lay the foundations for the modern watchmaking industry. The application of the spiral balance spring to watches ushered in a new era of accuracy for portable timekeepers, similar to that which the pendulum had introduced for clocks. Since its invention by Christiaan Huygens in 1675, the spiral hairspring (balance spring) system has remained in use in the mechanical watchmaking industry to this day.

Mercury thermometer (first practical, accurate thermometer) (1714)

Various authors have credited the invention of the thermometer to Cornelis Drebbel, Robert Fludd, Galileo Galilei or Santorio Santorio. The thermometer was not a single invention, however, but a development. Though Galileo is often said to be the inventor of the thermometer, what he produced were thermoscopes. The difference between a thermoscope and a thermometer is that the latter has a scale. The first person to put a scale on a thermoscope is variously said to be Francesco Sagredo or Santorio Santorio, in about 1611 to 1613. Before there was the thermometer, there was the earlier and closely related thermoscope, best described as a thermometer without a temperature scale. A thermoscope only showed differences in temperature; for example, it could show that something was getting hotter. It could not, however, measure what a thermometer can, such as an exact temperature in degrees. What can be considered the first modern thermometer, the mercury thermometer with a standardized scale, was invented by the German-Dutch scientist Daniel Gabriel Fahrenheit (who had settled in Amsterdam in 1701) in 1714. Fahrenheit invented the first truly accurate thermometer, using mercury instead of alcohol and water mixtures.
He began constructing his own thermometers in 1714, and it was in these that he used mercury for the first time.

Fahrenheit scale (first standardized temperature scale) (1724)

Various authors have credited the invention of the thermometer to Cornelis Drebbel, Robert Fludd, Galileo Galilei or Santorio Santorio. The thermometer was not a single invention, however, but a development. However, each inventor and each thermometer was unique – there was no standard scale. In 1665 Christiaan Huygens suggested using the melting and boiling points of water as standards, and in 1694 Carlo Renaldini proposed using them as fixed points on a universal scale. In 1701 Isaac Newton proposed a scale of 12 degrees between the melting point of ice and body temperature. Finally, in 1724, Daniel Gabriel Fahrenheit produced a temperature scale which now (slightly adjusted) bears his name. He could do this because he manufactured thermometers, using mercury (which has a high coefficient of expansion) for the first time, and the quality of his production could provide a finer scale and greater reproducibility, leading to its general adoption. The Fahrenheit scale was the first widely used temperature scale. By the end of the 20th century, most countries used the Celsius scale rather than the Fahrenheit scale, though Canada retained it as a supplementary scale used alongside Celsius. Fahrenheit remains the official scale for Jamaica, the Cayman Islands, Belize, the Bahamas, Palau and the United States and associated territories.

Snellen chart (1862)

The Snellen chart is an eye chart used by eye care professionals and others to measure visual acuity. Snellen charts are named after the Dutch ophthalmologist Hermann Snellen, who developed the chart in 1862. Vision scientists now use a variation of this chart, designed by Ian Bailey and Jan Lovie.

String galvanometer (1902)

Prior to the string galvanometer, scientists used a machine called the capillary electrometer to measure the heart's electrical activity, but this device was unable to produce results at a diagnostic level. Dutch physiologist Willem Einthoven developed the string galvanometer in the early 20th century, publishing the first registration of its use to record an electrocardiogram in a Festschrift book in 1902. The first human electrocardiogram had been recorded in 1887; however, only in 1901 was a quantifiable result obtained from the string galvanometer.

Schilt photometer (1922)

In 1922, Dutch astronomer Jan Schilt invented the Schilt photometer, a device that measures the light output of stars and, indirectly, their distances.

Medicine

Clinical electrocardiography (first diagnostic electrocardiogram) (1902)

In the 19th century it became clear that the heart generated electric currents. The first to systematically approach the heart from an electrical point of view was Augustus Waller, working in St Mary's Hospital in Paddington, London. As late as 1911, he still saw little clinical application for his work. The breakthrough came when Einthoven, working in Leiden, used his string galvanometer, which was far more sensitive than the capillary electrometer that Waller used. Einthoven assigned the letters P, Q, R, S and T to the various deflections that it measured and described the electrocardiographic features of a number of cardiovascular disorders. He was awarded the 1924 Nobel Prize for Physiology or Medicine for his discovery.
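The three limb leads Einthoven defined are differences between the limb electrode potentials, so they are not independent. Writing V_R, V_L and V_F for the right-arm, left-arm and foot potentials, a worked statement of what is now called Einthoven's law (a standard identity, shown here for clarity):

\[ \mathrm{I} = V_\mathrm{L} - V_\mathrm{R}, \qquad \mathrm{II} = V_\mathrm{F} - V_\mathrm{R}, \qquad \mathrm{III} = V_\mathrm{F} - V_\mathrm{L} \]

\[ \mathrm{I} + \mathrm{III} = (V_\mathrm{L} - V_\mathrm{R}) + (V_\mathrm{F} - V_\mathrm{L}) = V_\mathrm{F} - V_\mathrm{R} = \mathrm{II} \]

Equivalently, I - II + III = 0, which is the zero-sum property of the triangle described next.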
Einthoven's triangle (1902)

Einthoven's triangle is an imaginary formation of three limb leads in a triangle used in electrocardiography, formed by the two shoulders and the pubis. The shape forms an inverted equilateral triangle with the heart at the center that produces zero potential when the voltages are summed. It is named after Willem Einthoven, who theorized its existence.

First European blood bank (1940)

When German bombers attacked The Hague in 1940 while Willem Johan Kolff was there, he organised the first blood bank in continental Europe. It was located in the Zuidwal hospital in The Hague. Eleven patients were given blood transfusions in The Hague, six of whom survived. Donated blood was also used for victims of the bombardment of Rotterdam, to which it was transported by civilian car.

Rotating drum dialysis machine (first practical artificial kidney) (1943)

An artificial kidney is a machine and its related devices which clean the blood of patients who have an acute or chronic failure of their kidneys. The first artificial kidney was developed by Dutchman Willem Johan Kolff. The procedure of cleaning the blood by this means is called dialysis, a type of renal replacement therapy that is used to provide an artificial replacement for lost kidney function due to renal failure. It is a life support treatment and does not treat disease.

Artificial heart (1957)

On 12 December 1957, Kolff implanted an artificial heart into a dog at the Cleveland Clinic. The dog lived for 90 minutes. In 1967, Dr. Kolff left the Cleveland Clinic to start the Division of Artificial Organs at the University of Utah and pursue his work on the artificial heart. Under his supervision, a team of surgeons, chemists, physicists and bioengineers developed an artificial heart and made it ready for industrial production. To help manage his many endeavors, Dr. Kolff assigned project managers, and each project was named after its manager. Graduate student Robert Jarvik was the project manager for the artificial heart, which was subsequently renamed the Jarvik-7. Based on lengthy animal trials, this first artificial heart was successfully implanted into the thorax of patient Barney Clark in December 1982. Clark survived 112 days with the device.

Military

Modern model of sea power (1585–1688)

The Dutch Republic has been considered by many political and military historians the first modern (global) sea power. The United Provinces of the Netherlands was the first state to possess the full triad of foreign commerce, forward bases, and merchant and naval fleets. In the middle of the 17th century the Dutch navy was the most powerful navy in the world. The Dutch Republic had a commercial fleet that was larger than those of England, France, Germany, Portugal, and Spain combined. According to Walter Russell Mead, the "modern version of sea power was invented by the Dutch. The system of global trade, investment, and military power the Dutch built in the seventeenth century was the envy and the wonder of the world at the time, and many of its basic features were adopted by the British and the Americans in subsequent years." When Peter the Great determined to achieve sea power for Imperial Russia, he came to the Dutch Republic to learn about shipbuilding, seamanship and the nautical sciences. During his stay in Holland (1697) the Tsar engaged, with the help of Russian and Dutch assistants, many skilled workers, such as builders of locks and fortresses, shipwrights and seamen. They were to help him with his modernization of Russia.
The best-known sailor who made the journey from the Dutch Republic to Russia was the Norwegian-Dutch Cornelius Cruys. Cruys performed well in Russia and came to be regarded as the architect of the Russian Navy. He became the first commander of the Russian Baltic Fleet and a vice admiral of the Imperial Russian Navy. Peter the Great designed his new capital on the model of Amsterdam and gave it a Dutch name, Sint Pieterburgh (later Germanized into Sankt Peterburg). In St. Petersburg, there is an island which is still called Novaya Gollandiya (literally "New Holland"). The triangular man-made island took its name from a number of canals and shipbuilding facilities that rendered its appearance similar to Amsterdam. The Tsar chose to call his island "New Holland", commemorating his enthusiasm for all things Dutch.

House of Orange-Nassau's military reforms (1590s–17th century)

The early modern Military Revolution began with reforms inaugurated by Prince Maurice of Nassau with his cousins Count Willem Lodewijk of Nassau-Dillenburg and Count John VII of Nassau during the 1590s. Maurice developed a system of linear formations (linear tactics), discipline, drill and volley fire based on classical Roman methods that made his army more efficient and his command and control more effective. He also developed a 43-step drill for firing the musket that was included in an illustrated weapons manual by Jacob de Gheyn II in 1607 (Wapenhandelinghe, or Exercise of Arms). This became known as the Dutch drill. It was widely read and emulated in the rest of Europe. Adopting and perfecting the techniques pioneered by Maurice of Nassau several decades earlier, Gustavus Adolphus repeatedly proved these techniques by defeating the armies of the Holy Roman Empire (1630–1632), an adversary with resources fantastically larger than Sweden's, during the Thirty Years' War. Descartes served for a while in the army of the Dutch military leader Prince Maurice of Orange-Nassau, and developed a fascination for practical technology. Maurice's military innovations had considerable influence on Descartes' system of philosophy.

Norden bombsight (1920s)

The Norden bombsight was designed by Carl Norden, a Dutch engineer educated in Switzerland who emigrated to the U.S. in 1904. In 1920, he started work on the Norden bombsight for the United States Navy. The first bombsight was produced in 1927. It was essentially an analog computer, and bombardiers were trained in great secrecy on how to use it. The device was used to drop bombs accurately from an aircraft, supposedly accurate enough to hit a 100-foot circle from an altitude of 21,000 feet – but under actual combat conditions, such accuracy was never achieved.

Submarine snorkel (1939)

A submarine snorkel is a device that allows a submarine to operate submerged while still taking in air from above the surface. It was invented by the Dutchman J.J. Wichers shortly before World War II and copied by the Germans during the war for use by U-boats. Its common military name is snort.

Goalkeeper CIWS (1975)

Goalkeeper is a close-in weapon system (CIWS) still in use as of 2015. It is an autonomous and completely automatic system for the short-range defence of ships against highly maneuverable missiles, aircraft and fast-maneuvering surface vessels. Once activated, the system automatically performs the entire process from surveillance and detection to destruction, including the selection of priority targets.
Musical instruments

Metronome (1812)

The first (mechanical) metronome was invented by Dietrich Nikolaus Winkel in Amsterdam in 1812, but it is named after Johann Maelzel, who took the idea, patented it and popularized it.

Fokker organ (1950)

Dutch musician-physicist Adriaan Fokker designed and had built keyboard instruments capable of playing microtonal scales via a generalized keyboard. The best-known of these is his 31-tone equal-tempered organ, which was installed in Teylers Museum in Haarlem in 1951. It is commonly called the Fokker organ.

Kraakdoos (1960s)

The Kraakdoos, or Cracklebox, is a custom-made battery-powered noise-making electronic device. It is a small box with six metal contacts on top which, when pressed by fingers, generate unusual sounds and tones. The human body becomes a part of the circuit and determines the range of sounds possible – different players generate different results. The concept was first conceived by Michel Waisvisz and Geert Hamelberg in the 1960s, and developed further in the 1970s when Waisvisz joined the STEIM foundation in Amsterdam.

Moodswinger (2006)

The Moodswinger is a twelve-string electric zither with an additional third bridge, designed by Dutch luthier Yuri Landman. A rod placed under the strings functions as the third bridge and divides the strings into two sections to add overtones, creating a multiphonic sound.

Springtime (guitar) (2008)

The Springtime is an experimental electric guitar with seven strings and three outputs. Landman created the instrument in 2008.

Philosophy and social sciences

Neostoicism (1580s)

Neostoicism was a syncretic philosophical movement, joining Stoicism and Christianity. Neostoicism was founded by the Dutch-Flemish humanist Justus Lipsius, who in 1584 presented its rules, expounded in his book De Constantia (On Constancy), as a dialogue between Lipsius and his friend Charles de Langhe. The eleven years (1579–1590) that Lipsius spent in Leiden (Leiden University) were the period of his greatest productivity. It was during this time that he wrote a series of works designed to revive ancient Stoicism in a form that would be compatible with Christianity. The most famous of these is De Constantia (1584). Neostoicism had a direct influence on many seventeenth- and eighteenth-century writers, including Montesquieu, Bossuet, Francis Bacon, Joseph Hall, Francisco de Quevedo and Juan de Vera y Figueroa.

Modern rationalism (1630s–1670s)

The rise of modern rationalism in the Dutch Republic had a profound influence on 17th-century philosophy. Descartes is often considered to be the first of the modern rationalists. Descartes himself lived in the Dutch Republic for some twenty years (1628–1649) and served for a while in the army of the Dutch military leader Prince Maurice of Orange-Nassau. The Dutch Republic was the first country in which Descartes' rationalistic philosophy (Cartesianism) succeeded in replacing Aristotelianism as the academic orthodoxy. Fritz Berolzheimer considers Hugo Grotius the Descartes of legal philosophy and notes Grotian rationalism's influence on 17th-century jurisprudence: "As the Cartesian 'cogito ergo sum' became the point of departure of rationalistic philosophy, so the establishment of government and law upon reason made Hugo Grotius the founder of an independent and purely rationalistic system of natural law." In the late 1650s Leiden was a place where one could study Cartesian philosophy. Sometime between 1656 and 1661, it appears that Spinoza did some formal study of philosophy at the University of Leiden.
The philosophy of Spinoza (Spinozism) was a systematic answer to Descartes' famous dualist theory that the body and spirit are separate. Modern pantheism (1670s) Pantheism was popularized in the modern era as both a theology and a philosophy based on the work of the 17th-century Dutch Jewish philosopher Baruch Spinoza, whose Ethics was an answer to Descartes' dualism. Spinoza is regarded as the chief source of modern pantheism. Spinoza held that body and spirit are the same, and this monism is a fundamental quality of his philosophy. He was described as a "God-intoxicated man," and used the word God to describe the unity of all substance. Although the term pantheism was not coined until after his death, Spinoza is regarded as its most celebrated advocate. Early liberalism (foundations of liberalism) (17th century) European liberalism, Isaiah Berlin wrote, "wears the appearance of a single coherent movement, little altered during almost three centuries, founded upon relatively simple foundations, laid by Locke or Grotius or even Spinoza; stretching back to Erasmus and Montaigne..." As Bertrand Russell noted in his A History of Western Philosophy (1945): "Descartes lived in Holland for twenty years (1629–49), except for a few brief visits to France and one to England, all on business. It is impossible to exaggerate the importance of Holland in the seventeenth century, as the one country where there was freedom of speculation. Hobbes had to have his books printed there; Locke took refuge there during the five worst years of reaction in England before 1688; Bayle (of the Dictionary) found it necessary to live there; and Spinoza would hardly have been allowed to do his work in any other country." Russell described early liberalism in Europe: "Early liberalism was a product of England and Holland, and had certain well-marked characteristics. It stood for religious toleration; it was Protestant, but of a latitudinarian rather than of a fanatical kind; it regarded the wars of religion as silly..." As Russell Shorto states: "Liberalism has many meanings, but in its classical sense it is a philosophy based on individual freedom. History has long taught that our modern sensibility comes from the eighteenth century Enlightenment. In recent decades, historians have seen the Dutch Enlightenment of the seventeenth century as the root of the wider Enlightenment. And at the center of this sits the city of Amsterdam." Amsterdam, to Shorto, was not only the first city in Europe to develop the cultural and political foundations of what we now call liberalism – a society focused on the concerns and comforts of individuals, run by individuals acting together, and tolerant of religious, ethnic, and other differences – but also an exporter of these beliefs to the rest of Europe and the New World. Cartesianism (1630s–1640s) If Descartes is still considered the father of modern philosophy, the Dutch Republic can be called its cradle. Cartesianism is the name given to the philosophical doctrine of René Descartes, who is often regarded as the first thinker to emphasize the use of reason to develop the natural sciences. Cartesianism had been controversial for several years before 1656. Descartes himself lived in the Dutch Republic for some twenty years (1628–1649), served for a while in the army of the Dutch military leader Prince Maurice of Orange-Nassau, and developed a fascination for practical technology.
In the 1630s, while staying in the Dutch city of Deventer, Descartes worked on a text which was eventually published as Traité de l'Homme (1664). Throughout his writing, he used words such as clock, automaton, and self-moving machine as interchangeable constructs. He postulated an account of the physical world that was thoroughly materialistic. His mechanical view of nature replaced the organic model which had been popular since the Renaissance. His Discours de la méthode (1637) was originally published at Leiden, and his Principia philosophiae (1644) appeared from the presses at Amsterdam. In the 1630s and 1640s, Descartes's ideas gained a foothold at the Dutch universities. Spinozism (1660s–1670s) Spinozism is the monist philosophical system of the Dutch-Jewish philosopher Baruch Spinoza which defines "God" as a singular self-subsistent substance, with both matter and thought as its attributes. Affect (philosophy) (1670s) Affect (affectus or adfectus in Latin) is a concept, used in the philosophy of Spinoza and elaborated by Henri Bergson, Gilles Deleuze and Félix Guattari, that emphasizes bodily experience. The term "affect" is central to what became known as the "affective turn" in the humanities and social sciences. Mandeville's paradox (1714) Mandeville's paradox is named after Bernard Mandeville, who showed that actions which may be qualified as vicious with regard to individuals have benefits for society as a whole. This is already clear from the subtitle of his most famous work, The Fable of The Bees: ‘Private Vices, Publick Benefits’. He states that "Fraud, Luxury, and Pride must live; Whilst we the Benefits receive" (The Fable of the Bees, ‘The Moral’). Mathematical intuitionism (1907–1908) Mathematical intuitionism was founded by the Dutch mathematician and philosopher Luitzen Egbertus Jan Brouwer. In the philosophy of mathematics, intuitionism, or neointuitionism (opposed to preintuitionism), is an approach in which mathematics is considered to be purely the result of the constructive mental activity of humans rather than the discovery of fundamental principles claimed to exist in an objective reality. That is, logic and mathematics are not considered analytic activities wherein deep properties of objective reality are revealed and applied, but are instead considered the application of internally consistent methods used to realize more complex mental constructs, regardless of their possible independent existence in an objective reality. Religion and ethics Devotio Moderna (1370s–1390s) Devotio Moderna, or Modern Devotion, was a movement for religious reform, calling for apostolic renewal through the rediscovery of genuine pious practices such as humility, obedience and simplicity of life. It began in the late fourteenth century, largely through the work of Gerard Groote, and flourished in the Low Countries and Germany in the fifteenth century, but came to an end with the Protestant Reformation. Gerard Groote, father of the movement, founded the Brethren of the Common Life; after his death, disciples established a house of Augustinian Canons at Windesheim (near Zwolle, Overijssel). These two communities became the principal exponents of Devotio Moderna. Martin Luther studied under the Brethren of the Common Life at Magdeburg before going on to the University of Erfurt. Another famous member of the Brethren of the Common Life was Desiderius Erasmus of Rotterdam.
Devotio Moderna, an undogmatic form of piety which some historians have argued helped to pave the road for the Protestant Reformation, is best known today through its influence on Thomas à Kempis, the author of The Imitation of Christ, a book which proved highly influential for centuries. Mennonites (1536) The Mennonites are a Christian group based around the church communities of Anabaptist denominations named after Menno Simons (1496–1561) of Friesland. Through his writings, Simons articulated and formalized the teachings of earlier Swiss founders. The teachings of the Mennonites were founded on their belief in both the mission and ministry of Jesus Christ, which they held to with great conviction despite persecution by various Roman Catholic and Protestant states. Dutch Reformed Church (1571) The Dutch Reformed Church (in Dutch: Nederlandse Hervormde Kerk or NHK) was a Reformed Christian denomination. It developed during the Protestant Reformation, breaking away from the Roman Catholic Church. It was founded in the 1570s and lasted until 2004, the year it merged with the Reformed Churches in the Netherlands and the Evangelical Lutheran Church in the Kingdom of the Netherlands to form the Protestant Church in the Netherlands. Arminianism (1620) Arminianism is based on the theological ideas of the Dutch Reformed theologian Jacobus Arminius (1560–1609) and his historic supporters known as the Remonstrants. His teachings held to the five solae of the Reformation, but they were distinct from the particular teachings of Martin Luther, Zwingli, John Calvin, and other Protestant Reformers. Arminius (Jacobus Hermanszoon) was a student of Beza (successor of Calvin) at the Theological University of Geneva. Many Christian denominations have been influenced by Arminian views on the will of man being freed by grace prior to regeneration, notably the Baptists in the 17th century, the Methodists in the 18th century and the Seventh-day Adventist Church. John Wesley was influenced by Arminianism, and Arminianism was in turn an important influence in Methodism, which developed out of the Wesleyan movement. Some assert that Universalists and Unitarians in the 18th and 19th centuries were theologically linked with Arminianism. First synagogue to be established in the New World (Americas) (1636) The first synagogue of the New World, the Kahal Zur Israel Synagogue, was founded by Dutch Jews in Recife, Brazil, and erected in 1636. Its foundations have recently been discovered, and the 20th-century buildings on the site have been altered to resemble a 17th-century Dutch synagogue. Jansenism (1640s) Jansenism was a Catholic theological movement, primarily in France, that emphasized original sin, human depravity, the necessity of divine grace, and predestination. The movement originated from the posthumously published work (Augustinus) of the Dutch theologian Cornelius Jansen, who died in 1638. It was first popularized by Jansen's friend Abbot Jean Duvergier de Hauranne, of Saint-Cyran-en-Brenne Abbey, and after Duvergier's death in 1643, was led by Antoine Arnauld. Through the 17th and into the 18th centuries, Jansenism was a distinct movement within the Catholic Church. The theological centre of the movement was the convent of Port-Royal Abbey, Paris, which was a haven for writers including Duvergier, Arnauld, Pierre Nicole, Blaise Pascal, and Jean Racine.
First Jewish congregation to be established in (the United States) North America (1654) Congregation Shearith Israel, the Spanish and Portuguese Synagogue in the City of New Amsterdam, was founded in 1654, the first Jewish congregation to be established in North America. Its founders were twenty-three Jews, mostly of Spanish and Portuguese origin, who had been living in Recife, Brazil. When the Portuguese defeated the Dutch for control of Recife and brought with them the Inquisition, the Jews of that area left. Some returned to Amsterdam, where they had originated. Others went to places in the Caribbean such as St. Thomas, Jamaica, Surinam and Curaçao, where they founded sister Sephardic congregations. One group of twenty-three Jews, after a series of unexpected events, landed in New Amsterdam. After being initially rebuffed by the anti-Semitic Governor Peter Stuyvesant, Jews were given official permission to settle in the colony in 1655. These pioneers fought for their rights and won permission to remain, marking the founding of Congregation Shearith Israel. Scientific instruments Telescope (optical telescope) (1608) The first historical records of a telescope appear in patents filed in 1608 by Hans Lippershey and Jacob Metius. A description of Lippershey's instrument quickly reached Galileo Galilei, who created an improved version in 1609, with which he made the observations found in his Sidereus Nuncius of 1610. Huygens eyepiece (first compound eyepiece) (1670s) Huygens eyepieces consist of two plano-convex lenses with the plane sides towards the eye, separated by an air gap. The lenses are called the eye lens and the field lens. The focal plane is located between the two lenses. The eyepiece was invented by Christiaan Huygens in the late 1660s and was the first compound (multi-lens) eyepiece. Huygens discovered that two air-spaced lenses can be used to make an eyepiece with zero transverse chromatic aberration. These eyepieces work well with the very long focal length telescopes of Huygens' day (they were used with single-element, long focal length, non-achromatic refracting telescopes, including very long focal length aerial telescopes). This optical design is now considered obsolete, since with today's shorter focal length telescopes the eyepiece suffers from short eye relief, high image distortion, chromatic aberration, and a very narrow apparent field of view. Since these eyepieces are cheap to make, they can often be found on inexpensive telescopes and microscopes. Because Huygens eyepieces do not contain cement to hold the lens elements, telescope users sometimes use these eyepieces in the role of "solar projection", i.e. projecting an image of the Sun onto a screen; other, cemented eyepieces can be damaged by the intense, concentrated light of the Sun. Microorganisms (1670s) Using an improved simple microscope, in 1673 Antonie van Leeuwenhoek became the first to discover, observe, describe, study and conduct scientific experiments with single-celled organisms, which he originally referred to as animalcules and which are now referred to as micro-organisms or microbes. For these observations he created at least 25 simple microscopes, of differing types, of which only nine survive. His simple microscopes were made of silver or copper frames holding a specially shaped single glass sphere that acted as a small lens. The smaller the sphere, the greater the magnification. Those that have survived are capable of magnification up to 275 times.
It is suspected that Van Leeuwenhoek possessed units that could magnify up to 500 times. Cycloidal pendulum (1673) The cycloid pendulum was invented by Christiaan Huygens in 1673. Its purpose is to overcome the lack of isochronism of the ordinary simple pendulum. This is achieved by making the mass point move on a cycloid instead of a circular arc. Pyrometer (1739) The pyrometer, invented by Pieter van Musschenbroek, is a temperature measuring device. A simple type uses a thermocouple placed either in a furnace or on the item to be measured. The voltage output of the thermocouple is read from a meter. Many different types of thermocouple are available, for measuring temperatures from −200 °C to above 1500 °C. Leyden jar (first practical capacitor) (1745–1746) A Leyden jar, or Leiden jar, is a device that "stores" static electricity between two electrodes on the inside and outside of a glass jar. It was the original form of the capacitor (originally known as a "condenser"). It was invented independently by the German cleric Ewald Georg von Kleist on 11 October 1745 and by the Dutch scientist Pieter van Musschenbroek of Leiden (Leyden) in 1745–1746, and was named for the city. The Leyden jar was used to conduct many early experiments in electricity, and its discovery was of fundamental importance in the study of electricity. Previously, researchers had to resort to insulated conductors of large dimensions to store a charge; the Leyden jar provided a much more compact alternative. Like many early electrical devices, there was no particular use for the Leyden jar at first, other than to allow scientists to do a greater variety of electrical experiments. Benjamin Franklin, for example, used a Leyden jar to store electricity from lightning in his famous kite experiment in 1752, by which he proved that lightning was really electricity. The idea for the Leyden jar was arrived at independently by two parties: the German scientist and jurist Ewald Georg von Kleist, and the Dutchmen Pieter van Musschenbroek and Andreas Cunaeus. These scientists developed the Leyden jar while working under a theory that saw electricity as a fluid, and hoped to develop the jar to "capture" this fluid. In 1745 von Kleist lined a glass jar with silver foil and charged the foil with a friction machine. Kleist was convinced that a substantial electric charge could be collected when he received a significant shock from the device. The effects of this "Kleistian jar" were independently discovered around the same time by Van Musschenbroek and Cunaeus at the University of Leiden. Van Musschenbroek communicated the discovery to the French scientific community, where the device came to be called the Leyden jar. Eisinga Planetarium (1781) The Eisinga Planetarium (Royal Eise Eisinga Planetarium) was built by Eise Eisinga in his home in Franeker, Friesland. It took Eisinga seven years to build his planetarium, completing it in 1781. The orrery still exists and is the world's oldest working planetarium. Kipp's apparatus (1860) Kipp's apparatus, also called a Kipp generator, is designed for the preparation of small volumes of gases. It was invented around 1860 by the Dutch pharmacist Petrus Jacobus Kipp and was widely used in chemical laboratories and for demonstrations in schools into the second half of the 20th century. Phase contrast microscope (1933) In optical microscopy many objects, such as cell parts in protozoans, bacteria and sperm tails, are essentially fully transparent unless stained (and therefore killed).
The difference in densities and composition within these objects, however, often gives rise to changes in the phase of light passing through them, hence they are sometimes called "phase objects". Using the phase-contrast technique makes these structures visible and allows the study of living specimens. This phase-contrast technique proved to be such an advance in microscopy that the Dutch physicist Frits Zernike was awarded the Nobel Prize in Physics in 1953. Magnetic horn (1961) The magnetic horn (also known as the Van der Meer horn) is a high-current, pulsed focusing device, invented by the Dutch physicist Simon van der Meer at CERN. It selects pions and focuses them into a sharp beam. Its original application was in the context of neutrino physics, where beams of pions have to be tightly focused. When the pions then decay into muons and neutrinos or antineutrinos, an equally well-focused neutrino beam is obtained. The muons were stopped in a wall of 3000 tons of iron and 1000 tons of concrete, leaving the neutrinos or antineutrinos to reach the Gargamelle bubble chamber. Sports and games Kolf (forerunner of modern golf) (13th century) A golf-like game (kolf in Dutch) is recorded as taking place on 26 February 1297, in Loenen aan de Vecht, where the Dutch played a game with a stick and leather ball. The winner was whoever hit the ball with the fewest strokes into a target several hundred yards away. Some scholars argue that this game of putting a small ball in a hole in the ground using clubs was also played in the 17th-century Netherlands and that this predates the game in Scotland. Figure skating (prototype) (15th–17th centuries) The Dutch played a significant role in the history of ice skating (including speed skating and figure skating). The first depiction of ice skating in a work of art was made in the 15th century. The picture shows Saint Lidwina, patron saint of ice skaters, falling on the ice. Another important aspect is a man seen in the background, who is skating on one leg. This means that his skates must have had sharp edges similar to those found on modern ice skates. Until the 17th century, ice skating was mostly used for transportation. Some of the Stuarts (including King Charles II of England) who had fled to the Dutch Republic during the Cromwellian period later returned to Britain, bringing with them the new sport. Upon his return to England in 1660, the King brought two innovations in ice skating – a pair of iron skates and the Dutch roll. The Dutch roll was the first form of a gliding or skating motion made possible by the iron skate's two edges. However, while the English went on to develop modern figure skating, the focus of the Dutch remained speed skating. Speed skating (15th–17th centuries) Speed skating, which had developed in the Netherlands in the 17th century, was given a boost by the innovations in skate construction. Speed skating, or speedskating, is a competitive form of skating in which skaters race each other over a certain distance. Types of speed skating are long track speed skating, short track speed skating and marathon speed skating. In the modern Olympic Games, long-track speed skating is usually referred to as just "speed skating", while short-track speed skating is known as "short track". Yachting (sport sailing) (17th century) Sailing, also known as yachting, is a sport in which competitors race from point to point, or around a race course, in sail-powered boats.
Yachting refers to recreational sailing or boating, the specific act of sailing or using other water vessels for sporting purposes. The invention of sailing is prehistoric, but the racing of sailing boats is believed to have started in the Netherlands some time in the 17th century. While living in the Dutch Republic, King Charles II of England fell in love with sailing, and in 1660 took home a 66-foot yacht, a gift from the Dutch, which he called Mary. The sport's popularity spread across the British Isles. The world's first yacht club was founded in Cork, Ireland in 1720. International Skating Union (1892) The International Skating Union (ISU) is the international governing body for competitive ice skating disciplines, including figure skating, synchronized skating, speed skating, and short track speed skating. It was founded in Scheveningen, Netherlands, in 1892, making it the oldest governing international winter sport federation and one of the oldest international sport federations. The first official World Championships in Speed Skating (open to men only) directly under the auspices of the ISU were held in Amsterdam in 1893. Korfball (1902) Korfball (korfbal in Dutch) is a mixed gender team sport, with similarities to netball and basketball. A team consists of eight players: four female and four male. A team also includes a coach. It was founded in the Netherlands in 1902 by Nico Broekhuysen. Cruyff Turn (1974) The Cruijff Turn (also known as the Cruyff Turn) is a famous dribbling trick in football, perfected by the Dutch football player Johan Cruijff, for whom the evasive trick was named. To make this move, the player first looks to pass or cross the ball. However, instead of kicking it, he drags the ball behind his planted foot with the inside of his other foot, turns through 180 degrees and accelerates away. The trick was famously employed by Cruijff in the 1974 FIFA World Cup, first seen in the Dutch match against Sweden, and was soon widely copied. Total Football (1970s) The foundations for Total Football (Dutch: totaalvoetbal) were laid by the Englishman Jack Reynolds, who was the manager of AFC Ajax. Rinus Michels, who played under Reynolds, later became manager of Ajax and refined the concept into what is known today as "Total Football", using it in his training for the Ajax Amsterdam squad and the Netherlands national football team in the 1970s. It was further refined by Stefan Kovacs after Michels left for FC Barcelona. Johan Cruyff was the system's most famous exponent; due to his style of play, he is still referred to as the total footballer. Its cornerstone was a focus on positional interchange. The invention of totaalvoetbal helped lay the foundations for the significant successes of Dutch football at both club and international level in the 1970s. During that decade, Dutch football rose from almost total obscurity to become a powerhouse in world football. In an interview published in the 50th anniversary issue of World Soccer magazine, the captain of the Brazilian team that won the 1970 FIFA World Cup, Carlos Alberto, said: "The only team I’ve seen that did things differently was Holland at the 1974 World Cup in Germany. Since then everything looks more or less the same to me…. Their ‘carousel’ style of play was amazing to watch and marvellous for the game." Tiki-taka (1990s) FC Barcelona and the Spain national football team play a style of football known as Tiki-taka that has its roots in Total Football.
Johan Cruyff developed Tiki-taka (commonly spelled tiqui-taca in Spanish) during his time as manager of FC Barcelona (1988–1996). The style was successfully adopted by the all-conquering Spain national football team (2008–2012) and Pep Guardiola's Barcelona team (2009–2011). The Tiki-taka style differs from Total Football in that it focuses on ball movement rather than positional interchange. Technology and engineering First pound lock in Europe (1373) The Netherlands revived the construction of canals during the 13th–14th centuries, a practice that had generally been discontinued since the fall of the Roman Empire. The Dutch also contributed to the development of canal construction technology, for example by introducing the first flash locks in Europe. The first pound lock in Europe was built by the Dutch in 1373 at Vreeswijk, where a canal from Utrecht joins the river Lek. Thermostat (automatic temperature regulator) (1620s) Around the 1620s, Cornelis Drebbel developed an automatic temperature control system for a furnace, motivated by his belief that base metals could be turned to gold by holding them at a precise constant temperature for long periods of time. He also used this temperature regulator in an incubator for hatching chickens. Feedback control system (1620s) Feedback control has been used for centuries to regulate engineered systems. In the 17th century, Drebbel invented one of the earliest devices to use feedback, a chicken incubator that used a damper controlled by a thermostat to maintain a constant temperature. Magic lantern (first practical image projector; the forerunner of the modern slide projector) (1659) The magic lantern is an optical device, an early type of image projector developed in the 17th century. People have been projecting images using concave mirrors and pin-hole cameras (camera obscura) since Roman times, but glass lens technology was not sufficiently developed to make advanced optical devices (such as the telescope and microscope) until the 17th century. With pinhole cameras and the camera obscura it was only possible to project an image of an actual scene, such as an image of the sun, onto a surface. The magic lantern, on the other hand, could project a painted image onto a surface, and marks the point where cameras and projectors became two different kinds of devices. There has been some debate about who the original inventor of the magic lantern was, but the most widely accepted theory is that Christiaan Huygens developed the original device in the late 1650s. However, other sources give credit to the German priest Athanasius Kircher, who describes a device similar to the magic lantern in his book Ars Magna Lucis et Umbrae. Huygens is credited because of his major innovation in lantern technology: the replacement of images etched on mirrors, as in earlier lanterns such as Kircher's, with images painted on glass. This is what paved the way for the use of colour and for double-layered slide projections (generally used to simulate movement). The first allusion to a 'magic lantern' is by Huygens in the 1650s, and he is generally credited with inventing it – though he did not want to admit it, considering it frivolous. Huygens was the first to describe a fully functioning magic lantern, one he made, and wrote about it in a work of 1659. Huygens' magic lantern has been described as the predecessor of today's slide projector and the forerunner of the motion picture projector. Images were hand painted onto the glass slide until the mid-19th century, when photographic slides were employed.
Huygens introduced this curiosity to the Danish mathematician Thomas Walgenstein, who realized its commercial value for entertainment and traveled through Europe – mostly France and Italy – demonstrating his machine to foreign princes and selling them replicas for their own amusement. The forerunner of the modern slide projector as well as of moving pictures, magic lanterns retained their popularity for centuries and were also the first optical toy to be used for family entertainment in the home. Fire hose (1673) In Amsterdam, the Superintendent of the Fire Brigade, Jan van der Heyden, and his son Nicholaas took firefighting to its next step with the fashioning of the first fire hose in 1673. Gunpowder engine (first practical rudimentary internal combustion piston engine) (1678–80) A gunpowder engine, also known as an explosion engine or Huygens' engine, is a type of internal combustion engine using gunpowder as its fuel. It is considered essentially the first rudimentary internal combustion piston engine. The concept was first explored during the 17th century, most notably by the Dutch scientist Christiaan Huygens. In 1678 he outlined a gunpowder engine consisting of a vertical tube containing a piston. Gunpowder was inserted into the tube and lit through a small hole at the base, like a cannon. The expanding gases would drive the piston up the tube until it reached a point near the top, where the piston uncovered holes in the tube that allowed any remaining hot gases to escape. The weight of the piston and the vacuum formed by the cooling gases in the now-closed cylinder drew the piston back into the tube, lifting a test mass to provide power. According to sources, a single example of this sort of engine was built in 1678 or 1679 using a cannon as the cylinder. The cylinder was held down to a base where the gunpowder sat, making it a breech-loading design. The gases escaped via two leather tubes attached at the top of the barrel. When the piston reached them the gases blew the tubes open, and when the pressure fell, gravity pulled the leather down, causing the tubes to droop against the side of the cylinder, sealing the holes. Huygens presented a paper on his invention in 1680, A New Motive Power by Means of Gunpowder and Air. By 1682, the device had successfully shown that a dram (1/16th of an ounce) of gunpowder, in a cylinder seven or eight feet high and fifteen or eighteen inches in diameter, could raise seven or eight boys (about 1,100 pounds), holding the end of a rope, into the air. Hollander beater (1680s) The Hollander beater is a machine developed by the Dutch in 1680 to produce pulp from cellulose-containing plant fibers. It replaced stamp mills for preparing pulp because the Hollander could produce in one day the same quantity of pulp that a stamp mill could produce in eight. Gas lighting (1783) In 1783, the Maastricht-born chemist Jan Pieter Minckelers used coal gas for lighting and developed the first form of gas lighting. Meat slicer (1898) A meat slicer, also called a slicing machine, deli slicer or simply a slicer, is a tool used in butcher shops and delicatessens to slice meats and cheeses. The first meat slicer was invented by Wilhelm van Berkel (Wilhelmus Adrianus van Berkel) in Rotterdam in 1898. Older models of meat slicer may be operated by crank, while newer ones generally use an electric motor. Pentode (1926) A pentode is an electronic device having five active electrodes.
The term most commonly applies to a three-grid vacuum tube (thermionic valve), which was invented by the Dutchman Bernhard D.H. Tellegen in 1926. Philishave (1939) Philishave was the brand name for electric shavers manufactured by the Philips Domestic Appliances and Personal Care unit of Philips (in the US, the Norelco name is used). The Philishave shaver was invented by Philips engineer Alexandre Horowitz, who used rotating cutters instead of the reciprocating cutters that had been used in previous electric shavers. Gyrator (1948) A gyrator is a passive, linear, lossless, two-port electrical network element invented by Tellegen as a hypothetical fifth linear element after the resistor, capacitor, inductor and ideal transformer. Traffic enforcement camera (1958) The Dutch company Gatsometer BV, founded by the 1950s rally driver Maurice Gatsonides, invented the first traffic enforcement camera. Gatsonides wished to better monitor his speed around the corners of a race track and came up with the device in order to improve his time around the circuit. The company developed the first radar for use with road traffic and is the world's largest supplier of speed-monitoring camera systems. Because of this, in some countries speed cameras are sometimes referred to as "Gatsos". They are also sometimes referred to as "photo radar", even though many of them do not use radar. The first systems, introduced in the late 1960s, used film cameras, replaced by digital cameras beginning in the late 1990s. Variomatic (1958) Variomatic is the stepless, fully automatic transmission of the Dutch car manufacturer DAF, originally developed by Hub van Doorne. The Variomatic was introduced in 1958 (DAF 600) as the first automatic gearbox made in the Netherlands, and it continues in use in motor scooters. Variomatic was the first commercially successful continuously variable transmission (CVT). Red light camera (1965) A red light camera is a traffic enforcement camera that captures an image of a vehicle that enters an intersection against a red traffic light. By automatically photographing such vehicles, the camera produces evidence that assists authorities in their enforcement of traffic laws. The first red light camera system was introduced in 1965, using tubes stretched across the road to detect the violation and trigger the camera. One of the first developers of these red light camera systems was the Dutch company Gatsometer BV. Stochastic cooling (1968) Stochastic cooling is a form of particle beam cooling. It is used in some particle accelerators and storage rings to control the emittance of particle beams. This process uses the electrical signals that the individual charged particles generate in a feedback loop to reduce the tendency of individual particles to move away from the other particles in the beam. This technique was invented and applied at the Intersecting Storage Rings, and later the Super Proton Synchrotron, at CERN in Geneva, Switzerland by the Dutch physicist Simon van der Meer. By concentrating the particles close to the required energy, the technique improved the beam quality and, inter alia, brought the discovery of the W and Z bosons within reach. Clap skate (1980) The clap skate (also called clapskates, slap skates, slapskates) is a type of ice skate used in speed skating. Clap skates were developed at the Faculty of Human Movement Sciences of the Vrije Universiteit of Amsterdam, led by Gerrit Jan van Ingen Schenau, although the idea is much older.
Van Ingen Schenau, who started work on a hinged speed skate in 1979, created his first prototype in 1980 and finished his PhD thesis on the subject in 1981, using the premise that a skater would benefit from an extended movement that keeps the blade on the ice, allowing the calf muscles more time to exert force. Cremulator (1981) The Cremulator is a machine developed by the Dutch company ALL Europe in 1981. It is used after cremation: on average about 3 kg of ash remains, and these ash residues are reduced in the cremulator for subsequent scattering or placement in an urn. It is also called an ash mill. The Cremulator is now further developed by DFW Europe, a cremation equipment manufacturer in the Netherlands. Transportation Ice skate improvements (14th–15th centuries) In the 14th century, the Dutch started using wooden platform skates with flat iron bottom runners. The skates were attached to the skater's shoes with leather straps, and poles were used to propel the skater. Around 1500, the Dutch shifted to a narrow metal double-edged blade, so the skater could now push and glide with his feet, eliminating the need for a pole. Herring Buss (15th century) A herring buss was a type of seagoing fishing vessel, used by Dutch and Flemish herring fishermen in the 15th through early 19th centuries. The buss was first adapted for use as a fishing vessel in the Netherlands after the invention of gibbing made it possible to preserve herring at sea. This made longer voyages feasible, and hence enabled Dutch fishermen to follow the herring shoals far from the coasts. The first herring buss was probably built in Hoorn around 1415. The last one was built in Vlaardingen in 1841. Yacht (1580s) The yacht was originally defined as a light, fast sailing vessel used by the Dutch navy to pursue pirates and other transgressors around and into the shallow waters of the Low Countries. Later, yachts came to be perceived as luxury, or recreational, vessels. Fluyt (16th century) The fluyt is a type of sailing vessel originally designed as a dedicated cargo vessel. Originating from the Netherlands in the 16th century, the vessel was designed to facilitate transoceanic delivery with the maximum of space and crew efficiency. The inexpensive ship could be built in large numbers. This ship class was credited with enhancing Dutch competitiveness in international trade and was widely employed by the Dutch East India Company in the 17th and 18th centuries. The fluyt was a significant factor in the 17th-century rise of the Dutch seaborne empire. Wind-powered sawmill (1592) Cornelis Corneliszoon was the inventor of the wind-powered sawmill. Prior to the invention of sawmills, boards were rived and planed, or more often sawn by two men with a whipsaw, using saddleblocks to hold the log and a pit for the pitman who worked below and got the benefit of sawdust in his eyes. Sawing was slow and required strong and durable sawmen. The topsawyer had to be the stronger of the two because the saw was pulled in turn by each man, and the lower man had the advantage of gravity. The topsawyer also had to guide the saw to produce a plank of even thickness. This was often done by following a chalkline. Early sawmills adapted the whipsaw to mechanical power, generally driven by a water wheel to speed up the process. The circular motion of the wheel was changed to the back-and-forth motion of the saw blade by a pitman, thus introducing a term used in many mechanical applications. A pitman is similar to a crankshaft used in reverse.
A crankshaft converts back-and-forth motion to circular motion. Generally only the saw was powered, and the logs had to be loaded and moved by hand. An early improvement was the development of a movable carriage, also water powered, to steadily advance the log through the saw blade. Schooner (prototype) (17th century) A schooner is a type of sailing vessel with fore-and-aft sails on two or more masts, the foremast being no taller than the rear mast(s). Such vessels were first used by the Dutch in the 16th or 17th century (though they may not have been called schooners at the time). Schooners first evolved from a variety of small two-masted gaff-rigged vessels used in the coasts and estuaries of the Netherlands in the late 17th century. Most were working craft, but some pleasure yachts with schooner rigs were built for wealthy merchants and Dutch nobility. Following the arrival of the Dutch-born Prince William III of Orange on the British throne, the British Royal Navy built a royal yacht with a schooner rig in 1695, HMS Royal Transport. This vessel, captured in a detailed Admiralty model, is the earliest fully documented schooner. Royal Transport was quickly noted for its speed and ease of handling, and mercantile vessels soon adopted the rig in Europe and in the European colonies in North America. Schooners were immediately popular with colonial traders and fishermen in North America, with the first documented reference to a schooner in America appearing in Boston port records in 1716. North American shipbuilders quickly developed a variety of schooner forms for trading, fishing and privateering. According to the language scholar Walter William Skeat, the term schooner comes from scoon, while the sch spelling comes from the later adoption of the Dutch spelling ("schoener"). Another study suggests that a Dutch expression praising ornate schooner yachts in the 17th century, "een schoone Schip", may have led to the term "schooner" being used by English speakers to describe the early versions of the schooner rig as it evolved in England and America. Land yacht (1600) The wind chariot or land yacht (zeilwagen) was designed by the Flemish-born mathematician and engineer Simon Stevin for Prince Maurice of Orange. It was a carriage with sails, of which a little model was preserved in Scheveningen until 2012. Around the year 1600, Stevin, Maurice and twenty-six others used it on the beach between Scheveningen and Petten. The carriage was propelled solely by the force of the wind and traveled faster than horse-drawn vehicles. First verified practical (navigable) submarine (1620) Cornelius Drebbel was the inventor of the first navigable submarine, built while he worked for the British Royal Navy. He designed and manufactured a steerable submarine with a leather-covered wooden frame. Between 1620 and 1624 Drebbel successfully built and tested two more, successively larger vessels. The third model had 6 oars and could carry 16 passengers. This model was demonstrated to King James I and several thousand Londoners. The submarine stayed submerged for three hours and could travel from Westminster to Greenwich and back. It was tested many times in the Thames, but never used in battle. In 2002, the British boatbuilder Mark Edwards built a wooden submarine based on the original 17th-century version by Drebbel.
This was shown in the BBC TV programme Building the Impossible in November 2002. It is a working scale model of the original, built using tools and construction methods common in 17th-century boat building, and was successfully tested under water with two rowers at Dorney Lake, diving beneath the surface and being rowed underwater for 10 minutes. Legal considerations prevented its use on the River Thames itself. First ever car equipped with a six-cylinder engine, along with four-wheel drive (1903) Spyker is credited with building and racing the first ever four-wheel-drive racing car in 1903. The Spyker 60 H.P., presented in 1903 by the Dutch brothers Jacobus and Hendrik-Jan Spijker of Amsterdam, was the first four-wheel-drive car with an internal combustion engine, as well as the first such hill-climb racer. The two-seat sports car, which was also the first ever car equipped with a six-cylinder engine, is now an exhibit in the Louwman Collection (the former Nationaal Automobiel Museum) at The Hague in the Netherlands. Others First practical national anthem (Het Wilhelmus) (1574) Wilhelmus van Nassouwe (Het Wilhelmus) is the national anthem of the Netherlands and the oldest national anthem in the world. The anthem was first written down in 1574 (during the Dutch Revolt). The Japanese anthem, Kimigayo, has the oldest (9th-century) lyrics, but a melody was only added in the late 19th century, making it a poem rather than an anthem for most of its lifespan. Although the Wilhelmus was not officially recognised as the Dutch national anthem until 1932, it has always been popular with parts of the Dutch population and resurfaced on several occasions in the course of Dutch history before gaining its present status. Notes References External links Daily Dutch Innovation Cosmos: A Personal Voyage, Episode 6: Travellers' Tales (Documentary TV Series by Carl Sagan): Part 1 (YouTube link) Part 2 (YouTube link) Part 3 (YouTube link) Full (YouTube link) Civilisation, chapter 8/13: The Light of Experience (Documentary TV Series by Kenneth Clark) Lists of inventions or discoveries Inventions and discoveries Science and technology in the Dutch Republic Science and technology in the Netherlands Dutch exploration in the Age of Discovery Expeditions from the Netherlands List of Dutch inventions Scientific revolution
4481188
https://en.wikipedia.org/wiki/AOSS
AOSS
AOSS (AirStation One-Touch Secure System) is a system by Buffalo Technology which allows a secure wireless connection to be set up with the push of a button. AirStation residential gateways incorporated a button on the unit to let the user initiate this procedure. AOSS was designed to use the maximum level of security available to both connecting devices, supporting both Wired Equivalent Privacy (WEP) and Wi-Fi Protected Access (WPA). Connection Process Association Phase: Once AOSS has been initiated on both devices via the AOSS button, the access point will change its SSID to "ESSID-AOSS" and the client will attempt to connect to it. Both devices will attempt connection for two minutes. The connection is made using a secret 64-bit WEP key known to both devices. Key Generation Phase: With both devices connected, the AP generates and transfers a unique key to the client, and an RC4 tunnel is created. The AP creates four SSIDs and encryption keys for AES, TKIP, WEP128, and WEP64, generated from a random key script. These keys are available in the user interface of the AOSS AP to be used with non-AOSS clients. Information Exchange Phase: The client notifies the AP of its encryption support. Key Transfer Phase: All four encryption keys are transmitted to the client regardless of encryption support, allowing the client to change the SSID if needed. The user does not have access to the keys through the client device. Reboot Stack: The AP applies the SSID and key for the highest level of encryption supported by the client and reboots. The previously used WEP64 key and RC4 tunnel are no longer used. The client adapter will automatically reboot or re-initialize and connect to the SSID using the proper encryption key. If a subsequent AOSS process connects with a lesser wireless encryption standard, the AP will apply the lesser standard and the Reboot Stack phase will be repeated for all connected devices. A simplified sketch of this negotiation appears at the end of this article. Compatible products The Nintendo Wi-Fi Connection used by the Nintendo DS, Nintendo DS Lite, Nintendo DSi and Nintendo 3DS is AOSS compatible. AOSS is also included in the PlayStation 3, was added to the PlayStation Portable in system software 2.00, and is available for automatic setup on the PSP from firmware 2.80. The PS Vita has also been AOSS compatible since release. AOSS functionality for all Wii models is included in version 3.0U of the Wii System software, released in August 2007 along with PAL version 3.0E. The technology was advertised through 2009. See also Wi-Fi Protected Setup References Wi-Fi
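The ordering of the five phases above can be illustrated with a small sketch. This is a hedged illustration only: AOSS is a proprietary over-the-air protocol, not a software API, so every name below is hypothetical, and the code mirrors nothing more than the cipher-selection logic described in this article.

```python
# Hypothetical illustration of AOSS cipher selection (not a real API).
CIPHERS = ["AES", "TKIP", "WEP128", "WEP64"]  # strongest first, per the article

def negotiate(ap_keys, client_supported):
    """Pick the cipher/key pair the client ends up using.

    ap_keys          -- the four generated keys, e.g. {"AES": "...", ...}
    client_supported -- ciphers the client reported during the
                        information exchange phase
    """
    # Key transfer phase: all four keys reach the client regardless of
    # support, so it could switch SSIDs later if needed.
    transferred = dict(ap_keys)
    # Reboot stack: the AP applies the strongest mutually supported cipher.
    for cipher in CIPHERS:
        if cipher in client_supported:
            return cipher, transferred[cipher]
    raise ValueError("no common cipher")

ap_keys = {c: "key-for-" + c for c in CIPHERS}  # stand-ins for generated keys
print(negotiate(ap_keys, ["TKIP", "WEP64"]))    # -> ('TKIP', 'key-for-TKIP')
```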
40930
https://en.wikipedia.org/wiki/Compatibility
Compatibility
Compatibility may refer to: Computing Backward compatibility, in which newer devices can understand data generated by older devices Compatibility card, an expansion card for hardware emulation of another device Compatibility layer, components that allow for non-native support of components Compatibility mode, software mechanism in which a software emulates an older version of software Computer compatibility, of a line of machines IBM PC compatible, computers that are generally similar to the original IBM PC, XT, or AT Hardware compatibility, between different pieces of computer hardware License compatibility, of software licenses Pin compatibility, in devices that have the same functions assigned to the same particular pins Software compatibility, between different pieces of software Software incompatibility Science and mathematics Compatibility (biological), a property which is assigned to splits of a given set of taxa Compatibility (chemical), how stable a substance is when mixed with another substance Compatibility (geochemistry), how readily a particular trace element substitutes for a major element within a mineral Compatibility (mechanics), the study of compatible deformations in continuum mechanics Electromagnetic compatibility, which studies the unintentional generation, propagation, and reception of electromagnetic energy Consistency, logical compatibility between two or more propositions Other uses Astrological compatibility, a branch of astrology that studies relationships by comparing natal horoscopes Compatibilism, a philosophical position Interpersonal compatibility, the long-term interaction between two or more individuals in terms of the ease and comfort of communication
61021101
https://en.wikipedia.org/wiki/Michael%20Osinski
Michael Osinski
Michael Osinski (born 1954) is a former Wall Street computer programmer who developed software that played a role in the subprime mortgage-fueled crisis of 2007 to 2009. The software allowed the bundling of home mortgages into bonds, which helped precipitate the subprime loan collapse that sparked the global economic meltdown. Early life and career Osinski was born in 1954 and grew up in Mobile, Alabama. Prior to his work on Wall Street, Osinski was first involved in data entry. He became a lead programmer for a telecommunications company in the early 1980s and oversaw a team that wrote a system for tracking daily sales. However, he lost his job once the software became stable. During the 1990s, Osinski worked for Lehman Brothers as a computer strategist. He helped develop BondTalk, a language capable of modeling collateralized mortgage obligations. Financial crisis The 2008 global financial crisis was caused by the crash in bonds associated with U.S. home loans, which were bundled with other loans and packaged as safe investments. This process is referred to as mortgage securitization and was first introduced in 1983. For the first time, however, software developed by Osinski, together with his wife Isabel, streamlined the process through the creation of an intricate network of bonds based on homeowners' payments. This new system, the product of 13 years of development, spread across Wall Street and allowed the exploitation of the market and of consumers. At first the software was one of the tools used by banks and traders to obtain short-term gains; later, the bonds it helped create were used as "dangerous, volatile, mispriced financial weapons." There was a significant increase in available mortgage credit due to subprime lending, but problems (e.g. increased risk of foreclosure) emerged due to relaxed credit standards. A simplified illustration of how pooled mortgage payments flow to such bonds appears at the end of this article. Retirement and farming At 45, when his software was sold, Osinski retired and set up the Widow's Hole Oyster Farm. He and his wife bought a Long Island waterfront property in 1999, which they used to cultivate oysters. The Osinskis also actively campaign against the importation of oysters from Europe, citing its risk to the American shellfish industry. References 1954 births Living people People from Mobile, Alabama Lehman Brothers people American computer programmers
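The mechanism of mortgage securitization mentioned above can be made concrete with a toy "waterfall": pooled homeowner payments are distributed to bonds in order of seniority, so the senior bond looks safe while the junior bond absorbs shortfalls. This is a generic, hedged sketch of the technique, not a reconstruction of Osinski's software or of BondTalk; all names and figures are hypothetical.

```python
# Toy two-tranche securitization waterfall (illustrative only).
def waterfall(pool_payment, senior_due, junior_due):
    """Split one period's pooled mortgage payments between two bonds."""
    senior_paid = min(pool_payment, senior_due)                # seniors are paid first
    junior_paid = min(pool_payment - senior_paid, junior_due)  # juniors get the rest
    return senior_paid, junior_paid

# A good month: all 100 due arrives, both bonds are paid in full.
print(waterfall(100.0, senior_due=60.0, junior_due=40.0))  # -> (60.0, 40.0)

# A bad month: defaults leave only 70 of the 100 due.
print(waterfall(70.0, senior_due=60.0, junior_due=40.0))   # -> (60.0, 10.0)
```

The senior bond is unaffected by the bad month, which is one reason such tranches could be marketed as safe even as the underlying loans deteriorated.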
18985311
https://en.wikipedia.org/wiki/Imageboard
Imageboard
An imageboard is a type of Internet forum that revolves around the posting of images, often alongside text and discussion. The first imageboards were created in Japan as an extension of the textboard concept. These sites later inspired the creation of a number of English-language imageboards, such as 4chan. Characteristics Imageboards, similar to bulletin board systems, are used for discussions of a variety of topics. The primary focus of imageboards, however, is directed away from text posts and is instead placed on picture posts. The two share many of the same structures, including separate forums for separate topics, as well as similar audiences. Imageboards are much more transitory with content: on some boards (especially highly trafficked ones), the thread deletion time can be as little as 10 minutes. In Japan, where imageboards are more common, topics vary widely, ranging from trains to current news. The most popular English-language imageboard, 4chan, similarly has a large variety of topics. Imageboards also differ from online galleries in that most of the works posted are not made by the poster, but are instead taken from other online sources: galleries, other imageboards, and edited pictures. Tripcodes Most imageboards and 2channel-style discussion boards allow (and encourage) anonymous posting and use a system of tripcodes instead of registration. A tripcode is the hashed result of a password; it allows one's identity to be recognized without storing any data about users. Entering a particular password will let one "sign" one's posts with the tripcode generated from that password. Trying to take another user's tripcode and compute their password from it (for instance, to make posts that appear to come from a particular person) is somewhat computationally difficult. For those who want a custom tripcode, however, there are custom tripcode generators (which are technically tripcode crackers) available, such as Meriken's Tripcode Engine and MTY_CL. In general, anonymity is considered to be one of the advantages of an imageboard, and some boards have from time to time removed the ability to post with a name altogether (known as "forced anonymous/anonymity"). A simplified sketch of the classic tripcode derivation appears at the end of this article. Secure tripcodes Because tripcodes can be cracked given enough time, some imageboards, such as 4chan and 8chan, implement a "secure" tripcode. Such tripcodes are not reproducible across different imageboards; they work by mixing into the hash a secret salt known (barring intrusion) only to the server owner. They therefore function closer to a username than to a cryptographic signature; this is why QAnon could not verify themselves on another website when 8chan went down in late 2019. Software There are two primary types of imageboard software packages in widespread use: linearly directed imageboards modeled closely after Futaba Channel (in which content is posted through hierarchical subsections of topical interest, usually denoted by a forward slash such as "/f" for female), and nonlinear imageboards modeled after Danbooru (usually indicated by the usage of a controlled folksonomic vocabulary for topical tagging and search). Futaba Channel clones There are several Futaba-inspired imageboard software packages in widespread usage today, including Vichan, Kusaba/Kusaba X, and TinyIB. Historical Futallaby is a PHP script based on the Futaba script from Futaba Channel.
Although the Futallaby source is still freely available at 1chan, it is no longer in development, and the download page recommends using Wakaba instead, stating that "Wakaba can do everything Futallaby does and so much more." Futallaby started as a translation of Futaba, later retooled to support XHTML and customizable CSS styles. Wakaba is a Perl imageboard script with an SQL backend to store thread information. It is designed to be more efficient and cleanly written than other scripts that are available, while preserving the same kind of functionality. Because of its focus on bare-bones functionality, Wakaba lacks many of the modern amenities provided by 4chan's Yotsuba and newer imageboard scripts. A few users have attempted to remedy this by forking the original project and adding in features they consider beneficial. Two FOSS examples of this are frankusr's Wakaba fork and the user-experience-focused Glaukaba. Kusaba was a PHP-based modular imageboard software package, which used MySQL. The creator discontinued the project in favor of developing TinyIB instead. Kusaba X was a continuation of Kusaba by a separate developer. Like Kusaba, Kusaba X was written in PHP and was designed with modularity in mind. It requires a MySQL or SQLite database to run. Kusaba and its derivatives were at one time some of the most popular imageboard solutions. Kusaba X has been implicitly discontinued, as it has not been updated since 2013 and its website is unavailable. Tinyboard is a PHP-based imageboard script with a MySQL backend. It was eventually discontinued and forked into Vichan. Tinyboard (and its forks) notably feature extensibility with JavaScript. Infinity is a fork of Vichan that allows user board creation. It is the basis for 8kun. Infinity has been discontinued; however, it has been forked into OpenIB, which remains supported. Active Taimaba is 420chan's proprietary continuation of Wakaba, overhauling it with many modern features and a public API. It is a hybrid Perl and PHP platform, as compared to Wakaba's static HTML output. This fork remains in use today; however, the source has never been released. TinyIB is a PHP-based imageboard software package from the original developer of Kusaba, intended to be "lightweight and efficient". It supports MySQL, PostgreSQL, SQLite, or no database at all ("flatfile"). Started in 2009, it still saw major active development as of 2021. Vichan is a fork of Tinyboard "with more features and that receives security patches". It officially supports up to PHP 7.0, and though it has security support, the development team strongly urges users to "consider other imageboard packages" due to having "next to no active development". LainChan is a fork of Vichan, intending to "actively build on it and add a lot of features and other improvements". It supports up to PHP 7.3. Lynxchan is an imageboard engine based on JavaScript and Node.js. It requires a MongoDB database to run. Danbooru-style boards Usually referred to as a "booru" (plural "boorus"). Unlike Futaba-inspired imageboard software packages, Danbooru and its derivatives aim for a non-hierarchical semantic structure in which users are able to post content and add tags, annotations, translations, artist commentary, and comments. There is also Deep Danbooru, which loads images from Danbooru sites and automatically tags them. There exist a number of different Danbooru-style imageboards, both those with shared source code and those that are not released for others to use.
The two main Danbooru derivatives are Gelbooru and MyImouto. Imageboards 2ch Dvach is a Russian imageboard that replaced the 2ch.ru imageboard (originally known as Dvach), which was shut down on January 17, 2009; it thoroughly copied the original layout, was heavily advertised over the internet, and managed to succeed the original in popularity. According to its owners, the number of posts on the /b/ board exceeded 150 million. In September 2016, the pro-government organisation Mail.ru helped to organize a "defense" against alleged DDoS attacks that took place during the same month; these events raised concerns and speculation among users, who grew suspicious of an alleged takeover by Mail.ru and criticized the owner's controversial decision to accept the "help". As of October 2018, it was widely believed that the imageboard had simply been "sold" on undisclosed terms to a pro-government organisation. The decision was heavily criticized over the risk of disclosing users' credentials, in an inherently anonymous community, to a government body, potentially violating principles of anonymity; many left the board by the end of 2016. As of 2019 it remained among the largest active Russian-speaking imageboards. 420chan An English-language imageboard based on cannabis culture, created on 20 April 2005 by Aubrey Cottle. The name is a reference to the larger 4chan and the code term 420 of the cannabis subculture. Its boards include various drug-specific boards, as well as a board featuring a chatbot named Netjester. 4chan 4chan is an English-language imageboard based on the Japanese imageboard Futaba Channel. This imageboard is based primarily upon the posting of pictures (generally related to a wide variety of topics, from anime and popular culture to politics and sports) and their discussion. The Guardian describes it as "at once brilliant, ridiculous and alarming." The site and its userbase have received attention from the media for a number of reasons, including attacks against Hal Turner on his Internet shows, distributed denial-of-service attacks against eBaum's World, taking part in Project Chanology, and multiple cases of animal abuse being reported. Many Internet memes have originated there, including lolcats, rickrolling, and Pedobear. 8chan 8chan (or Infinitechan) was a primarily English-language imageboard, although it had sub-boards dedicated to other languages. Just like 4chan, 8chan was based on posting pictures and discussion anonymously, but unlike 4chan, 8chan let its users decide what they wanted to discuss by allowing any user to create their own board dedicated to any topic, a concept first made popular by news bulletin boards like Reddit. 8chan also claimed a strong dedication to freedom of speech and allowed all content, so long as the discussion and board creation abided by United States law. However, local moderators enforced the rules of their own boards and could delete posts as they saw fit. It was partnered with the Japanese textboard 2channel. Futaba Channel Futaba Channel (ふたば☆ちゃんねる), or "Futaba" for short, is a popular, anonymous BBS and imageboard system based in Japan. Its boards usually do not distinguish between not-safe-for-work and clean content, but there is a strict barrier between two-dimensional (drawn) and three-dimensional (computer graphics (CG) and photographic) pictures that is heavily enforced and debated. Hispachan Hispachan was launched in November 2012 as a global imageboard for all Spanish-speaking countries.
Vice Magazine describes it as "a site for completely anonymous Spanish-language discussion that has proven popular among hackers since its launch in 2012". In January 2017, a school shooting in Monterrey, Mexico, was announced in advance on Hispachan. In June 2019, Vice accused some Hispachan discussions of being "collaborative misogyny". Karachan Karachan is the largest Polish imageboard, at 20 million posts, founded in 2010. Karachan has received attention from the Polish media after many trolling actions targeting Polish politicians, journalists and Pope John Paul II. As of July 2019 the site presents a (Polish-language) faux page claiming the site is "blocked" due to "invalid content"; however, a manual is known to exist explaining how to enter the actual forum and browse its contents. Its name comes from the Polish word karaczan, which means cockroach, the insect used as Karachan's logo. Krautchan A primarily German-language imageboard that was founded in 2007. The name is an allusion to the ethnophaulism Kraut for Germans. Unlike most imageboards, posters on Krautchan did not publish their postings under the generic name "Anonymous"; the German name "Bernd" was used instead, and the Krautchan community identified themselves as "Bernds" instead of "Anons". In 2009, after the Winnenden school shooting, the interior minister of Baden-Württemberg cited a post on the imageboard in a press conference that appeared to forewarn of the shooting, but it was later found to be fake. The site also featured a popular English-language board, /int/, which was the origin of the Polandball internet phenomenon in August of the same year, as well as a number of other popular memes such as Wojak. On March 21, 2018, the imageboard was shut down. Two days later the imageboard Kohlchan was founded as a replacement. Attempts to view the latter via Google Translate are redirected to NSFW content. Wizardchan Wizardchan is an imageboard primarily dedicated to male virgin culture and topics including anime, hobbies, and depression. Users on the depression board often discuss suicide or self-harm, and a controversy emerged in the board's community about whether to refer users to suicide prevention hotlines. See also Anonymous (group) Textboard Citations External links Overscript, a list of imageboard software AllChans - OverChan V.3, a list of imageboards Internet forum terminology Internet culture Internet forums
15530641
https://en.wikipedia.org/wiki/DLD
DLD
DLD can refer to: Data Retention Directive in Norwegian (Datalagringsdirektivet) Democratic League of Dardania, a political party in Kosovo Demon Lord Dante, an anime and manga series Deutsche Linux-Distribution (German Linux Distribution), a Linux distribution produced from 1992 to 1999 Dihydrolipoamide dehydrogenase Digital Life Design, a conference network DLD (software), a library package for the C programming language that performs dynamic link editing Geilo Airport, Dagali, IATA code DLD Developmental language disorder DLD (band), a Mexican rock band
2699657
https://en.wikipedia.org/wiki/Rogue%20access%20point
Rogue access point
A rogue access point is a wireless access point that has been installed on a secure network without explicit authorization from a local network administrator, whether added by a well-meaning employee or by a malicious attacker. Dangers Although it is technically easy for a well-meaning employee to install a "soft access point" or an inexpensive wireless router (perhaps to make access from mobile devices easier), it is likely that they will configure this as "open", or with poor security, and potentially allow access to unauthorized parties. If an attacker installs an access point, they are able to run various types of vulnerability scanners and, rather than having to be physically inside the organization, can attack remotely: perhaps from a reception area, an adjacent building, a car park, or, with a high-gain antenna, even from several miles away. Prevention and detection To prevent the installation of rogue access points, organizations can install wireless intrusion prevention systems to monitor the radio spectrum for unauthorized access points. A large number of wireless access points can be sensed in the airspace of a typical enterprise facility. These include managed access points in the secure network plus access points in the neighborhood. A wireless intrusion prevention system facilitates the job of auditing these access points on a continuous basis to learn whether there are any rogue access points among them. In order to detect rogue access points, two conditions need to be tested: whether or not the access point is in the managed access point list, and whether or not it is connected to the secure network. The first of these two conditions is easy to test: compare the wireless MAC address (also called the BSSID) of the access point against the managed access point BSSID list. However, automated testing of the second condition can become challenging in light of the following factors: a) the need to cover different types of access point devices, such as bridging, NAT (router), unencrypted wireless links, encrypted wireless links, different types of relations between wired and wireless MAC addresses of access points, and soft access points; b) the necessity of determining access point connectivity with acceptable response time in large networks; and c) the requirement to avoid both false positives and false negatives, which are described below. False positives occur when the wireless intrusion prevention system detects an access point not actually connected to the secure network as a wired rogue. Frequent false positives waste administrative effort spent chasing them. The possibility of false positives also hinders automated blocking of wired rogues, for fear of blocking a friendly neighborhood access point. False negatives occur when the wireless intrusion prevention system fails to detect an access point actually connected to the secure network as a wired rogue. False negatives result in security holes. If an unauthorized access point is found connected to the secure network, it is a rogue access point of the first kind (also called a "wired rogue"). On the other hand, if the unauthorized access point is found not connected to the secure network, it is an external access point. Among the external access points, if any is found to be mischievous or a potential risk (e.g., one whose settings can attract or have already attracted secure network wireless clients), it is tagged as a rogue access point of the second kind, often called an "evil twin".
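The first condition lends itself to a simple whitelist comparison. A minimal sketch, with made-up BSSIDs standing in for live scan results, follows:

```python
# A minimal sketch of the first detection condition: flag any scanned
# BSSID that does not appear in the managed access point list. Testing
# the second condition (whether the device is actually wired into the
# secure network) is the hard part and is not shown.

MANAGED_BSSIDS = {
    "00:1a:2b:3c:4d:5e",  # hypothetical managed AP, floor 1
    "00:1a:2b:3c:4d:5f",  # hypothetical managed AP, floor 2
}

def classify(scanned_bssids):
    """Split scanned BSSIDs into managed APs and unmanaged candidates."""
    seen = {b.lower() for b in scanned_bssids}
    return seen & MANAGED_BSSIDS, seen - MANAGED_BSSIDS

managed, candidates = classify(["00:1A:2B:3C:4D:5E", "66:77:88:99:AA:BB"])
for bssid in candidates:
    # An unmanaged BSSID is only a candidate: it may be a harmless
    # neighboring AP, or a wired rogue if condition two also holds.
    print("unmanaged access point seen:", bssid)
```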
Soft access point A "soft access point" (soft AP) can be set up on a Wi-Fi adapter using, for example, Windows' virtual Wi-Fi or Intel's My WiFi. This makes it possible, without the need for a physical Wi-Fi router, to share the wired network access of one computer with wireless clients connected to that soft AP. If an employee sets up such a soft AP on their machine without coordinating with the IT department and shares the corporate network through it, then this soft AP becomes a rogue AP. See also Man-in-the-middle attack Wireless intrusion prevention system MAC spoofing Wireless LAN Wireless security Legality of piggybacking References External links Roguescanner - Open source network based rogue access point detection Wireless networking Computer network security Wireless access points
1184108
https://en.wikipedia.org/wiki/Hoverspeed
Hoverspeed
Hoverspeed was a ferry company that operated on the English Channel from 1981 until 2005. It was formed in 1981 by the merger of Seaspeed and Hoverlloyd. Its last owners were Sea Containers; the company ran a small fleet of two high-speed SeaCat catamaran ferries in its final year. Hoverspeed played a part in developing the hovercraft, and ran six SR.N4 Mountbatten class hovercraft and one SEDAM N500 Naviplane. Hoverspeed last operated hovercraft on its Dover to Calais service. They were withdrawn on 1 October 2000 and Hoverspeed continued to use Seacat catamarans built by Incat. Background and formation Early attempts to consolidate operations During the early 1970s, when both Hoverlloyd and Seaspeed were struggling to return a profit, the two operators had been in negotiations on a partnership to amalgamate operations. However, management at Hoverlloyd was not convinced the UK government would sanction any form of arrangement between Seaspeed and a foreign company. The situation was exacerbated when discussions became public knowledge, and plans for a consortium were quickly abandoned. Loss-making company In late 1981, when the two companies eventually merged, the situation was dire. Despite a valuation of £110 million, combined losses were £8 million, with ticket prices 25 to 30% higher than the ferries'. Under the terms of the merger, Hoverspeed was also obliged to accept the two French hovercraft in exchange for a 10% participation in share capital by the French state-owned SNCF. The new company was spearheaded by Gerry Draper, the new Chief Executive and a former marketing director at British Airways. Draper had been involved in filling empty passenger seats aboard the new Boeing 747 jumbo jets in the early 1970s, when IATA regulations prohibited discounting. He had also been successful in turning Concorde services profitable. Nevertheless, a number of early decisions plagued the new company. First, Hoverspeed inherited an antiquated and inadequate reservation system, resulting in potential travellers having great difficulty in contacting Hoverspeed and many being told crossings were fully booked when they were not. This necessitated reversion to a very basic manual reservation system to try to cope with demand. In 1982, lost income was estimated at between £3 million and £4 million. Second, excess capacity drove profit margins down. The most damaging mistake was to increase the number of crossings operated, over 10,000 in 1982, which did not match demand; the decision to briefly re-open the Ramsgate route for the summer season was similarly counter-productive. Third, parity pricing with the ferry operators continued, even during peak season. This was a source of concern since 70% of the turnover (and traffic) was generated during the summer season between mid-June and mid-September. Despite carrying 2.5 million passengers and 400,000 vehicles, a 21% market share, with 35% fewer flights and 250 staff made redundant, the new entity continued to register losses: £5.5 million for the year 1982 and £3.5 million in 1983. The new French hovercraft, the N500, achieved only 60% reliability, did not meet expectations for ride comfort or controllability, and was eventually broken up for spares and scrap. The SR.N4 craft, moreover, could not accommodate the recently introduced double-deck and one-and-a-half-deck coaches, and this part of the market was lost. By 1984, the company was near collapse.
Management buy-out In February 1984, the UK government refused to provide further guaranteed loans, and British Rail sold the 50% ownership it had retained in the company (and its losses) for a nominal sum of £1 to a consortium of five directors. Thus, Hoverspeed was effectively given away to its own management and was wholly owned within the private sector, backed by merchant bank Kleinwort Benson providing guarantees and underwriting the cash needed to operate via NatWest. The company immediately adopted premium instead of parity pricing, justified on the grounds that a faster service was expected to be more expensive. An aggressive advertising campaign was mounted against the ferries and more effort was made to target fares accurately. For 1984, the company made a loss of £621,000. Its performance improved, with a pre-tax profit of £194,000 for 1985 and an increase in US passengers. This figure rose to £625,000 in 1986. Purchase by British Ferries Ltd Ripe for sale, Hoverspeed was sold in February 1986 by the management consortium to British Ferries (Sealink UK's holding company), owned by Sea Containers, a transport group engaged in marine container leasing, manufacturing, depot and logistics operations, railway and ferry operation, and leisure industry investment, for a sum of £5 million, each syndicate member making a profit of £600,000. Large financial gains made by former British Rail managers were to become a feature of rail privatization. In 1987, Hoverspeed returned a profit of £1.6 million on a turnover of £42.3 million per year, and £4.7 million in 1988 on a turnover of £44 million per year. Being part of a larger shipping company allowed fresh re-capitalization and cheaper fuel, as well as access to legal services. With the reduced fleet of hovercraft aging, it also permitted capital investment in more fuel-efficient vessels, with the first SeaCat catamaran services introduced in 1991 and larger Super SeaCats in 1997, the latter to achieve economies of scale in the face of stiff competition from the Channel Tunnel and the ferry companies. Routes Hoverspeed operated several routes. These were: Dover – Calais (1981–2005; Hovercraft, Seacat and SuperSeaCat service): Hoverspeed's primary service was established by British Rail-owned Seaspeed in 1968 with the car-carrying SR.N4 hovercraft The Princess Anne and The Princess Margaret. Seaspeed was not the first hovercraft service between Dover and Calais; Townsend Car Ferries had operated a passenger-only SR.N6 in 1966. Hoverlloyd also ran hovercraft services to Calais from 1966, but from Ramsgate. Seacats were introduced on the route on 2 June 1991. It was intended that the Seacats would replace the remaining SR.N4 hovercraft, but this did not happen until 2000. The Dover – Calais service saw all members of the Hoverspeed fleet in operation on it at some point. The route closed on 7 November 2005, the Seacat Diamant operating the last crossing. Dover – Boulogne-sur-Mer (1968–1993; Hovercraft and Seacat service): the Dover-Boulogne route was opened in 1966 by Townsend Car Ferries using an SR.N6 passenger-only hovercraft. In 1993 Hoverspeed closed the Dover-Boulogne route after deciding that the Hoverport at Le Portel (near Boulogne-sur-Mer) was no longer required, and the SeaCat operation moved to nearby Folkestone. Dover – Ostend (1998–2003; Seacat and Superseacat service): after the failure of the Holyman Sally Line service from Ramsgate to Ostend, Holyman became partners with Hoverspeed and moved the service to Dover.
The 81m Incat fastcraft used on the service lost their 'Holyman' prefix and became the Rapide and Diamant. Sea Containers later bought Holyman's share in the operation and the fastcraft. Folkestone – Boulogne (1993–2000; Seacat service): after transferring the Boulogne route to Folkestone, Hoverspeed used Hoverspeed Great Britain and kept the hovercraft at Dover. In 2000 the Hoverspeed service ended, and in 2001 the port at Folkestone closed to all ferry traffic after the termination of the Falcon Marfreight service. Newhaven – Dieppe (1999–2004; Superseacat and Seacat service): Hoverspeed took over the high-speed passenger service after P&O Stena Line's withdrawal the year before. The route was continued using SuperSeaCat One. Hoverspeed's last service on the route was at the end of the 2004 summer season. Despite taking bookings for 2005, they withdrew from the route: a deal that would have seen Transmanche Ferries charter Superseacat One for the route fell through. Portsmouth – Cherbourg (1990–1991): Hoverspeed's first Seacat route, using the Hoverspeed Great Britain; this was closed on 6 January 1991 after a series of technical problems throughout the operating season. Hoverspeed briefly ran services from Ramsgate Pegwell Bay Hoverport in 1982, which had been the base of Hoverlloyd hovercraft services. Fleet Hovercraft The Princess Anne (built 1968): built by British Hovercraft Corporation for Seaspeed and stretched in 1977. Transferred to Hoverspeed on creation of the company in 1981. Withdrawn in 2000. Currently at the Hovercraft Museum at HMS Daedalus. The Princess Margaret (built 1968): built by British Hovercraft Corporation for Seaspeed and stretched in 1977. Transferred to Hoverspeed on creation of the company in 1981. Withdrawn in 2000. Scrapped at the Hovercraft Museum at HMS Daedalus in 2018. Swift (built 1969): built by British Hovercraft Corporation for Hoverlloyd. Transferred to Hoverspeed on creation of the company in 1981. Withdrawn in 1991. Scrapped in 2004. Sure (built 1969): built by British Hovercraft Corporation for Hoverlloyd. Transferred to Hoverspeed on creation of the company in 1981. Withdrawn in 1983. Scrapped at Pegwell Bay between 1983 and 1987. Sir Christopher (built 1972): built by British Hovercraft Corporation for Hoverlloyd. Transferred to Hoverspeed on creation of the company in 1981. Scrapped in 1998. The Prince Of Wales (built 1977): built by British Hovercraft Corporation for Hoverlloyd. Transferred to Hoverspeed on creation of the company in 1981. Scrapped after an electrical fire in 1993. Ingénieur Jean Bertin (built 1977): a SEDAM N500 Naviplane built for SNCF by SEDAM for use on the Seaspeed route. Transferred to Hoverspeed in 1983 after a number of modifications, but returned to the SNCF later that year after Hoverspeed decided that she was not suitable for their services. Scrapped in 1985 at the Boulogne Hoverport. Incat 74 metre (243') catamarans Hoverspeed Great Britain (Incat hull 025, built 1990). A former holder of the Hales Trophy for the fastest crossing of the Atlantic Ocean, it initially entered service on Portsmouth – Cherbourg but later transferred to Dover – Calais and Boulogne, and then to Folkestone – Boulogne. It was chartered to Emeraude Ferries in 2004 and renamed Emeraude GB. In 2005 it was chartered to Aegean Speed Lines in Greece, running from Piraeus (Athens) to Folegandros as Speedrunner 1. It was in service with Hoverspeed from 1990–1991, 1993–2000 and 2002–2003. It left the fleet in 2003 and was laid up.
Hoverspeed France (Incat hull 026, built 1991) operated for Hoverspeed under several names. It was renamed Seacat Boulogne in 1993, transferred to the Isle of Man Steam Packet Company (IOMSPC) as Seacat Isle of Man in 1994, but returned under that name to Hoverspeed in 1996 and 1997. In 1996 it was renamed Seacat Norge and chartered to ColorSeacat, a joint venture between Sea Containers and Color Line. Renamed Seacat Isle of Man again in 1997, it had its final season with Hoverspeed on the Dover – Calais route in 1998 before returning to IOMSPC as Seacat Isle of Man. In 2005 it was chartered to Irish Sea Express and renamed Sea Express 1. The charter ended later that year and it reverted to Steam Packet service. On 3 February 2007 it collided with the Alaska Rainbow in thick fog in the River Mersey, sustaining severe damage to its starboard hull. During 2007 it was laid up and later repaired. It was renamed Snaefell in December 2007 and prepared for service in 2008 for the Steam Packet. It is currently operating for Seajets in Greece as HSC Caldera Vista. Hoverspeed Belgium (Incat hull 027, built 1991) was renamed Hoverspeed Boulogne before entering service in 1992. In 1993 it was transferred to Sea Containers' route from Frederikshavn to Gothenburg as Seacatamaran Danmark, shortened to Seacat Danmark in 1994. Until 1997 it provided cover on various Hoverspeed and Sea Containers ferry routes, but remained on Dover – Calais between 2000 and 2003. It was transferred to SNAV in a joint venture with Sea Containers, initially as Pescara Jet, but is now named Zara Jet. Seacat Tasmania (Incat hull 023, built 1990) operated for Hoverspeed in 1992 on the Folkestone – Boulogne route and in 1993 on Dover – Calais along with the Folkestone service. Renamed Seacat Calais in mid-1993 before being chartered out in late 1993, it returned to Hoverspeed in 1999 and ran on the Dover – Calais service. In 2000 it moved to the SNAV/Sea Containers joint venture as Croazia Jet before again returning to Hoverspeed in 2002 as Seacat France. It was chartered by Emeraude Ferries as Emeraude France for two months in 2005. It was sold to an unknown buyer for a reported US$2 million in February 2007. It is now operating in Greece for Seajets as Sea Speed Jet. Seacat Scotland (Incat hull 028, built 1992) was built for Sea Containers' Stranraer - Belfast SeaCat service, which in 2000 changed to Belfast - Troon before closing altogether on Monday 1 November 2004. SeaCat Scotland left Belfast on Thursday 28 November 2002 at 0600. It operated for Hoverspeed on the Dover - Calais route in 1992, 2003 and 2004. It is now operating in Greece for Seajets as HSC Cyclades Express. Fincantieri MDV1200 Superseacats Four Superseacats were ordered by Hoverspeed's parent company Sea Containers after the original Superseacat design built by Austal failed to meet the speed required. Superseacat One (built 1997). Entered service in 1997. After operating on Sea Containers' Frederikshavn to Gothenburg route, the vessel entered service with Hoverspeed in 2000 on the Newhaven - Dieppe route. In 2001 she moved to operate with two of her sisters on the Dover - Calais and Ostend routes. She returned to the Dieppe route in 2002, where she remained until the end of the 2004 summer season, when the Hoverspeed service from the port ended. In service with Hoverspeed 1999–2004. Left fleet in 2004, chartered. Superseacat Two (built 1997). Entered service in 1997.
Entered service with Hoverspeed in June 1997 on the Dover – Calais service, where she remained until March 1998. She then moved to the Sea Containers/Isle of Man Steam Packet Company (IOMSPC) Liverpool – Dublin service. She was replaced in that service by her sister Superseacat Three in 1999, so she moved back to Hoverspeed for the Newhaven - Dieppe service for a season. She returned to Hoverspeed in 2000 on the Newhaven - Dieppe route. In 2001 she returned to Dover to operate with two of her sisters on the Dover – Calais and Ostend routes. At some point she operated on the Heysham – Belfast route before returning again to the Dieppe route in 2003. In service with Hoverspeed 1997, 1999 and 2001. Left fleet in 2001, sold. Superseacat Three (built 1999). Entered service on the Sea Containers/Isle of Man Steam Packet Company Liverpool – Dublin service in 1999 before moving to Hoverspeed at Dover to operate with two of her sisters on the Dover – Calais and Ostend routes. She was transferred to the Baltic Sea operations of Silja Line but was not included in the 2006 sale of the company to Tallink. In service with Hoverspeed 2001. Left fleet in 2001, returned to SeaCat. Incat 81 metre (265') catamarans Diamant (Incat hull 041, built 1996). Purchased by Holyman before completion in 1996, she was launched as Holyman Express but was renamed Holyman Diamant in 1997 as a result of a competition to decide the names of the fast ferries for use on the Holyman Sally service between Ramsgate and Ostend. In 1998 Holyman Hoverspeed was formed and the vessel's name was shortened to Diamant. Her UK base of operations was moved to Dover and she received Seacat livery. Hoverspeed eventually purchased Holyman's share and Diamant came under Sea Containers ownership. She remained on the Dover – Ostend route until 2001, when she operated on the Newhaven – Dieppe route for the summer season. In 2002 she briefly operated for IOMSPC before commencing her last summer on Newhaven – Dieppe. In 2004 she transferred to Dover – Calais, where she remained until the closure of the service in November 2005. For her last two seasons in Hoverspeed service she wore a distinctive Union Flag livery in an attempt to show how British Hoverspeed was; in fact, Diamant was built in Australia, owned by a Bermuda-registered company, and registered in Nassau, Bahamas, later moving to the Italian registry as Seacat Diamant. Along with her sister she was reported as sold to Balearia. She left Liverpool (where she was chartered to the Isle of Man Steam Packet Company to provide additional capacity for the 2006 TT races period) for Oporto, Portugal (unlike Rapide, which sailed to Algeciras) on 20 June 2006, two days after her sister left Tilbury. As of July 2006, she was chartered to Balearia Eurolineas of Spain and renamed Jaume III. Rapide (Incat hull 038, built 1996). Built for Holyman as Condor 12, she initially operated for Condor Ferries, which at the time was part-owned by Holyman. She was replaced by a larger fastcraft in 1997 and moved to Holyman Sally's Ramsgate - Ostend service. As was the case with Diamant, she was renamed Holyman Rapide as a result of a competition. In 1998 she was transferred to Holyman Hoverspeed along with her sister, and the vessel's name was shortened to Rapide. She briefly returned to Condor Ferries to cover for the refit of Condor Express and operated between Poole and the Channel Islands for one month. Along with the Diamant, Rapide came under Sea Containers ownership.
She remained on the Dover – Ostend route until 2001, when she moved to the Sea Containers/IOMSPC Liverpool – Dublin service. In 2002 she moved to Heysham – Belfast, and she eventually replaced Seacat Scotland on the Belfast – Troon service in 2003, remaining there until the end of the 2004 season. In 2005 she transferred to Dover – Calais alongside her sister, where she remained until the closure of the service in the same year. She was renamed Seacat Rapide upon her transfer to the Italian register. She also received the Hoverspeed Union Flag livery. On 18 June 2006 she sailed from her lay-up spot at Tilbury, London to Algeciras. She has now been sold to Baleària Eurolineas and renamed Jaume II. Closure After sustaining losses annually since 1995 (the last financial year the company returned a pre-tax profit), it was clear by the early 2000s that Hoverspeed could no longer continue operating business as usual. Competition from the Channel Tunnel Despite the introduction of the Super SeaCats, Hoverspeed could not match Eurotunnel in either economies of scale or pricing. This became apparent in the company's pre-tax profits immediately following the opening of the Channel Tunnel in May 1994. End of duty-free sales Lucrative duty-free sales came to an end in July 1999, when the European Union abolished duty-free sales between member states. Consequently, Sea Containers announced that it refused to support Hoverspeed's losses on the English Channel and that it would cease operations on the Dover-Calais route, ending over 40 years of service. Company wind-down and sale of assets It was initially thought that both the 81m Seacats would move to Sea Containers' Mediterranean services. However, in March 2006 both the Seacat Rapide and Seacat Diamant were put up for sale by Sea Containers. Superseacat One, which had operated for Hoverspeed on its now-closed Newhaven – Dieppe (2000 and 2002–2004) and Dover – Calais / Ostend (2001) fast ferry services, was sold in April 2006 to Acciona Trasmediterránea and renamed Almudaina Dos. Although the company ended hovercraft service, it retained ownership of the remaining two SR.N4s until 2006, when they were sold to Wensley Haydon-Baillie. In 2016, following a transfer of ownership of both the land and the craft to the Homes & Communities Agency, a public campaign was launched to save one or both of the craft. By summer 2016, it was established that The Princess Anne would be saved and refurbished as a permanent exhibit at the Hovercraft Museum. The fate of The Princess Margaret, which was not in such sound structural repair, was announced on 30 May 2018: usable parts would be moved to The Princess Anne and the craft would be scrapped. The closure of Hoverspeed left a single company in the United Kingdom still operating hovercraft flights, Hovertravel. Hoverports Dover Hoverport Since the closure of Hoverspeed in 2005, the Dover Hoverport had remained unused until SpeedFerries moved to the site from the Eastern Docks. They operated from the hoverport to Boulogne from Easter 2007 until November 2008, when SpeedFerries went into administration and their services ceased. The hoverport site then lay abandoned for the second time in its thirty-year history until demolition began in May 2009, in preparation for the redevelopment of the Western Docks and a new cruise terminal. Calais and Boulogne Hoverports Following the expansion of Calais port in January 2016, the Calais Hoverport was demolished; only the Boulogne (Le Portel) hoverport site remains, albeit long abandoned and derelict.
References Notes Bibliography Transport companies established in 1981 Transport companies disestablished in 2005 Connections across the English Channel Hovercraft Defunct shipping companies of the United Kingdom 1981 establishments in England 2005 disestablishments in England
58900
https://en.wikipedia.org/wiki/Unmanned%20aerial%20vehicle
Unmanned aerial vehicle
An unmanned aerial vehicle (UAV), commonly known as a drone, is an aircraft without any human pilot, crew, or passengers on board. UAVs are a component of an unmanned aircraft system (UAS), which also includes a ground-based controller and a system of communications with the UAV. UAVs may operate under remote control by a human operator, as remotely-piloted aircraft (RPA), or with various degrees of autonomy, such as autopilot assistance, up to fully autonomous aircraft that have no provision for human intervention. UAVs were originally developed through the twentieth century for military missions too "dull, dirty or dangerous" for humans, and by the twenty-first, they had become essential assets to most militaries. As control technologies improved and costs fell, their use expanded to many non-military applications. These include aerial photography, product deliveries, agriculture, policing and surveillance, infrastructure inspections, science, smuggling, and drone racing. Terminology Many terms are used for aircraft which fly without any persons on board. The term drone has been used from the early days of aviation, being applied to remotely-flown target aircraft used for practice firing of a battleship's guns, such as the 1920s Fairey Queen and 1930s de Havilland Queen Bee. Later examples included the Airspeed Queen Wasp and Miles Queen Martinet, before ultimate replacement by the GAF Jindivik. The term remains in common use. An unmanned aerial vehicle (UAV) is defined as a "powered, aerial vehicle that does not carry a human operator, uses aerodynamic forces to provide vehicle lift, can fly autonomously or be piloted remotely, can be expendable or recoverable, and can carry a lethal or nonlethal payload". UAV is a term that is commonly applied to military use cases. However, missiles with warheads are not considered UAVs, because the vehicle itself is a munition. The term unmanned aircraft system (UAS) was adopted by the United States Department of Defense (DoD) and the United States Federal Aviation Administration (FAA) in 2005, according to their Unmanned Aircraft System Roadmap 2005–2030. The International Civil Aviation Organization (ICAO) and the British Civil Aviation Authority adopted this term, also used in the European Union's Single-European-Sky (SES) Air-Traffic-Management (ATM) Research (SESAR Joint Undertaking) roadmap for 2020. This term emphasizes the importance of elements other than the aircraft: it includes elements such as ground control stations, data links and other support equipment. Similar terms include unmanned-aircraft vehicle system (UAVS), remotely piloted aerial vehicle (RPAV) and remotely piloted aircraft system (RPAS). Many similar terms are in use. "Unoccupied" and "uninhabited" are occasionally used as alternatives to "unmanned". In addition to the software, autonomous drones also employ a host of advanced technologies that allow them to carry out their missions without human intervention, such as cloud computing, computer vision, artificial intelligence, machine learning, deep learning, and thermal sensors. Under new regulations which came into effect on 1 June 2019, the term RPAS (Remotely Piloted Aircraft System) has been adopted by the Canadian government to mean "a set of configurable elements consisting of a remotely piloted aircraft, its control station, the command and control links and any other system elements required during flight operation". The relation of UAVs to remote controlled model aircraft is unclear.
UAVs may or may not include model aircraft. Some jurisdictions base their definition on size or weight; however, the US FAA defines any uncrewed flying craft as a UAV regardless of size. For recreational uses, a drone (as opposed to a UAV) is a model aircraft that has first-person video, autonomous capabilities, or both. Classifications UAVs may be classified like any other aircraft, according to design configuration such as weight or engine type, maximum flight altitude, degree of operational autonomy, operational role, etc. Based on the weight Based on their weight, drones can be classified into five categories: nano (weighing up to 250 g), micro air vehicles (MAV) (250 g – 2 kg), miniature UAV or small (SUAV) (2–25 kg), medium (25–150 kg), and large (over 150 kg); a simple encoding of these categories appears in the sketch below. Based on the degree of autonomy Drones can also be classified based on the degree of autonomy in their flight operations. ICAO classifies uncrewed aircraft as either remotely piloted aircraft or fully autonomous. Some UAVs offer intermediate degrees of autonomy; for example, a vehicle that is remotely piloted in most contexts but has an autonomous return-to-base operation. Some aircraft types may optionally fly manned or as UAVs, which may include manned aircraft transformed into uncrewed or optionally piloted UAVs (OPVs). Based on the altitude Based on altitude, the following UAV classifications have been used at industry events such as the ParcAberporth Unmanned Systems forum: hand-held, about 2 km range; close, up to 10 km range; NATO type, up to 50 km range; tactical, about 160 km range; MALE (medium altitude, long endurance), range over 200 km; HALE (high altitude, long endurance), indefinite range; hypersonic high-speed, supersonic (Mach 1–5) or hypersonic (Mach 5+), suborbital altitude, range over 200 km; orbital, low Earth orbit (Mach 25+); CIS lunar, Earth-Moon transfer; and Computer Assisted Carrier Guidance System (CACGS) for UAVs. Based on the composite criteria An example of classification based on composite criteria is the U.S. military's unmanned aerial systems (UAS) classification of UAVs based on weight, maximum altitude and speed of the UAV component.
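The weight categories reduce to a simple threshold lookup. A minimal sketch follows; the inclusive upper bounds and the function name are assumptions for illustration:

```python
# A small lookup encoding the five weight categories above.

WEIGHT_CLASSES = [            # (upper bound in kg, category)
    (0.25, "nano"),
    (2.0, "micro air vehicle (MAV)"),
    (25.0, "small (SUAV)"),
    (150.0, "medium"),
    (float("inf"), "large"),
]

def weight_class(mass_kg: float) -> str:
    for upper_kg, name in WEIGHT_CLASSES:
        if mass_kg <= upper_kg:   # the infinite final bound always matches
            return name

print(weight_class(0.9))   # micro air vehicle (MAV)
print(weight_class(160))   # large
```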
History Early drones The earliest recorded use of an unmanned aerial vehicle for warfighting occurred in July 1849, with a balloon carrier (the precursor to the aircraft carrier) in the first offensive use of air power in naval aviation. Austrian forces besieging Venice attempted to launch some 200 incendiary balloons at the besieged city. The balloons were launched mainly from land; however, some were also launched from the Austrian ship Vulcano. At least one bomb fell in the city; however, due to the wind changing after launch, most of the balloons missed their target, and some drifted back over Austrian lines and the launching ship Vulcano. The Spanish engineer Leonardo Torres y Quevedo introduced a radio-based control system called the "Telekino" at the Paris Academy of Science in 1903, with the intention of testing an airship of his own design without risking human lives. Significant development of drones started in the 1900s, originally focused on providing practice targets for training military personnel. The earliest attempt at a powered UAV was A. M. Low's "Aerial Target" in 1916. Low confirmed that Geoffrey de Havilland's monoplane was the one that flew under control on 21 March 1917 using his radio system. Following this successful demonstration in the spring of 1917, Low was transferred to develop remotely controlled fast motor launches (D.C.B.s) with the Royal Navy in 1918, intended to attack shipping and port installations; he also assisted Wing Commander Brock in preparations for the Zeebrugge Raid. Other British unmanned developments followed, leading to the fleet of over 400 de Havilland 82 Queen Bee aerial targets that went into service in 1935. Nikola Tesla described a fleet of uncrewed aerial combat vehicles in 1915. These developments also inspired the construction of the Kettering Bug by Charles Kettering from Dayton, Ohio, and the Hewitt-Sperry Automatic Airplane, initially meant as an uncrewed plane that would carry an explosive payload to a predetermined target. Development continued during World War I, when the Dayton-Wright Airplane Company invented a pilotless aerial torpedo that would explode at a preset time. The film star and model-airplane enthusiast Reginald Denny developed the first scaled remotely piloted vehicle in 1935. Soviet researchers experimented with controlling Tupolev TB-1 bombers remotely in the late 1930s. World War II In 1940 Denny started the Radioplane Company, and more models emerged during World War II, used both to train antiaircraft gunners and to fly attack missions. Nazi Germany produced and used various UAV aircraft during the war, like the Argus As 292 and the V-1 flying bomb with a jet engine. Postwar period After World War II, development continued in vehicles such as the American JB-4 (using television/radio-command guidance), the Australian GAF Jindivik and the Teledyne Ryan Firebee I of 1951, while companies like Beechcraft offered their Model 1001 for the U.S. Navy in 1955. Nevertheless, they were little more than remote-controlled airplanes until the Vietnam War. In 1959 the U.S. Air Force, concerned about losing pilots over hostile territory, began planning for the use of uncrewed aircraft. Planning intensified after the Soviet Union shot down a U-2 in 1960. Within days, a highly classified UAV program started under the code name "Red Wagon". The August 1964 clash in the Tonkin Gulf between naval units of the U.S. and the North Vietnamese Navy initiated America's highly classified UAVs (Ryan Model 147, Ryan AQM-91 Firefly, Lockheed D-21) into their first combat missions of the Vietnam War. When the Chinese government showed photographs of downed U.S. UAVs via Wide World Photos, the official U.S. response was "no comment". During the War of Attrition (1967–1970) in the Middle East, Israeli intelligence tested the first tactical UAVs fitted with reconnaissance cameras, which successfully returned photos from across the Suez Canal. This was the first time that tactical UAVs that could be launched and landed on any short runway (unlike the heavier jet-based UAVs) were developed and tested in battle. In the 1973 Yom Kippur War, Israel used UAVs as decoys to spur opposing forces into wasting expensive anti-aircraft missiles. After the 1973 Yom Kippur War, a few key people from the team that developed this early UAV joined a small startup company that aimed to develop UAVs into a commercial product, eventually purchased by Tadiran and leading to the development of the first Israeli UAV. In 1973 the U.S. military officially confirmed that it had been using UAVs in Southeast Asia (Vietnam). Over 5,000 U.S. airmen had been killed and over 1,000 more were missing or captured.
The USAF 100th Strategic Reconnaissance Wing flew about 3,435 UAV missions during the war, at a cost of about 554 UAVs lost to all causes. In the words of USAF General George S. Brown, Commander, Air Force Systems Command, in 1972, "The only reason we need (UAVs) is that we don't want to needlessly expend the man in the cockpit." Later that year, General John C. Meyer, Commander in Chief, Strategic Air Command, stated, "we let the drone do the high-risk flying ... the loss rate is high, but we are willing to risk more of them ...they save lives!" During the 1973 Yom Kippur War, Soviet-supplied surface-to-air missile batteries in Egypt and Syria caused heavy damage to Israeli fighter jets. As a result, Israel developed the IAI Scout as the first UAV with real-time surveillance. The images and radar decoys provided by these UAVs helped Israel to completely neutralize the Syrian air defenses at the start of the 1982 Lebanon War, resulting in no pilots downed. In Israel in 1987, UAVs were first used as proof-of-concept of super-agility, post-stall controlled flight in combat-flight simulations that involved tailless, stealth-technology-based, three-dimensional thrust-vectoring flight control and jet steering. Modern UAVs With the maturing and miniaturization of applicable technologies in the 1980s and 1990s, interest in UAVs grew within the higher echelons of the U.S. military. In the 1990s, the U.S. DoD gave a contract to AAI Corporation along with the Israeli company Malat. The U.S. Navy bought the AAI Pioneer UAV that AAI and Malat developed jointly. Many of these UAVs saw service in the 1991 Gulf War. UAVs demonstrated the possibility of cheaper, more capable fighting machines, deployable without risk to aircrews. Initial generations primarily involved surveillance aircraft, but some carried armaments, such as the General Atomics MQ-1 Predator, which launched AGM-114 Hellfire air-to-ground missiles. CAPECON, a European Union project to develop UAVs, ran from 1 May 2002 to 31 December 2005. The United States Air Force (USAF) came to employ 7,494 UAVs, almost one in three USAF aircraft. The Central Intelligence Agency also operated UAVs. By 2013 at least 50 countries used UAVs. China, Iran, Israel, Pakistan, Turkey, and others designed and built their own varieties. The use of drones has continued to increase. Due to their wide proliferation, no comprehensive list of UAV systems exists. The development of smart technologies and improved electrical power systems led to a parallel increase in the use of drones for consumer and general aviation activities. As of 2021, quadcopter drones exemplify the widespread popularity of hobby radio-controlled aircraft and toys; however, the use of UAVs in commercial and general aviation is limited by a lack of autonomy and by new regulatory environments which require line-of-sight contact with the pilot. In 2020 a Kargu 2 drone hunted down and attacked a human target in Libya, according to a report from the UN Security Council's Panel of Experts on Libya published in March 2021. This may have been the first time an autonomous killer robot armed with lethal weaponry attacked human beings. Superior drone technology played a role in Azerbaijan's successes in the 2020 Nagorno-Karabakh war against Armenia. Design Crewed and uncrewed aircraft of the same type generally have recognizably similar physical components. The main exceptions are the cockpit and environmental control or life-support systems.
Some UAVs carry payloads (such as a camera) that weigh considerably less than an adult human, and as a result can be considerably smaller. Though they carry heavy payloads, weaponized military UAVs are lighter than their crewed counterparts with comparable armaments. Small civilian UAVs have no life-critical systems, and can thus be built out of lighter but less sturdy materials and shapes, and can use less robustly tested electronic control systems. For small UAVs, the quadcopter design has become popular, though this layout is rarely used for crewed aircraft. Miniaturization means that less-powerful propulsion technologies can be used that are not feasible for crewed aircraft, such as small electric motors and batteries. Control systems for UAVs are often different from those of crewed craft. For remote human control, a camera and video link almost always replace the cockpit windows; radio-transmitted digital commands replace physical cockpit controls. Autopilot software is used on both crewed and uncrewed aircraft, with varying feature sets. Aircraft configuration The primary difference from manned aeroplanes is the lack of need for a cockpit area and its windows. However, some types are adapted from piloted examples, or are designed for optional piloted or unmanned operational modes. Air safety is also less of a critical requirement for unmanned aircraft, allowing the designer greater freedom to experiment. These two factors have led to a great variety of airframe and engine configurations in UAVs. For conventional flight, the flying wing and blended wing body offer light weight combined with low drag and stealth, and are popular configurations. Larger types which carry a variable payload are more likely to feature a distinct fuselage with a tail for stability, control and trim, although the wing configurations in use vary widely. For vertical flight, the tailless quadcopter requires a relatively simple control system and is common for smaller UAVs. However, the mechanism does not scale well to larger aircraft, which tend to use a conventional single rotor with collective and cyclic pitch control, along with a stabilising tail rotor. Propulsion Traditional internal combustion and jet engines remain in use for drones requiring long range. However, for shorter-range missions electric power has almost entirely taken over. The distance record for a UAV across the North Atlantic Ocean is held by a gasoline-powered model airplane built from balsa wood and Mylar skin; Maynard Hill set the record in 2003 "when one of his creations flew 1,882 miles across the Atlantic Ocean on less than a gallon of fuel". Besides the traditional piston engine, the Wankel rotary engine is used by some drones. This type offers high power output for lower weight, with quieter and more vibration-free running. Claims have also been made for improved reliability and greater range. Small drones mostly use lithium-polymer batteries (Li-Po), while some larger vehicles have adopted hydrogen fuel cells. The energy density of modern Li-Po batteries is far less than that of gasoline or hydrogen. However, electric motors are cheaper, lighter and quieter. Complex multi-engine, multi-propeller installations are under development with the goal of improving aerodynamic and propulsive efficiency. For such complex power installations, battery elimination circuitry (BEC) may be used to centralize power distribution and minimize heating, under the control of a microcontroller unit (MCU).
Ornithopters - wing propulsion Flapping-wing ornithopters, imitating birds or insects, have been flown as micro UAVs. Their inherent stealth recommends them for spy missions. Sub-1 g micro UAVs inspired by flies, albeit using a power tether, have been able to "land" on vertical surfaces. Other projects mimic the flight of beetles and other insects. Computer control systems UAV computing capability followed the advances of computing technology, beginning with analog controls and evolving into microcontrollers, then system-on-a-chip (SOC) and single-board computers (SBC). System hardware for small UAVs is often called the flight controller (FC), flight controller board (FCB) or autopilot. Architecture Sensors Position and movement sensors give information about the aircraft state. Exteroceptive sensors deal with external information like distance measurements, while exproprioceptive ones correlate internal and external states. Non-cooperative sensors are able to detect targets autonomously, so they are used for separation assurance and collision avoidance. Degrees of freedom (DOF) refers to both the amount and quality of sensors on board: 6 DOF implies 3-axis gyroscopes and accelerometers (a typical inertial measurement unit, IMU), 9 DOF refers to an IMU plus a compass, 10 DOF adds a barometer, and 11 DOF usually adds a GPS receiver. Actuators UAV actuators include digital electronic speed controllers (which control the RPM of the motors) linked to motors/engines and propellers, servomotors (for planes and helicopters mostly), weapons, payload actuators, LEDs and speakers. Software UAV software is called the flight stack or autopilot. The purpose of the flight stack is to obtain data from sensors, control motors to ensure UAV stability, and facilitate ground control and mission planning communication. UAVs are real-time systems that require rapid response to changing sensor data. As a result, UAVs rely on single-board computers for their computational needs. Examples include Raspberry Pi or BeagleBoard single-board computers extended with autopilot shields such as NavIO or PXFMini, or systems designed from scratch on NuttX, preemptive-RT Linux, Xenomai, Orocos Robot Operating System or DDS-ROS 2.0. Civil-use open-source stacks include: ArduCopter CrazyFlie KKMultiCopter MultiWii BaseFlight (forked from MultiWii) CleanFlight (forked from BaseFlight) BetaFlight (forked from CleanFlight) iNav (forked from CleanFlight) RaceFlight (forked from CleanFlight) OpenPilot dRonin (forked from OpenPilot) LibrePilot (forked from OpenPilot) TauLabs (forked from OpenPilot) Paparazzi PX4 autopilot DroneCode (umbrella organization managing PX4 within the Linux Foundation) Due to the open-source nature of UAV software, it can be customized to fit specific applications. For example, researchers from the Technical University of Košice have replaced the default control algorithm of the PX4 autopilot. This flexibility and collaborative effort have led to a large number of different open-source stacks, some of which are forked from others, such as CleanFlight, which is forked from BaseFlight and from which three other stacks are in turn forked. Loop principles UAVs employ open-loop, closed-loop or hybrid control architectures. Open loop This type provides a positive control signal (faster, slower, left, right, up, down) without incorporating feedback from sensor data. Closed loop This type incorporates sensor feedback to adjust behavior (reduce speed to reflect tailwind, move to altitude 300 feet). The PID controller is common.
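As a concrete illustration of the closed-loop case, the following is a minimal sketch of a PID altitude-hold controller acting on a toy physics model. The gains, loop rate and 1 kg point-mass "plant" are illustrative assumptions, not values from any real flight stack (which would also handle integral windup, sensor filtering and output limits):

```python
# Minimal closed-loop PID sketch: adjust thrust from the error between
# the measured altitude and a setpoint.

class PID:
    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error: float, dt: float) -> float:
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=1.0, ki=0.3, kd=1.2)
altitude, climb_rate, dt = 0.0, 0.0, 0.02        # 50 Hz control loop
for _ in range(1000):                            # simulate 20 seconds
    thrust = pid.update(100.0 - altitude, dt)    # setpoint: 100 m
    climb_rate += (thrust - 9.81) * dt           # toy 1 kg point-mass physics
    altitude += climb_rate * dt
print(f"altitude after 20 s: {altitude:.1f} m")  # converges toward 100 m
```

In a real flight stack a loop like this would run per control axis, with the sensor fusion described above supplying the state estimate.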
Sometimes, feedforward is employed, transferring the need to close the loop further. Communications UAVs use a radio for control and exchange of video and other data. Early UAVs had only a narrowband uplink; downlinks came later. These bi-directional narrowband radio links carried command and control (C&C) and telemetry data about the status of aircraft systems to the remote operator. In most modern UAV applications, video transmission is required, so instead of having separate links for C&C, telemetry and video traffic, a broadband link is used to carry all types of data. These broadband links can leverage quality-of-service techniques and carry TCP/IP traffic that can be routed over the Internet. The radio signal from the operator side can be issued from either: Ground control – a human operating a radio transmitter/receiver, a smartphone, a tablet, a computer, or the original meaning of a military ground control station (GCS). Remote network system, such as satellite duplex data links for some military powers; downstream digital video over mobile networks has also entered consumer markets, while direct UAV control uplink over the cellular mesh and LTE has been demonstrated and is in trials. Another aircraft, serving as a relay or mobile control station – military manned-unmanned teaming (MUM-T). Modern networking standards have explicitly considered drones and therefore include optimizations. The 5G standard has mandated reduced user-plane latency to 1 ms while using ultra-reliable and low-latency communications. Autonomy The level of autonomy in UAVs varies widely. UAV manufacturers often build in specific autonomous operations, such as: Self-level: attitude stabilization on the pitch and roll axes. Altitude hold: the aircraft maintains its altitude using barometric pressure and/or GPS data. Hover/position hold: keep level pitch and roll, stable yaw heading and altitude while maintaining position using GNSS or inertial sensors. Headless mode: pitch control relative to the position of the pilot rather than relative to the vehicle's axes. Care-free: automatic roll and yaw control while moving horizontally. Take-off and landing (using a variety of aircraft or ground-based sensors and systems; see also "autoland"). Failsafe: automatic landing or return-to-home upon loss of control signal. Return-to-home: fly back to the point of takeoff (often gaining altitude first to avoid possible intervening obstructions such as trees or buildings). Follow-me: maintain relative position to a moving pilot or other object using GNSS, image recognition or a homing beacon. GPS waypoint navigation: using GNSS to navigate to an intermediate location on a travel path (the geometry involved is sketched after this list). Orbit around an object: similar to follow-me, but continuously circling a target. Pre-programmed aerobatics (such as rolls and loops). One approach to quantifying autonomous capabilities is based on OODA terminology, as suggested by a 2002 US Air Force Research Laboratory report. Full autonomy is available for specific tasks, such as airborne refueling or ground-based battery switching. Other functions available or under development include: collective flight, real-time collision avoidance, wall following, corridor centring, simultaneous localization and mapping, swarming, cognitive radio and machine learning.
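As an illustration of the navigation arithmetic behind the GPS waypoint mode above, the following sketch computes the great-circle distance and initial bearing from a GNSS fix to the next waypoint using the haversine formula. A real autopilot would feed the bearing into its heading controller and advance to the next waypoint once within some acceptance radius; the coordinates below are arbitrary examples:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Return (distance in metres, initial bearing in degrees)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    # Haversine formula for great-circle distance
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    distance = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
    # Initial bearing (forward azimuth), normalized to 0-360 degrees
    y = math.sin(dlam) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlam)
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return distance, bearing

d, b = distance_and_bearing(51.4700, -0.4543, 51.5074, -0.1278)
print(f"next waypoint: {d / 1000:.1f} km away, fly heading {b:.0f} deg")
```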
Some UAVs can switch between different flight modes, as in VTOL designs. UAVs can also implement perching on a flat vertical surface. Endurance UAV endurance is not constrained by the physiological capabilities of a human pilot. Because of their small size, low weight, low vibration and high power-to-weight ratio, Wankel rotary engines are used in many large UAVs. Their engine rotors cannot seize; the engine is not susceptible to shock-cooling during descent and it does not require an enriched fuel mixture for cooling at high power. These attributes reduce fuel usage, increasing range or payload. Proper cooling is nonetheless essential for long-term endurance: overheating and subsequent engine failure is the most common cause of drone failure. Hydrogen fuel cells may be able to extend the endurance of small UAVs to several hours. Among micro air vehicles, endurance is so far best achieved with flapping-wing UAVs, followed by planes, with multirotors last, owing to the lower Reynolds numbers at which they operate. Solar-electric UAVs, a concept originally championed by the AstroFlight Sunrise in 1974, have achieved flight times of several weeks. Solar-powered atmospheric satellites ("atmosats") designed for operating at altitudes exceeding 20 km (12 miles, or 60,000 feet) for as long as five years could potentially perform duties more economically and with more versatility than low Earth orbit satellites. Likely applications include weather monitoring, disaster recovery, Earth imaging and communications. Electric UAVs powered by microwave power transmission or laser power beaming are other potential endurance solutions. Another application for a high-endurance UAV would be to "stare" at a battlefield for a long interval (ARGUS-IS, Gorgon Stare, Integrated Sensor Is Structure) to record events that could then be played back to track battlefield activities. Reliability Reliability improvements target all aspects of UAV systems, using resilience engineering and fault-tolerance techniques. Individual reliability covers the robustness of flight controllers, ensuring safety without the excessive redundancy that would add cost and weight. In addition, dynamic assessment of the flight envelope allows damage-resilient UAVs, using non-linear analysis with specially designed control loops or neural networks. UAV software liability is trending toward the design and certification standards of crewed avionics software. Swarm resilience involves maintaining operational capabilities and reconfiguring tasks given unit failures. Applications In recent years, autonomous drones have begun to transform various application areas, as they can fly beyond the visual line of sight (BVLOS) while maximizing production, reducing costs and risks, ensuring site safety, security and regulatory compliance, and protecting the human workforce in times of a pandemic. They can also be used for consumer-related missions like package delivery, as demonstrated by Amazon Prime Air, and critical deliveries of health supplies. There are numerous civilian, commercial, military, and aerospace applications for UAVs. These include: General Recreation, disaster relief, archeology, conservation of biodiversity and habitat, law enforcement, crime, and terrorism. Commercial Aerial surveillance, filmmaking, journalism, scientific research, surveying, cargo transport, mining, manufacturing, forestry, solar farming, thermal energy, ports and agriculture.
Warfare With extensive cost reductions and advancements in UAV technology, defense forces around the globe are increasingly using them for applications such as surveillance, logistics, communication, attack and combat. As of 2020, seventeen countries have armed UAVs, and more than 100 countries use UAVs in a military capacity. The global military UAV market is dominated by companies based in the United States, China, and Israel. By sales, the US held over 60% of the military-market share in 2017. Four of the top five military UAV manufacturers are American, including General Atomics, Lockheed Martin, Northrop Grumman and Boeing, followed by the Chinese company CASC. China has established and expanded its presence in the military UAV market since 2010. Of the 18 countries that are known to have received military drones between 2010 and 2019, the top 12 all purchased their drones from China. Israeli companies mainly focus on small surveillance UAV systems; by quantity of drones, Israel exported 60.7% of the UAVs on the market in 2014, while the United States exported 23.9%; the top importers of military UAVs are the United Kingdom (33.9%) and India (13.2%). The US alone operated over 9,000 military UAVs in 2014, with General Atomics' Predator/Mariner product line and Northrop Grumman's Global Hawk among the dominant systems. For intelligence and reconnaissance missions, the inherent stealth of micro UAV flapping-wing ornithopters, imitating birds or insects, offers potential for covert surveillance and makes them difficult targets to bring down. UAVs are used for reconnaissance, attack, demining, and target practice. Civil The civilian (commercial and general) drone market is dominated by Chinese companies. Chinese drone manufacturer DJI alone had 74% of the civil market share in 2018, with no other company accounting for more than 5%, and with $11 billion forecast global sales in 2020. Following increased scrutiny of its activities, the US Interior Department grounded its fleet of DJI drones in 2020, while the Justice Department prohibited the use of federal funds for the purchase of DJI and other foreign-made UAVs. DJI is followed by the Chinese company Yuneec, the US company 3D Robotics and the French company Parrot, with a significant gap in market share. As of May 2021, 873,576 UAVs had been registered with the US FAA, of which 42% were categorized as commercial drones and 58% as recreational drones. 2018 NPD figures point to consumers increasingly purchasing drones with more advanced features, with 33 percent growth in both the $500+ and $1000+ market segments. The civil UAV market is relatively new compared to the military one. Companies are emerging in both developed and developing nations at the same time. Many early-stage startups have received support and funding from investors, as in the United States, and from government agencies, as in India. Some universities offer research and training programs or degrees. Private entities also provide online and in-person training programs for both recreational and commercial UAV use. Consumer drones are also widely used by military organizations worldwide because of the cost-effective nature of consumer products. In 2018, the Israeli military started to use the DJI Mavic and Matrice series of UAVs for light reconnaissance missions, since civil drones are easier to use and more reliable. DJI drones are also the most widely used commercial unmanned aerial systems that the US Army has employed.
DJI surveillance drones have also been used by Chinese police in Xinjiang since 2017. The global UAV market was forecast to reach US$21.47 billion by 2021, with the Indian market touching the US$885.7 million mark. Lighted drones are beginning to be used in nighttime displays for artistic and advertising purposes. Aerial photography Drones are ideally suited to capturing aerial shots in photography and cinematography, and are widely used for this purpose. Small drones avoid the need for precise coordination between pilot and cameraman, with the same person taking on both roles. However, with big drones carrying professional cine cameras, there are usually a drone pilot and a camera operator who controls camera angle and lens. For example, the AERIGON cinema drone, which is used in the production of big blockbuster movies, is operated by two people. Drones provide access to dangerous, remote or otherwise inaccessible sites. Agriculture and forestry As global demand for food grows while resources are depleted, farmland shrinks and agricultural labor is increasingly in short supply, there is an urgent need for more convenient and smarter agricultural solutions than traditional methods, and the agricultural drone and robotics industry is expected to make progress. Agricultural drones have been used in areas such as Africa to help build sustainable agriculture. The use of UAVs is also being investigated to help detect and fight wildfires, whether through observation or by launching pyrotechnic devices to start backfires. Law enforcement Police can use drones for applications such as search and rescue and traffic monitoring. Safety and security Threats Nuisance UAVs can threaten airspace security in numerous ways, including unintentional collisions or other interference with other aircraft, deliberate attacks, or by distracting pilots or flight controllers. The first incident of a drone-airplane collision occurred in mid-October 2017 in Quebec City, Canada. The first recorded instance of a drone collision with a hot air balloon occurred on 10 August 2018 in Driggs, Idaho, United States; although there was no significant damage to the balloon nor any injuries to its three occupants, the balloon pilot reported the incident to the National Transportation Safety Board, stating that "I hope this incident helps create a conversation of respect for nature, the airspace, and rules and regulations". Unauthorized UAV flights into or near major airports have prompted extended shutdowns of commercial flights. Drones caused significant disruption at Gatwick Airport during December 2018, requiring the deployment of the British Army. In the United States, flying close to a wildfire is punishable by a maximum $25,000 fine. Nonetheless, in 2014 and 2015, firefighting air support in California was hindered on several occasions, including at the Lake Fire and the North Fire. In response, California legislators introduced a bill that would allow firefighters to disable UAVs which invaded restricted airspace. The FAA later required registration of most UAVs. Security vulnerabilities By 2017, drones were being used to drop contraband into prisons. Interest in UAV cybersecurity rose greatly after the Predator UAV video-stream hijacking incident in 2009, in which Islamic militants used cheap, off-the-shelf equipment to intercept the video feeds of a UAV. Another risk is the possibility of hijacking or jamming a UAV in flight.
Several security researchers have made public some vulnerabilities in commercial UAVs, in some cases even providing full source code or tools to reproduce their attacks. At a workshop on UAVs and privacy in October 2016, researchers from the Federal Trade Commission showed they were able to hack into three different consumer quadcopters, and noted that UAV manufacturers can make their UAVs more secure by the basic security measures of encrypting the Wi-Fi signal and adding password protection. Aggression UAVs could be loaded with dangerous payloads and crashed into vulnerable targets. Payloads could include explosives or chemical, radiological or biological hazards. UAVs with generally non-lethal payloads could possibly be hacked and put to malicious purposes. Anti-UAV systems are being developed by states to counter this threat. This is, however, proving difficult. As Dr J. Rogers stated in an interview with A&T, "There is a big debate out there at the moment about what the best way is to counter these small UAVs, whether they are used by hobbyists causing a bit of a nuisance or in a more sinister manner by a terrorist actor". Countermeasures Counter unmanned air system The malicious use of UAVs has led to the development of counter unmanned air system (C-UAS) technologies, such as the Aaronia AARTOS, which have been installed at major international airports. Anti-aircraft missile systems such as the Iron Dome are also being enhanced with C-UAS technologies. Regulation Regulatory bodies around the world are developing unmanned aircraft system traffic management solutions to better integrate UAVs into airspace. The use of unmanned aerial vehicles (UAVs) or drones is becoming increasingly regulated by the national aviation authorities of individual countries. Regulatory regimes can differ significantly according to drone size and use. The International Civil Aviation Organization (ICAO) began exploring the use of drone technology as far back as 2005, which resulted in a 2011 report. France was among the first countries to set a national framework based on this report, and larger aviation bodies such as the FAA and the EASA quickly followed suit. In 2021, the FAA published a rule requiring all commercially used UAVs, and all UAVs weighing 250 g or more regardless of intent, to participate in Remote ID, which makes drone locations, controller locations, and other information public from takeoff to shutdown; this rule has since been challenged in the pending federal lawsuit RaceDayQuads v. FAA. Export controls The export of UAVs or technology capable of carrying a 500 kg payload at least 300 km is restricted in many countries by the Missile Technology Control Regime. See also List of unmanned aerial vehicles Delivery drone Drone in a Box International Aerial Robotics Competition List of films featuring drones Micromechanical Flying Insect ParcAberporth Quadcopter Radio-controlled aircraft Satellite Sentinel Project Tactical Control System UAV ground control station Unmanned underwater vehicle References Citations Bibliography Further reading Hill, J., & Rogers, A. (2014). The rise of the drones: From The Great War to Gaza. Vancouver Island University Arts & Humanities Colloquium Series. Rogers, A., & Hill, J. (2014). Unmanned: Drone warfare and global security. Between the Lines. External links How Intelligent Drones Are Shaping the Future of Warfare, Rolling Stone Magazine Wireless Avionics Robotics Emerging technologies Articles containing video clips
59629396
https://en.wikipedia.org/wiki/Jan%20Peters%20%28computer%20scientist%29
Jan Peters (computer scientist)
Jan Peters (born August 14, 1976) is a German computer scientist. He is Professor of Intelligent Autonomous Systems at the Department of Computer Science of the Technische Universität Darmstadt. Peters is renowned for his research in machine learning and robotics. Life Jan Peters graduated from the University of Hagen in 2000 with a Diplom in computer science and from the Technical University of Munich in 2001 with a Diplom in electrical engineering. From 2000 to 2001, he spent two semesters as a visiting student at the National University of Singapore. He then studied at the University of Southern California, where he earned a Master of Science degree in Computer Science and a Master of Science degree in Aerospace and Mechanical Engineering. He received his Ph.D. in Computer Science from the University of Southern California in 2007. During his Ph.D., he received a presidential fellowship, and his Ph.D. thesis received the Dick Volz Runner-Up Award for the Best US PhD Thesis, based on both thesis quality and post-graduation impact. From 2007 to 2021, he was head of the Robot Learning Group, first (up to 2011) at the Max Planck Institute for Biological Cybernetics and later at the Max Planck Institute for Intelligent Systems. Since 2011 he has been Head of the Intelligent Autonomous Systems Institute at the Technische Universität Darmstadt. In 2008, Nicholas Roy, Russ Tedrake, Jun Morimoto, and Jan Peters founded the IEEE Robotics and Automation Society's Technical Committee on Robot Learning. Awards For his contributions, he has received the Robotics & Automation Early Career Award, the highest-ranked early career award of the Institute of Electrical and Electronics Engineers, and the Young Investigator Award of the International Neural Network Society. In addition, he received an ERC Starting Grant in 2014 as well as numerous best paper awards. He was appointed Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2019 "for contributions to robot learning of dexterous motor skills". Public appearances Peters was a speaker at TEDx RheinMain 2018. Publications Peters J., Vijayakumar S., Schaal S. (2005) Natural Actor-Critic. In: Gama J., Camacho R., Brazdil P.B., Jorge A.M., Torgo L. (eds) Machine Learning: ECML 2005. ECML 2005. Lecture Notes in Computer Science, vol 3720. Springer, Berlin, Heidelberg References External links His website at the Technische Universität Darmstadt Curriculum Vitae Fellow Members of the IEEE Living people Technische Universität Darmstadt faculty University of Hagen alumni Max Planck Society faculty Technical University of Munich alumni University of Southern California alumni 1976 births
60309669
https://en.wikipedia.org/wiki/Latin%20tenses
Latin tenses
Latin has six main tenses: three non-perfect tenses (the present, future, and imperfect) and three perfect tenses (the perfect, future perfect, and pluperfect). In technical language, the first three tenses are known as the īnfectum tenses, while the three perfect tenses are known as perfectum. The two sets of tenses are made using different stems. For example, from the verb faciō 'I do' the three non-perfect tenses are faciō, faciam, faciēbam 'I do, I will do, I was doing', made with the stem faci-, and the three perfect tenses are fēcī, fēcerō, fēceram 'I did, I will have done, I had done', made with the stem fēc-. To these six main tenses can be added various periphrastic tenses, such as factūrus sum 'I am going to do'. Latin tenses do not have exact English equivalents, so that often the same tense can be translated in different ways depending on its context: for example, faciō can be translated as 'I do', 'I am doing' or 'I did', and fēcī can be translated as 'I did' and 'I have done'. In some cases Latin makes a distinction which is not made in English: for example, imperfect eram and perfect fuī both mean 'I was' in English, but they differ in Latin. In addition to these six tenses of the indicative mood, there are four tenses in the subjunctive mood: present, imperfect, perfect, and pluperfect (faciam, facerem, fēcerim, fēcissem). Participles in Latin have three tenses (present, perfect, and future) and the imperative mood has two tenses (present and future). The infinitive has two main tenses (present and perfect) as well as a number of periphrastic tenses used in reported speech. Infectum tenses Overview The present subjunctive and future indicative in faciō and other 3rd and 4th conjugation verbs are identical in the 1st person, but are different in the other persons: the present subjunctive has faciās, faciat etc., while the future has faciēs, faciet. Some verbs (namely 1st and 2nd conjugation verbs and the verb eō 'I go') have a future tense ending in -bō instead of -am, for example amābō 'I will love', vidēbō 'I will see', ībō 'I will go'. Some perfect and pluperfect tenses can be shortened by omitting v, for example amāsse for amāvisse. Present indicative The present tense of regular verbs is formed in different ways according to the conjugation of the verb. Thus 1st conjugation verbs in the first three persons singular end in -ō, -ās, -at, 2nd conjugation in -eō, -ēs, -et, 3rd conjugation in -ō, -is, -it (or -iō, -is, -it), and 4th conjugation in -iō, -īs, -it. The passive forms of these are 1st conjugation -or, -āris, -ātur, 2nd conjugation -eor, -ēris, -ētur, 3rd conjugation -or, -eris, -itur (or -ior, -eris, -itur), and 4th conjugation -ior, -īris, -ītur. (See Latin conjugation.) Irregular verbs also have various formations in the present tense, for example, sum 'I am', possum 'I am able', eō 'I go', volō 'I want', ferō 'I bring or bear'. There is no distinction of aspect in the present tense: faciō can mean 'I do (now)', 'I do (regularly)', or 'I am doing'; that is, it can be perfective, habitual, or progressive in aspect. Current situation The present tense can refer to a current situation: (Cicero) 'the Senate understands this; the Consul sees it; yet this man is still alive' (Cicero) 'perhaps you are telling the truth' Habitual The present tense can be used for habitual actions: (Cicero) 'I suffer these things every day' General truths The present, as in English, can also describe a general truth: (Catullus) 'suns can set and return again' Perfective present It can also be used performatively to describe an event which takes place at the moment of speaking: (Cicero) 'I come now to the testimony of the Dorylensians' Historic present The present tense is often used in narrative in a historic sense, referring to a past event, especially when the writer is describing an exciting moment in the story.
This is known as the 'historic present': (Caesar) 'he sees the enemy threatening ... he immediately seizes weapons from those next to him ...' According to Pinkster, the historic present is the most frequent tense used in narrative in both prose and poetry. It can replace not only the perfect tense, but also the imperfect tense: (Caesar) 'in the whole camp there is panic' (i.e. people were panicking) After dum 'while', the present indicative also has the meaning of an imperfect tense: (Ovid) 'while she was fleeing, her cloak slipped from her back and she left it behind' In Caesar, when a verb is placed initially in the sentence, as in the first example above, it is very frequently in the present tense. Another situation where the use of the historic present is frequent is in utterance verbs, such as 'they give a pledge' or 'they beg'. More than half the historic presents in Caesar are of this kind. Present with perfect continuous meaning The present can sometimes mean 'has been doing', referring to a situation that started in the past and is still continuing. In some sentences a length of time is given and the adverb iam 'now' is added: (Cicero) 'he has been living in Lilybaeum for many years now' (Cicero) 'he has been a Roman citizen for a long time now' The present tense can also be used in this meaning when combined with a temporal clause using postquam: (Terence) 'I've been trembling and shivering ever since I caught sight of her' Sometimes the clause itself has the present tense: (Martial) 'ever since he has been (lit. 'is') mine, he has never harmed me with a mistress' (Cicero) 'I've been feeling completely out of touch ever since I've been in Formiae' Another idiom is the following using the conjunction cum: multī annī sunt cum in aere meō est (Cicero) 'he has owed me money for many years now' (lit. 'there are many years that he is in my bronze') Another idiom that can be mentioned is the phrase longum est, which means 'it would take a long time' or 'it would be tedious'. It is frequently used by Cicero as well as other writers: (Nepos) 'it would be tedious to recount all the battles' Future indicative Future events or situations can be expressed using the future tense, which in 1st and 2nd conjugation verbs and eō 'I go' ends in -bō, in most other verbs in -am. The future of sum 'I am' is erō, and the future of possum 'I am able' is poterō. A future meaning can also be expressed using a periphrastic future such as factūrus sum 'I am going to do' (see below). There is no distinction in the future between perfective and imperfective aspect. Future event or situation The future tense can describe an event or a situation in the near or distant future: (Vitruvius) 'I will explain this in the next book' (Cicero) 'when I get there, I shall be expecting your letters every day' Future in subordinate clauses A difference between Latin and English is that in subordinate clauses such as 'if this happens in future', English uses the present tense, but Latin usually uses the future. (Cicero) 'I will tell you when I have some news' (lit. 'I will have') (Cato) 'if (at some future time) you eat it (i.e. cabbage) raw, dip it in vinegar' (Cicero) 'through him we shall easily get what we want' (lit.
'what we will want') Occasionally, however, a present tense can be used in the subordinate clause: (Sallust) 'if we win, everything will be safe for us' Polite requests The future can also be used for polite requests, as when Cicero sends greetings to his friend Atticus's wife and daughter: (Cicero) 'please give my greetings to Pilia and Attica' Imperfect indicative The imperfect indicative is usually formed with the endings -bam, -bās, -bat (in 3rd and 4th conjugation verbs with -ēbam or -iēbam); eō 'I go' has ībam. However, the imperfect indicative of sum 'I am' is eram and of possum 'I am able' it is poteram. The first three persons of the imperfect passive end in -bar, -bāris, -bātur. The imperfect indicative generally has an imperfective meaning and describes situations in the past. Often the imperfect can be translated into English as 'was doing', but sometimes the simple tense 'did' or expressions such as 'used to do', 'would do', 'kept doing', 'began to do', 'had been doing' are more appropriate. Situation at a particular time A common use of the imperfect is to describe a situation that already existed at a particular moment: (Livy) 'with a stick, which he was carrying in his hand, he drew a circle round the king' (Cicero) 'when I got there, the governor was taking a nap' (Petronius) 'when I got home, my soldier was lying in bed' (Cicero) '(Verres) came into the forum; his eyes were burning (with anger)' Often an expression such as tum 'then' or eō tempore 'at that time' is added: (Livy) 'at that time (the time of his death) Mettius was fighting on horseback' (Livy) 'by this time it was already winter' Vivid description The use of the imperfect rather than the perfect can be used to make a scene more vivid, as with this sentence of Cicero's: (Cicero) 'a Roman citizen was being flogged with rods in the middle of the forum of Messana, judges' The passage is commented on by Aulus Gellius. He says that the use of caedēbātur rather than caesus est creates a 'drawn-out vivid description'; that is to say, making it seem to the audience that the scene is taking place in front of them. So frequently in descriptions of battles, the imperfect is used to describe what was happening at a particular moment, as though seen through the eyes of an observer: (Caesar) 'at the same time the cavalrymen ... as they were returning to the camp, began running into the enemy who were coming towards them and once again began fleeing in another direction' 'Began doing' Another meaning is inceptive, describing a situation that began at a certain moment and continued indefinitely. Often in English it is translated with 'began': (Livy) 'after the rout began, some began rushing blindly into the water, others, while they were hesitating on the banks, were crushed' (Sallust) 'when he heard that some important people had come, he began to be agitated with alarm' (Caesar) 'after Caesar arrived in Asia, he began hearing reports that Titus Ampius had been trying to steal money from the temple of Diana in Ephesus' Habitual use The imperfect tense can describe a situation that used to take place regularly or habitually: (Cicero) 'I used to listen to him a lot' But in sentences like the following, in which the verb has a quasi-negative meaning ('he didn't write as well as he spoke'), the perfect can be used: (Cicero) 'Hortensius used to speak better than he wrote' Iterative use Similar to the above is the iterative or 'frequentative' use of the imperfect, describing something that kept on happening or which happened on an indefinite number of occasions: ([Caesar]) 'Pharnaces sent several embassies to Domitius ...
(each time) Domitius would reply ...' Geographical description Sometimes the imperfect is used for description of the surroundings as they appeared at the time of the story: (Caesar) 'a very high mountain hung over (the road)' Unfinished action Another use is to describe an action that someone was intending to do, or about to do, but which never actually took place, or which was interrupted by another event: (Tacitus) 'he was on the point of leaving the Senate house' (Tacitus) 'he would have rushed into the embrace of his daughter, who was running towards him, if the bodyguards hadn't intervened and stood in the way of both of them' (Tacitus) 'after the soldiers of the 14th legion descended from the Alps, all the more rebellious men were for carrying the standards to Vienne; but they were checked by the consensus of the better men and the legion was transported across to Britain' Pluperfect continuous meaning When the imperfect tense is used with a length of time it means 'had done' or 'had been doing', referring to a situation which had been going on for some time and was still going on. The adverb 'by now' is sometimes added: (Livy) 'which they had been desiring for a long time now' (Nepos) '(the Carthaginians) had been in possession of Sicily for several years by this time' (Livy) 'Philip was nowhere in sight, and for several hours no messenger had arrived from him' (Ovid) 'he was living alone without a wife and for a long time he had lacked any partner in his bedroom' Epistolary tenses Sometimes in letters a writer imagines himself in the position of the recipient and uses an imperfect tense to describe a situation which for the writer himself is present: (Cicero) 'as a matter of fact I am writing (lit. 'I was writing') this to you while sitting there ' (Cicero) 'I think (lit. 'was thinking') I will be in my province by the 1st Sextilis (= August)' (Cicero) 'Brutus is expecting (lit. 'was expecting') a letter from you at the moment' Other tenses can also be used from the point of view of the reader, such as the pluperfect and the perfect in the example below: (Cicero) 'there was still no news of your coming to Italy when I sent Sextus Villius with this letter for you' Potential meaning ('would be') Sometimes the imperfect of is used with a potential meaning ('would be'): (Quintilian) 'teaching would be completely superfluous, if nature was sufficient' (Pliny) 'it would be very much in your interest, those of you who are fathers, if your sons could study here rather (than in another town)' Perfectum tenses Overview The active tenses of Latin verbs are made using a different stem from the present tense. For example, 'I lead' makes the perfect tense 'I led'; 'I do' makes 'I did'; 'I am' makes 'I was' and so on. The perfect tenses can be found in dictionaries; see Latin conjugation for some examples. In all verbs, perfect tenses have the same personal endings. For example, the six persons (I, you sg., he/she, we, you pl., they) of the perfect tense active in all verbs are shown by the endings . The passive and deponent tenses are made using the perfect participle of the verb, which is the fourth principal part given in dictionaries (for example '(having been) led' or 'having spoken') combined with various tenses of the verb . For these tenses see below. There are three indicative tenses, as well as two subjunctives and a perfect infinitive, as follows: Perfect indicative The perfect indicative active tense is the third principal part given in Latin dictionaries. 
In most verbs it uses a different stem from the present tense; for example, the perfect of 'I fall' is 'I fell, I have fallen'. The endings are different from those of other tenses, namely (or ). The perfect passive is usually made with the perfect participle combined with , e.g. 'I was sent, I have been sent', 'I was led, I have been led'. Some perfect tenses have an irregular stem, for example 'I am', 'I go', 'I bring, I bear', 'I raise, I remove'. The Latin perfect has a dual meaning. It can describe a present situation (e.g. 'he has died') or a past event (e.g. 'he died (last year)'). Past event The perfect most frequently narrates an event in the past. The usual translation is the simple English past tense with '-ed' or the equivalent: (Caesar) 'I came, I saw, I conquered' (Servius to Cicero) 'there I met Marcus Marcellus, and I spent that day there' (Caesar) 'all at the same time, they leapt down out of the ship' Present perfect meaning The perfect active can also be used like the English present perfect ('I have done'): (Cicero) 'I have left a horse and a mule for you at Brundisium' (Cicero) 'the epidemic has now gone away' (Cicero) 'I've written about these matters to the Senate' (Plautus) 'I have completed the task which I was given; now I'm hurrying home' (Plautus) 'this is the way things are ... I have lost hope' It can also be negative: (Cicero) 'I haven't yet quite made my mind up' (Cicero) 'he was due to pay the money on the 1st January, but he still hasn't paid it' Experiential perfect As with the English perfect, the Latin perfect can sometimes be used to relate experiences which have happened several times in the past: (Cicero) 'I have often seen public meetings shout out loud when the words fell aptly (i.e. with a striking rhythm)' (Cicero) 'as I've often told you, I am very fond of Appius' It can also be used with to describe what has always been the case: (Cicero) 'you have always loved me' (Cicero) 'he lives with me, and has always done so' Gnomic perfect Similar to this is the 'gnomic perfect', which states a general truth based on past experience: (Horace) 'a heap of bronze and gold has never taken away fevers from the body' (i.e. doesn't takes away) (Juvenal) 'no one has ever become totally shameless suddenly' Iterative action in a temporal or relative clause In sentences which mean 'whenever X occurs, Y occurs', referring to general time, the perfect tense is used for event X if it precedes event Y. In English the present tense is often used: (Cicero) 'while I am reading, I agree, but as soon as I have put the book down all that agreement slips away' (Cicero) 'whenever I come here, this very "doing nothing" delights me' In a past-time temporal clause The perfect tense is usually used in temporal clauses after 'after', 'when', 'as soon as', 'as soon as'. Here English often uses the pluperfect tense: (Sallust) 'after he (had) said this, he ordered the signal to be sounded' It is also used in a past-time relative clause referring to an anterior action where similarly English might use a pluperfect: (Cicero) 'he lost the army which he had received' Length of time The perfect, not the imperfect, is used when a situation is said to have lasted in the past for a certain length of time, but is now over. (The imperfect, however, with a length of time, is used for a situation which was still going on at the time referred to; see the examples above.) 
(Cicero) 'he lived for ninety years' (Seneca) 'Cassius drank water throughout his whole life' (Livy) 'but the peace with Alba did not last long' (Cicero) 'all the Consuls before you obeyed the Senate' However, the phrase with the perfect tense means 'long ago': (Cicero) 'I heard this long ago, judges; I am sure there is none of you who hasn't often heard it' (Plautus) 'the crime, I say, was committed long ago; it is old and ancient' Difference between eram and fuī In the verb sum 'I am', the imperfect tense eram and the perfect fuī both mean 'I was', but in Latin there is usually a difference. As with other verbs, the perfect is usually used when the length of time is mentioned: (Livy) 'for a long time there was silence' (Cicero) 'for many years he was blind' But if the situation was still continuing at the time referred to, the imperfect is used: (Livy) 'the cavalry battle had been in doubt for a long time already (and was still in doubt)' The perfect is also used when the sentence describes an event rather than a state: (Livy) 'that year there were huge floods and the Tiber inundated the flat areas of the city' (Cicero) 'you were therefore there at Laeca's house that night, Catiline!' (i.e. you attended the meeting) Another use of the perfect is to describe a former state, emphasising that it is no longer in existence: (Petronius) 'I was once just like you are' (Livy) 'there used to be a statue of Attus to the left of the senate house' (Virgil) 'we have ceased to be Trojans; Troy is no more' However, if a time adverb such as ōlim 'once upon a time' is added, there is no need for the perfect tense and the imperfect is more usual: (Horace) 'once I was a fig-wood log, a useless piece of timber' (Horace) 'I am not the kind of man I was under the rule of good Cinara' The perfect is also used in sentences such as the following, which describe a permanent state, as opposed to the imperfect, which describes a temporary one: (Terence) 'my mother was a Samian; she was living in Rhodes (at that time)' (Caesar) 'among the Helvetians by far the noblest and the most wealthy was Orgetorix' According to Pinkster, the use of erat in these two examples would sound wrong. 'In both cases the reader would want to know "What happened next?"' For geographical description, on the other hand, erat is used, describing the landscape as it was at the time of the narrative: (Caesar) 'on that river there was a bridge' 'to the north there was a hill.' The use of fuit here would imply that there used to be a bridge, but that it has now gone. The perfect must also be used with adverbs such as semel 'once', bis 'twice', ter 'three times', which imply that the situation is now over: (Cicero) 'I have been in Bithynia twice' The perfect is also used for something which has always been (or never been) the case: (Livy) 'we have never been enemies, always allies' The adverb saepe 'often', when referring to a past period of time, can have either tense: (Nepos) 'on several occasions he was in charge of armies' (Ovid) 'often I was your judge, often your teacher' There are also some types of sentences where either tense may be used indifferently, for example when describing someone's name or character: (Livy) 'his name was Manus' / 'his name was Dinomenes'; (Nepos) 'he was a hard-working general' / 'he was an excellent general' The equivalents of these two tenses, Spanish era and fui, both meaning 'I was', still exist in Spanish and Portuguese today. (See Spanish conjugation, Portuguese verb conjugation.)
Meminī, ōdī, nōvī Certain verbs, of which the most common are meminī 'I remember', ōdī 'I hate', and nōvī 'I know', are used in the perfect tense but have the meaning of a present tense: (Cicero) 'I remember being present' (Catullus) 'even if you have forgotten, yet the gods remember' (Catullus) 'I hate and I love' The future perfect and pluperfect of these verbs serve as the equivalent of a future or imperfect tense: meminerō 'I will remember', memineram 'I remembered'. Meminī has an imperative mementō 'remember!' There is also a subjunctive which can be used in a hortatory sense: (Petronius) 'let us remember the living (not the dead)!' The verb nōvī usually means 'I know': (Cicero) 'we don't know you, we don't know who you are, we have never seen you before' But sometimes the perfect nōvī has a past meaning, 'I became acquainted with' or 'I got to know': (Plautus) 'I am (the one you mentioned); but where on earth did you make my acquaintance or see me or ever converse with me?' The perfect of cōnsuēscō, cōnsuēvī 'I have grown accustomed', is also often used with a present meaning: (Caesar) 'this day generally makes the highest tides' Future perfect indicative The future perfect indicative ends in -erō, -eris, -erit, -erimus, -eritis, -erint. The forms with a short -i- were found in early Latin, but by the time of Catullus and Cicero, it seems that the future perfect had become confused in pronunciation with the perfect subjunctive, and the forms with long -i- were usual. In poetry either form could be used, according to metrical convenience. Independent use The future perfect is usually used in a sentence with sī 'if' or cum 'when' referring to future time, but it can sometimes be used on its own, as in the following sentences: (Cicero) 'Pomponia, you invite the women, and (meanwhile) I will have summoned the men' Sometimes both halves of a sentence (main clause and subordinate) can have the future perfect: (Cicero) 'whoever will have crushed (i.e. crushes) Antony, will have finished the war' A famous passage with a future perfect is the call of the eagle-bearer to his men when their boat reached the shore of Britain in 55 BC: (Caesar) 'Jump down, soldiers,' he said, 'unless you want to betray the eagle to the enemy. I will certainly have done my own duty for the republic and the commander!' There is also an idiom using the future perfect of videō, where the future perfect is almost equivalent to a command: (Livy) 'you must see to it what is due to that man' After sī and cum More frequently the future perfect tense is found after sī 'if' or cum 'when' in clauses referring to a future time. In such sentences English uses the present tense: (Livy) 'you will die, if you utter a sound!' (lit. 'if you will have uttered') (Catullus) 'then, when we have made many thousands (of kisses), we will muddle up the accounts' (Cicero) 'if anything happens, I'll let you know at once' (Cicero) 'as you sow (lit.
will have sown), so shall you reap' Future perfect of and The future perfect of and has a simple future meaning: (Plautus) 'I'll remember, don't worry about that' (Cicero) 'I hate the man, and I always will' Pluperfect indicative Prior event The pluperfect can be used as in English to describe an event that had happened earlier than the time of the narrative: (Caesar) 'this nation had sent ambassadors to Caesar a few months previously' (Livy) 'he returned to Elatia by the same way he had come' Often, like the imperfect tense, the pluperfect can be used to describe the situation prevailing at a certain moment: (Curtius) 'the others had already departed; Clitus was going out last, without a light' (Livy) 'nor was Philip, who had arrived by now in Macedonia, preparing war less energetically' Iterative use in temporal clauses In subordinate clauses of the type 'whenever...', 'whoever...' etc. in past time the pluperfect indicative is used if the event precedes the event of the main clause. Usually in English the simple past is used: (Cicero) 'it was only whenever he saw a rose that he thought that spring was beginning' (Cicero) 'at the end of the journey, whenever he came to some town, he would be carried in the same litter straight into his bedroom' In later writers such as Livy, the pluperfect subjunctive is used in a similar context. Potential meaning ('would have') Sometimes in a conditional clause a pluperfect indicative can have the meaning of a potential pluperfect subjunctive ('would have'), when it refers to an event which very nearly took place, but did not: (Florus) 'the war would have been completely finished, if (Caesar) had been able to crush Pompey at Brundisium' Pluperfect of , , The pluperfect of , and has the meaning of an imperfect: (Caesar) 'they remembered how they had put up with a great shortage at Alesia' (Cicero) 'I hated this man even more than I hated Clodius himself' (Cicero) 'he did not know Catiline, since the latter was at that time governor of Africa' Perfect tenses made with Occasionally the beginnings can be seen of a perfect tense formed with habeo ('I have') and the perfect participle, which became the regular way of forming the perfect in French and Italian. According to Gildersleeve and Lodge, this form of the perfect 'is not a mere circumlocution for the Perfect, but lays particular stress on the maintenance of the result'. However, in some cases it can be translated simply as a perfect tense in English: (Cicero) 'As for Eros's accounts, although I haven't seen him in person, I have more or less learnt what they say from his letter' (Cicero) 'I have now thoroughly examined, learnt, and judged Clodius's mind' In later Latin this construction became more common, for example: (Gregory of Tours, 6th century) 'you have invited the Bishop, and we have scarcely four jars of wine left!' 
A variation with 'I hold or keep' is also sometimes found, but usually with emphasis on the idea of holding: (Cicero) 'an army of the Roman people is besieging Gnaeus Pompey, is keeping him fenced in (has fenced him in) with a ditch and wall, and preventing him from fleeing' The future perfect of this idiom is made with : (Apuleius) 'but I will have said enough about the letters if I add this one thing' A pluperfect can similarly be made using one of the three past tenses of : (Caesar) 'Caesar sent ahead all the cavalry which he had gathered together from the whole province' (Livy) 'a knife, which she had hidden / was keeping hidden under her clothing, she stabbed it in her heart' (Caesar) 'Caesar had placed Lentulus Marcellinus the quaestor in charge of those defences' (de Bello Hispaniensi) 'Pompeius had placed a camp in the mountains within sight of both towns' (de Bello Alexandrino) 'and so they drew up and manned with fighters all the ships which they had earlier got ready for sailing' Infinitives formed with and are also possible, again with stress on the maintenance of the result: (Cicero) '(Verres is said to have claimed that) he had divided up that three-year period of his Sicilian praetorship in such a way that...' (Quintilian) 'you confess that you had hidden the gold / were keeping it hidden' (Terence) 'I would promise you that, as soon as he got married, I split up with Pamphilus and was keeping him away from me' (Cicero) 'the people of Caunus also owe him money, but they say that they had already deposited a sum of money' Perfect passive and deponent tenses The perfect passive and deponent tenses are usually made from the perfect participle + the present, future, or imperfect tense of , as follows: Perfect passive indicative The perfect passive and perfect deponent can be used like an English perfect tense, describing a present state resulting from an earlier event: (Catullus) 'my girlfriend's pet sparrow died / is dead / has died' (Cicero) 'since the decree of the Senate hasn't yet been published in writing, I will explain it to you from memory' More often they describe an event in the past, like the English past simple tense: (Cicero) 'he died in the year before I became censor' (Cicero) 'where was Sextus Roscius murdered? – in Rome' The participle changes in gender and number to agree with the subject; for example, it can be plural or feminine: (Caesar) 'they did what they were ordered to do' (Caesar) 'there were two daughters; of these one was killed, the other captured' The auxiliary verb with these tenses usually follows the participle, but sometimes precedes. This often happens when the auxiliary follows a focussed word, a quantity word, or a conjunction: (Caesar) 'Domitius, while fleeing from the camp into the mountains, was killed by cavalrymen' (Livy) 'from there Quinctius set out for Corinth' Sometimes the auxiliary verb or is omitted. This style is often found in the historian Livy: (Livy) 'Titus and Arruns set out; Lucius Iunius Brutus was added to them as a travelling companion' Future perfect passive The future perfect deponent and passive is usually made with . A typical use is in conditional clauses and temporal clauses referring to the future: (Cicero) 'if he sets out (lit. 
will be having set out), I will let you know' (Nepos) 'enough will have been said, if I add this one thing' Pluperfect passive The pluperfect passive and pluperfect deponent are made with : (Cicero) 'on the day before I arrived in Athens he had departed for Mytilene' The future perfect and pluperfect tenses can also have the auxiliary before the participle, sometimes separated by other words: ([Caesar]) 'Cispius quickly reached the place to which he had been sent' (Caesar) 'several men from the legions had been left behind in the camp sick' Adjectival participle Not every perfect participle combined with is a perfect tense, however. Thus in the examples below, the participle does not refer to any event but is merely descriptive or adjectival: (Caesar) 'Gaul, taken as a whole, is (i.e., can be described as) divided into three parts' (Seneca the Elder) 'he was prepared (i.e. willing) to pay a ransom' Perfect passive tenses made with fuī and fueram Alongside the regular perfect passive tenses described in the previous section, there exists a second set of passive and deponent tenses made with and . These are referred to as 'double forms' by de Melo. In early Latin, they seem to be slightly more common in deponent verbs than in passive ones, though in later Latin this difference is not found. In classical Latin, although these tenses occur, they are only rarely used. In Plautus and Terence the perfect passive or deponent with occurs 25 times compared with 1383 of the regular forms, and the pluperfect indicative with 9 times compared with the regular pluperfect 11 times. In Cicero they are rarer still: the numbers of examples of the six tenses above are 1, 6, 5, 5, 5, 2 respectively. The history of the perfect with is different from the other tenses. For a long time it was rarely used. It remained rare in the Augustan period and does not occur at all in the travelogue of the pilgrim Egeria (4th century AD). Later, however, in the 7th-century Chronicle of Fredegar, it became more common. In modern Spanish and Portuguese, it is the regular way of forming the past tense passive (e.g. Spanish 'he was killed in the war', Portuguese 'it was built in 1982'). The pluperfect indicative with and future perfect with , on the other hand, were used more often in classical Latin: in the Augustan-period writers Hyginus and Vitruvius they even outnumber the normal tenses, and in the travelogue of the pilgrim Egeria (4th century AD), they completely replaced them. In the examples below, in cases where there is contrast of tenses, the fact that the verb with refers to an earlier situation than the verb with 'is clearly a factor in the choice of tense' (Woodcock). Often the correct nuance can be obtained by adding the word 'earlier' or 'previously'. In some cases, however, there is little difference in meaning from the ordinary perfect or pluperfect tense. For the double perfect infinitive, see #Perfect infinitive with fuisse below. Perfect indicative with The perfect passive or deponent tense with in some cases refers to an earlier time than the time of another event mentioned. 
Woodcock quotes the following example: (Gellius) 'Sophocles was born before Euripides (was born)' In the following examples, the double perfect refers to a situation which existed a long time earlier, before Ovid was exiled: (Ovid) 'you were moved at that time by the constancy of a long friendship, which began for me even before you were born' (Ovid) 'and as I was once accustomed (before I was exiled), I would perhaps have sat, one of a hundred men, as a judge of your words' However, according to de Melo it is not always possible to tell from the context whether the tense with fuī refers to an anterior time or is merely a stylistic variation of an ordinary perfect passive. He contrasts the following two sentences, the first of which is made with sum and refers to a very recent time; the second is made with fuī and may refer to a time earlier than the following verb, but this is not certain (the speaker goes on to say that after sailing to Egypt he sailed round the most distant coasts): (Plautus) 'I came here on a boat; I am still feeling seasick' (Plautus) 'I (originally) sailed from here to Egypt' In the following examples, both from the same scene, the meaning of the double perfect seems to be the same as an ordinary perfect: (Plautus) 'what I swore that I would do, I have done' (Plautus) 'what you ordered, I have done' Similarly, the following two examples use different tenses, although the context is very similar and the meaning is the same: (Plautus) 'there's something which I almost forgot to say (earlier) in the house (i.e. before we left the house)' (Plautus) 'I forgot to tell you when we were inside just now' There is a difference, however, since only the form can be used in sentences like the following where the verb has a present perfect meaning: (Plautus) 'I don't know ... I've forgotten everything' In some cases, the perfect participle accompanied by fuī is merely adjectival, and does not describe any particular event. Thus in the following example, according to the 19th-century grammarian Madvig, the words do not describe an event but the state in which the temple of Janus was: (Livy) 'since Numa's reign the temple of Janus has been in a closed state only twice' The perfect indicative with fuī is not used by Cicero except in the following example, where the participles are adjectival. It refers to a previous situation which has now changed: (Cicero) 'almost all the things which have now been included in the Arts were once dispersed and scattered' Often, especially from the Augustan period onwards, this tense had no particular anterior meaning but was a mere variation of the perfect passive with sum. De Melo cites the following example, where the second verb is obviously not anterior to the first: (Vitruvius) 'the pictures, having been cut out, were packed in wooden crates and were brought into the comitium' In the Vulgate Bible (4th century A.D.), just as with Cicero, the perfect indicative with fuī is only very rarely used compared with the other double tenses. An example is the following: 'and after that day no one dared to ask him any questions any more' Perfect subjunctive with fuerim The perfect subjunctive with fuerim is more common than the perfect indicative with fuī. In the Augustan-period writers Hyginus and Vitruvius nearly a third of perfect subjunctives are double ones, and in Egeria's writing (4th century AD) it completely replaced the perfect subjunctive with sim.
(Cicero) 'a guilty man cannot be condemned unless he has first been accused' In the following examples, the perfect subjunctives with contrast with the ordinary perfect subjunctive tenses, and apparently refer to an earlier event: (Livy) 'whether this was noticed too late, or whether (before it was noticed) some trick was suspected, is unknown' (Livy) 'whether they did this of their own accord or whether it was because they already had instructions to do so is not certain' Future perfect with The following example, quoted by Woodcock, contrasts the two passive future perfect tenses. There is a clear difference in time between the two verbs: (Cicero) 'whatever has (first) proved acceptable to you will be acceptable to him' Pluperfect indicative with In the following examples, a distinction is made between an earlier situation, expressed by the pluperfect with , and a later situation, expressed by the ordinary pluperfect with : (Caesar) 'the bridge, which earlier on had been broken by a storm, had now almost been rebuilt' (Caesar) 'for by this time the corn which had earlier been sown inside the defence walls had been used up' (Livy) 'indeed a rebellion in Gaul and Liguria, which had arisen earlier on at the beginning of that year, had soon been suppressed without much effort' (Petronius) 'nor had that cook yet slipped my mind, who had (earlier on) forgotten to gut the piglet' In the following examples, the pluperfect with is used similarly to refer to an earlier situation which later changed, while the later situation is expressed by the perfect tense: (Cicero) 'the weapons which had (previously) been fixed on the walls were found on the ground' (Livy) 'in the second half of the fight, Paulus, although (earlier on) right at the beginning of the battle he had been seriously wounded by a sling-shot, nonetheless several times went on to attack Hannibal, with his soldiers in close formation' (Cicero) 'as for what had been said (earlier) on the Ides that a debate would be held on the Campanian farmland, in the end it didn't take place' The following example looks back to a conversation which had taken place at an earlier time and in another place: (Cicero) 'after this I came to those things which (earlier on) you and I had spoken about together in my Tusculan villa concerning your sister' The following refers to a time in the distant past: (Nepos) 'he was content to live in the same house that Eurysthenes, the forefather of his ancestors, had once used' Usually with this tense it is unnecessary to add an adverb meaning 'earlier', since it is implied in the tense, but in the following it is made explicit with the words : ([Caesar]) 'and with the same facial expression and manner of speech which he had been accustomed to use previously whenever he went to bed, he secretly took a sword into his bedroom and stabbed himself with it' In the following the meaning 'previously' or 'earlier on' is not explicit, but would fit the context: (Plautus) 'by chance (earlier on) I had forgotten to extinguish the lamp' (Propertius) 'I saw you (in a dream) confessing all the things which you had previously lied to me about' Pluperfect subjunctive with Like the pluperfect indicative with , the pluperfect subjunctive with sometimes refers to an earlier time, which is now over. 
In the following example, Cicero contrasts the time when Marcus Claudius Marcellus captured Syracuse (3rd century BC) with the period when Gaius Verres was governor of Sicily (73–70 BC): (Cicero) 'the harbour of the Syracusans, which at that time had been closed both to our fleets and to the Carthaginians, in the period of Verres' praetorship was laid wide open to a pirate vessel of Cilicians and to robbers' However, in the following examples, there appears to be little or no difference in meaning between the pluperfect with and that with , and the difference is perhaps only one of style: (Cicero) 'I would have waited for your letter at Brundisium, if it had been permitted by the sailors' (Cicero) 'if it had been permitted to me by your legions to come to the senate, I would have done so' Because the feminine participle + fuisset makes a suitable ending for a hexameter, it is possible that in the following examples the double pluperfect is merely used for metrical convenience, rather than indicating an anterior time. In the first example, which is spoken by the ghost of Hector to Aeneas, encouraging him to flee from Troy, the tense with fuisset refers to an earlier time when Hector was still alive: (Virgil) 'if Troy could be defended by anyone's right hand, it would have been defended (while I was still alive) even by this one' The following unfulfillable wish also uses the double pluperfect subjunctive passive: (Virgil) 'I wish she had never been seized by such love of warfare or attempted to provoke the Trojans!' Another example comes from Ovid, referring to the time before the Trojan War started: (Ovid) 'if Tyndareus' daughter had not previously been raped, there would be peace between Europe and Asia' In the following example Ovid describes the fate of the Athenian princess Aglauros, who was turned to stone out of envy for her sister: (Ovid) 'she did not try to speak, nor, even if she had tried, would she have had any way of speaking' Subjunctive mood The subjunctive mood, when used in independent clauses, can be optative (used in wishes), jussive ('should', 'is to'), or potential ('would', 'could', 'may', 'might'). It is also frequently used in subordinate clauses and in indirect speech. The negative of the potential subjunctive is nōn, and the negative of the optative and jussive subjunctive is nē. The subjunctive has four main tenses, which are as shown below for a 3rd conjugation verb and for the verb sum. The translations given here are only very approximate, since in fact each tense has a wide variety of meanings. (For the subjunctive of other verbs, see Latin conjugation.) Of these subjunctive tenses, the present and perfect are primary tenses, and are mainly used where the main verb of the sentence has a present or future meaning; while the imperfect and pluperfect are secondary or historic tenses, and are mainly used when the main verb has a past or an unreal potential meaning. However, there are sometimes exceptions: see the sequence of tenses rule below for further details. In addition to the four main tenses above, there are also various periphrastic tenses using the future participle, such as factūrus sim, which are used for example in indirect questions. In main clauses Wishes The present subjunctive can express a wish for the future (the word utinam is usually added): (Cicero) 'I hope I may see that day!' The negative is nē: (Cicero) 'may I not live if I know!'
Less commonly, the perfect subjunctive expresses a wish for the past, leaving open the possibility that it may have happened:

(Cicero) 'may I have prophesied correctly!'

(Virgil) 'may it turn out that Trojan ill-fortune has followed us this far, no further!'

The perfect subjunctive can also be used in a wish for the future, but this use is described as 'archaic':

(Cicero) 'but may the gods avert this omen!'

The imperfect and pluperfect subjunctive are used in wishes to represent an imagined or wished for situation or event which is no longer capable of fulfilment:

(Cicero) 'if only Servius Sulpicius were alive!'

(Cicero) 'if only he had led out all his forces with him!'

Sometimes vellem or velim 'I would that' is used instead of utinam. In the following sentence, the imperfect subjunctive is used to wish for something that cannot now come true, while the present subjunctive leaves open the possibility that it may be true:

(Cicero) 'I wish it had been true about Menedemus; I hope it may be true about the queen'

Jussive subjunctive

When the present subjunctive has a jussive or hortatory meaning, it can be a suggestion or command in the 1st or 3rd person:

(Catullus) 'let's live, my Lesbia, and let's love'

(Gellius) 'let him sit here!'

(Cicero) 'let them go out, let them depart; let them not allow poor Catiline to waste away with desire for them!'

In philosophy it can set the scene for a discussion:

(Cicero) 'let us suppose that a good man is selling a house'

The jussive subjunctive is only used in the 2nd person when the person is indefinite:

(Virgil) 'may you arise, some avenger, from our bones!'

Another use of the present or perfect subjunctive is concessive:

(Cicero) 'he may be a thief, he may be a temple robber, he may be the leader of all outrages and vices; nonetheless he is a good general!'

(Cicero) 'he may have been so to others; when did he begin to be so to you?'

The present and imperfect subjunctives are also used in deliberative questions (which are questions which expect an imperative answer):

(Cicero) 'what action should I take about Pompey?'

(Virgil) 'what was I to do?'

With the negative particle nē the perfect subjunctive can express a negative command:

'you should not fear death'

As with wishes and conditional sentences, the imperfect and pluperfect subjunctives can represent a situation which, because it is in the past, cannot now be changed. They describe something which should have been done in the past, but which it is now too late for:

(Virgil) 'you should have remained true to your words, o Alban!'

(Cicero) 'he should have died, you will say'

This usage is quite common in Plautus but rare in later Latin. The normal prose practice is to use either a past tense of dēbeō 'I have a duty to' or oportet 'it is proper' with the infinitive, or else a gerundive with a past tense of sum.

The jussive pluperfect is also fairly uncommon. The following examples are from Cicero, again using the negative nē:

(Cicero) 'you shouldn't have asked'

(Cicero) 'what was it your duty to do? you ought to have returned the money, you ought not to have bought the corn'

Possibility

After the word forsitan 'perhaps' and occasionally after fortasse 'perhaps', the present subjunctive can mean 'may' or 'could', expressing a possibility.
The first example below uses the present subjunctive, and the second the perfect: (Cicero) 'this may perhaps seem harsh' (Cicero) 'perhaps I have acted rashly' In the following sentence, using the pluperfect subjunctive, according to one view, Queen Dido contemplates what 'might have been': (Virgil) 'I could have carried torches into the camp and filled the gangways with flames' Others see the pluperfect subjunctive in this sentence as a wish ('if only I had carried!'); others again as jussive ('I ought to have carried!'). Conditional sentences A more common use of the potential subjunctive is in conditional sentences. Conditional sentences with the subjunctive are of two main kinds: the first is the 'ideal' conditional, which supposes an unlikely but theoretically possible future scenario ('if this were to happen'). This kind usually uses the present or perfect subjunctive. The second kind is the 'unreal' (or 'counterfactual'), which imagines a present or past situation which is definitely known to be untrue ('if this were true (which it isn't)'). This kind usually uses the imperfect or pluperfect subjunctive. Ideal type Examples of the ideal conditional are the following, using the present subjunctive: (Cicero) 'if I were to deny that this road is a rough one, I would be lying' (Cicero) 'if your country were to say this to you, wouldn't it be right for her to be granted her request?' The perfect tense may also (but rarely) sometimes be used in an ideal condition: (Juvenal) 'these days (if he were to come back to life) no one would give Cicero even two hundred coins, unless a huge ring glittered (on his finger)' (Plautus) 'if I were to hang myself now, I would simply end up having given pleasure to my enemies' Sometimes the perfect subjunctive refers to present or future time, and means 'could'. For example, in the following idiom the perfect is usual: (Cicero) 'I couldn't easily say (= I don't think) that I have ever seen anything more beautiful' In the following sentence both 'could' and 'could have' are possible: (Livy) 'I could have written that the number of captives was as many as sixty, if I were to follow the Greek authority' In other examples, however, the perfect subjunctive definitely refers to the past and means 'could have done' or 'would have done': (Livy) 'either of these passes would have brought (Hannibal) down to the Libuan Gauls' Ideal or future conditional in a past context Occasionally, an ideal conditional may be shifted to a past context, in which case the tense is the imperfect subjunctive. In indirect sentences of this kind there is in fact no difference between the vivid future and the ideal future conditional: (Livy) 'fearing that if he were to go without the permission of the consuls, he might be caught and dragged back by the Roman guards, he approached the senate' In the protasis of a conditional clause in indirect speech the imperfect subjunctive can similarly represent a future indicative: (Livy) 'with the intention of remaining inactive no doubt, unless (at some future time) the Etruscans were to attack of their own accord' It can also have a prospective or future meaning in a relative clause: (Cicero) 'before dawn he announced those vows which he was never to fulfil' Unreal (counterfactual) type The second type of conditional clause with the subjunctive refers to a situation or an event which is known to be contrary to fact (counterfactual). This kind is known as 'unreal'. 
If the condition refers to the present time, the imperfect subjunctive is used in both halves:

(Cicero) 'I would write more about this to you, if you were in Rome'

(Cicero) 'if the whole of Sicily were speaking with one voice, she would be saying (would say) this'

(Plautus) 'if he were inside, I would call him out'

The pluperfect is used for referring to unreal events in past time. This usage is found as early as Plautus:

(Plautus) 'if you had called him, he would have replied'

It is also possible for the protasis to be imperfect subjunctive, and the apodosis pluperfect subjunctive, or the other way round, as in the following examples:

(Seneca) 'how would you react, if you had lost a friend?'

(Livy) 'therefore if I had not given birth, Rome would not now be being attacked; if I did not have a son, I would have died as a free woman in a free country'

Sometimes a potential imperfect subjunctive refers to a situation in the past rather than the present:

(Cicero) 'if this man had been in his right mind (at that time), would he have dared to lead out an army?'

Sometimes in poetry even a present subjunctive can be used to refer to an unreal past event, where in prose a pluperfect subjunctive would be used in both halves of the sentence:

(Virgil) 'if his learned companion had not warned him, Aeneas would have rushed in and would have beaten aside the ghosts with his sword in vain'

In early Latin, a present subjunctive can also be used to make an unreal conditional referring to the present:

(Plautus) 'I wouldn't be asking you, if I knew'

However, there was a gradual shift in usage, and in the classical period, and even sometimes in Plautus, the imperfect subjunctive is used in such clauses.

Conditional clauses of comparison

In a conditional clause of comparison ('as if...') the use of tenses is different from the normal unreal conditional clause. Here the main clause is in the indicative or imperative, and the 'if'-clause follows the sequence of tenses rule, with present or perfect subjunctive for an imaginary present situation, and imperfect or pluperfect for an imaginary past one:

(Quintilian) 'don't be nervous as if you were giving a penny to an elephant'

(Livy) 'fear overcame the senators as great as if the enemy were already at the gates'

For other examples of this see the article Latin conditional clauses.

Generalising 2nd person

When a conditional sentence expresses a generalisation, the present subjunctive is used for any 2nd person singular verb. This generalising subjunctive is found either in the subordinate clause or in the main clause:

(Cato) 'if ever you use iron, it gets worn away'

(Seneca) 'old age is full of pleasure, if you know how to enjoy it'

(Plautus) 'whenever there's a shortage of something, you want it'

The 2nd person imperfect subjunctive when potential is nearly always indefinite and generalising, i.e. an imaginary 'you':

(Livy) 'you would have believed them beaten'

In subordinate clauses

Subordinate clauses containing the subjunctive generally follow the sequence of tenses rule (see below). That is, if the context is present or future, the present or perfect subjunctive is used; but if the context is past, the imperfect or pluperfect.

Indirect command

The present or imperfect subjunctive is very commonly found in indirect commands.
In the following examples the subjunctive reflects an imperative in direct speech:

(Livy) 'she sends a messenger (to say) that they should come'

(Nepos) 'he gave him an order that he should go round all the doors of the building'

Purpose clauses

The present or imperfect subjunctive is also used in purpose clauses with ut, such as the following:

(Rhētorica ad Herennium) 'you should eat so that you can live, not live so that you can eat'

A purpose clause can also follow quī 'who' in sentences such as the following. The context is historic, so the tense is imperfect subjunctive:

(Nepos) 'the Spartans sent ambassadors to Athens in order to accuse him in his absence'

After verbs of fearing

A present or imperfect subjunctive can also have a prospective or future meaning after a verb of fearing or expecting. These are usually introduced by nē:

(Nepos) 'fearing that he might be handed over to the Romans, he came to Crete'

The two negatives nē and nōn can be combined:

(Cicero) 'I am afraid that I may not be granted my wish'

Consecutive (result) clauses

In consecutive (result) clauses, the sequence of tenses rule is not so strictly adhered to. For example, in the following, the perfect subjunctive is used, despite the fact that the main verb is historic:

(Cicero) 'he showed himself to be so tough and energetic that no one ever saw him sitting on a horse'

However, after verbs meaning 'it happened that...', the imperfect subjunctive is always used even of a simple perfective action, which, if the grammatical construction did not require a subjunctive, would be expressed by a perfect indicative:

(Nepos) 'it happened that in a single night all the statues of Hermes were thrown down except one'

Indirect questions

One of the most common uses of the subjunctive is to indicate reported speech. When a question is made indirect, the verb is always changed into the subjunctive mood. The tense of the subjunctive depends on the tense of the main verb, according to the sequence of tenses rule (see below). When the main verb is present tense, therefore, the present or perfect subjunctive is usually used in the subordinate clause.

The present subjunctive can represent what would be a present indicative if the question was direct:

(Catullus) 'do you perhaps ask why I do that?'

The perfect subjunctive usually represents what would be a perfect indicative in an independent clause. However, since there is no way of expressing an imperfect tense in primary sequence except using the perfect subjunctive, it could also sometimes represent an imperfect indicative. The perfect subjunctive is generally found when the main verb is one of the primary (i.e. non-past) tenses. One common use is in indirect questions when the context is primary:

(Nepos) 'from this it will be easily conjectured how dear he was to his people'

(Cicero) 'we do not yet know what the ambassadors have done' (or 'were doing', or 'did')

(Seneca the Elder) 'I don't know whether I was standing or sitting'

In indirect questions in a historic context, an imperfect subjunctive usually represents the transformation of a present indicative:

(Cicero) 'Epaminondas asked whether his shield was safe'

In indirect speech, the pluperfect subjunctive is often a transformation of a perfect indicative in direct speech.
In the following example, the original direct question would have had the perfect tense (fuistī):

(Cicero) 'I asked Catiline whether he had been at a night-time meeting at Marcus Laeca's house or not'

Unlike in clauses following quīn (see below), in indirect questions referring to a future time the periphrastic future subjunctive is regularly used:

(Cicero) 'it is uncertain what he is going to do'

(Cicero) 'people were waiting to see what exactly he was going to do'

Phrases of the kind 'I do not doubt' are usually followed by quīn and the subjunctive, much like an indirect question. (The literal meaning of quīn is 'how not'.) The sequence of tenses rule is followed, so that in the following example, a present tense main verb is followed by a perfect (not imperfect) subjunctive:

(Cicero) 'I have no doubt that you were very busy'

After quīn, if the context is clearly future, a present or imperfect subjunctive can sometimes represent a future tense or potential subjunctive:

(Caesar) 'they said that if these things were reported to Ariovistus, they didn't doubt that he would put them all to death'

(Cicero) 'and I didn't doubt that, if the republican government were restored, it would bring me back with it'

However, in the following example after quīn, the imperfect subjunctive represents the transformation not of a future, but of a present indicative:

(Curtius) 'nor did the Persians doubt that, now that they had abandoned Issus, the Macedonians were fleeing'

To avoid ambiguity, the periphrastic future can also be used when the meaning is future, although this is not as common as in indirect questions:

(Cicero) 'I have no doubt you are going to remain (in the villa) at Formiae'

Subordinate clauses in indirect speech

Verbs in subordinate clauses in indirect speech are also almost always in the subjunctive mood. This also applies to subordinate clauses when the indirect speech is only implied rather than explicit. Both of the following examples have the perfect subjunctive:

(Cicero) 'Caesar is pardoning me by means of a letter for the fact that I didn't come'

(Plautus) 'my mother is angry because I didn't return'

Similarly in the protasis ('if' clause) of a conditional sentence in indirect speech, a present subjunctive can represent an original future indicative:

(Caesar) '(Ariovistus told Caesar that) if he did not retreat and withdraw his army, he would treat him as an enemy'

In other examples in reported speech, the subjunctive in the 'if' clause represents an original present subjunctive with potential meaning:

(Cicero) 'I believe that Pleasure, if she were to speak for herself, would give way to Dignity'

In some sentences, the pluperfect subjunctive is a reflection of an original imperfect indicative, as in the following example:

(Livy) '[he said] that they begged just one favour, that they should not be assigned lower ranks than those which they had held when they were on military service'

In other sentences, the pluperfect subjunctive is a transformation of a future perfect indicative, put into historic sequence.
The original words of the following sentence would presumably have been 'if you do (will have done) otherwise, you will be doing Caesar a disservice':

(Cicero) 'he said that if the man were to do otherwise, he would be doing Caesar a disservice'

(Livy) 'at this critical moment in the battle, the propraetor vowed games to Jupiter, if he routed and slaughtered the enemies'

Generic clauses

A relative clause which is indefinite ('such as') uses the subjunctive mood in Latin. This is known as a generic relative clause:

(Cicero) 'but there are also some who are saying, Roman citizens, that it was by me that Catiline was sent into exile'

(Cicero) 'there was no one in Lilybaeum who did not see it'

The subjunctive can also follow quidem in a restrictive clause:

(Cicero) 'the speeches of Cato, at least such as I have discovered'

Causal clauses

The subjunctive mood is also used in clauses which have a causal meaning ('in view of the fact that'), such as after causal cum. Any tense can be used, including the present:

(Cicero) 'since these things are so'

In the following sentence the perfect subjunctive is used after quī with a causal sense ('inasmuch as' or 'in view of the fact that'):

(Cicero) 'I must be blind that I didn't see this before!'

Circumstantial cum

Another very common use of the imperfect and pluperfect subjunctive is in the circumstantial cum-clause. Here the imperfect subjunctive has the same meaning as an imperfect indicative would have if cum were omitted:

(Cicero) 'while I was sitting at home in a sad mood,' he said, 'Venerius came running up'

(Cicero) 'when he heard this, he is said to have hurried to Rome'

Just as the verb sum 'I am' has a future infinitive fore, short for futūrum esse, so it also has a subjunctive foret, short for futūrum esset. This is not used in Caesar, but is common in Livy, Sallust, and Nepos. It is used especially in conditional sentences, either in the protasis ('if' clause) or the apodosis (main clause), and it generally has either a potential or future-in-the-past meaning.
Foret in an apodosis

In some authors, such as Livy and Sallust, a potential meaning can be given to the pluperfect subjunctive passive by substituting foret for esset:

(Livy) 'and the army would have been annihilated if the woods hadn't provided cover for those who were fleeing'

(Livy) 'and the city would have been besieged, if the consul Horatius had not been recalled'

(Sallust) 'but if Catiline had not been late in giving his allies a sign in front of the senate, on that day the worst crime in the history of Rome would have been committed'

In other authors, however, the same meaning is expressed using a perfect participle + fuisset:

([Caesar]) 'but if night hadn't interrupted the battle, Caesar would have gained control of the whole enemy fleet'

Foret in a protasis

Another use of foret is in indirect speech after sī 'if' as the equivalent of the future indicative in the original direct speech:

(Livy) 'Tullus ordered him to keep the young men armed; he would need their help if (at some future time) there was a war with the people of Veii'

(Cicero) '(he was confident) that even if (the pain) were to be very great, it would be brief'

Combined with a perfect participle, it is the equivalent of a future perfect passive in the original speech:

(Sallust) 'he was seeking the consulship, hoping that if he should be elected he would easily manage Antony according to his pleasure'

(Livy) 'the senators began to be afraid that if the army were dismissed, there would be further secret meetings and conspiracies'

(Gellius) 'for fear that, if those letters were to be captured by the enemy, their plans might be known, they used to send letters made in this way'

In each of the above sentences, foret looks to the future, relative to a point in the past.

Foret with future meaning

It can also be used with a future-in-the-past meaning in sentences like the following, which are not conditional but indirect speech:

(Cicero) 'he said that he would far rather die in his own bed, whatever might happen in future'

(Sallust) 'and it is said that he did this so that (in future) they would be more trustworthy to one another'

(Livy) 'some were standing still, uncertain whether it would be safer to go forward or to retreat into the camp'

(Gellius) 'Aristotle replied that he would do what they wanted when it was a suitable time for him'

(Cicero) 'the oracle told Priam that he should forbear to raise the first son who was going to be born to him subsequently'

In the following sentence the imperfect is typical of letter-writing.
An English writer would say 'I have no doubt that he will be...':

(Cicero) '(at the time of writing this) I personally had no doubt that he would be in Apulia any moment now'

Foret with non-future meaning

In other sentences, however, foret has no future meaning, but simply has the meaning of esset, as in the following example, where it appears to be used simply for metrical convenience as the equivalent of esset in the second half:

(Ovid) 'if it were made of ash-wood, it would be light in colour; if cornel-wood, there would be a knot in it'

Similarly in the following sentences, it has a past, not future, meaning:

(Terence) 'if I had ever seemed either of these things to you, I wouldn't have been made a mockery of by your deeds in this way'

(Livy) 'blaming one another because, when it had been possible, not all the enemy fleet had been sunk or captured'

Archaic forms of the subjunctive

Archaic present subjunctive

An archaic form of the subjunctive of sum is siem for sim, which is very common in Plautus and Terence, but fell out of use later:

(Terence) 'do you know where she is?'

Less common is fuam, with the same meaning. This occurs occasionally in Plautus and also once in Lucretius (4.635) and once in Virgil's Aeneid, where the archaic form is presumably appropriate for the speech of the god Jupiter:

(Virgil) 'whether it be Trojan or Rutulian, I shall make no distinction!'

Another old subjunctive is duim, from the verb dō 'I give'. It occurs mostly in Plautus and Terence, but sometimes also in Cicero, in phrases like the following:

(Plautus) 'may the gods destroy you!'

Sigmatic aorist subjunctive

In old Latin, a form of the subjunctive with -s-, known as the sigmatic aorist subjunctive, is preserved (faxim, ausim, etc.). One use of this is for wishes for the future:

(Plautus) 'may the gods preserve you always!'

(Cicero) 'may the gods ensure that it be allowed'

In Plautus this subjunctive is also used in prohibitions, when it exists:

(Plautus) 'don't worry about me!'

In other phrases it has a potential meaning and can be translated with 'would':

(Plautus) 'I would willingly do him harm!'

(Livy) 'I do not know exactly, nor, if I knew, would I dare to say'

The imperative mood

The imperative mood has two tenses, present and future.

Present imperative

Positive commands

The present imperative mood is the normal tense used for giving direct orders which the speaker wishes to be carried out at once. The active form can be made plural by adding -te:

(Catullus) 'give me a thousand kisses, then a hundred!'

(Livy) 'give me your right hands and your oath!'

Deponent verbs such as proficīscor 'I set out' or sequor 'I follow' have an imperative ending in -re or -minī (plural):

(Cicero) 'the gates are open: depart!'

(Terence) 'follow me this way inside, both of you'

Negative commands

An imperative is usually made negative by using nōlī(te) (literally, 'be unwilling!') plus the infinitive:

(Seneca the Elder) 'don't be surprised'

However, in poetry an imperative can sometimes be made negative with the particle nē:

(Virgil) 'do not terrify me, who am already scared, obscene birds!'

A negative order can also use the perfect subjunctive:

(Cicero) 'do not be afraid on my account'

In later Latin, nē plus the present subjunctive became more common, for example in the Vulgate Bible.
In the following example the first three verbs use the present subjunctive, and the fourth the perfect subjunctive:

(Mark, 10.19) 'do not commit adultery, do not kill, do not steal, do not speak false testimony'

Future imperative

Latin also has a future imperative or 2nd imperative, ending in -tō(te), used to request someone to do something at a future time, or if something else happens first. This imperative is very common in early writers such as Plautus and Cato, but it is also found in later writers such as Cicero and Martial:

(Plautus) 'ask tomorrow; it will be given to you'

(Terence) 'when we have finished washing, get washed if you wish'

(Cato) 'if you are (going to be) eating it (cabbage) raw, dip it in vinegar'

(Cicero) 'if anything happens, write to me'

(Martial) 'Sextillus, laugh a lot at anyone who calls you a 'faggot' and show them the middle finger'

Some verbs have only the second imperative, for example scītō 'know', mementō 'remember'. In this case the imperative often has a present rather than future meaning:

(Cicero) 'know that I have been blessed with a little son, and that Terentia is safe'

3rd person formal imperative

Related to the colloquial future imperative is the formal imperative (usually used in the 3rd person) of legal language, as in this invented law from Cicero's dē Lēgibus:

(Cicero) 'there shall be two men with royal power; and from consulting they are to be called 'consuls'; they are to obey nobody; for them the welfare of the people shall be the supreme law'

According to J.G.F. Powell, appellāminō is not a genuine archaic form; in early Latin the ending -minō is used only in deponent verbs and is 2nd or 3rd person singular.

Periphrastic tenses

A series of periphrastic tenses can be formed by combining a future participle (e.g. ductūrus 'going to lead') or a gerundive (e.g. dūcendus 'needing to be led') with any tense of the verb sum 'I am', as follows:

Future participle: ductūrus sum 'I am going to lead', ductūrus eram 'I was going to lead', ductūrus erō 'I will be going to lead', ductūrus fuī 'I was going to lead', ductūrus fueram 'I had been going to lead', with the corresponding subjunctives ductūrus sim, essem, fuerim, fuissem.
Gerundive: dūcendus sum 'I need to be led', dūcendus eram 'I needed to be led', dūcendus erō 'I will need to be led', and so on through the same tenses of sum.

The passive tenses formed with the gerundive are known as the 'periphrastic conjugation of the passive'. Although the two series are similar in appearance, they are not parallel in meaning or function. Woodcock writes of the passive form: 'But for the introduction of the idea of necessity, it would form a periphrastic future passive tense parallel to the periphrastic future active.' For this reason, examples of the gerundival periphrastic tenses are gathered in a separate section below.

Periphrastic indicative

Periphrastic future

The present version of the future periphrastic describes a person's intention at the present time. It can be translated with 'going to', 'planning to', 'intending to', or by using the future continuous 'I'll be doing':

(Cicero) 'Paulla Valeria is going to marry Decimus Brutus'

(Cicero) 'I'm not going to leave until the money is paid'

Future periphrastic future

Despite its name, the future periphrastic tense is really a present tense, describing a person's present intentions. For this reason, it can have a future form ductūrus erō, used for example in future conditional or future temporal clauses:

(Cicero) 'but if you come to any arrangement with Silius, even if it is on the very day I'll be arriving at Sicca's house, please let me know'

(Martial) 'daring Leander was shouting in the swelling waves: "drown me, waves, when I'll be coming back"'

Imperfect periphrastic future

A past version of the periphrastic future can be made with the imperfect tense of sum, describing what someone's intentions were at a moment in the past:

(Servius to Cicero) 'on the next day he was intending to sail to Italy'

(Ovid) 'when she saw her face and horns in the water, "o poor me!" she was going to say, but no words came out'
In a conditional sentence this tense can mean 'would have done':

(Ovid) 'I was going to remove the faults (i.e. I would have removed them), if I had been free to do it'

Perfect periphrastic future

Although less common than the periphrastic future with eram, the perfect tense version of the periphrastic future is also found:

(Seneca) 'on the day Cato was defeated in the election, he played; on the night he was going to die, he read'

This tense can also be potential, expressing the meaning 'would have done':

(Curtius) 'if I had not obeyed you, I would rightly have paid the penalty'

Pluperfect periphrastic future

An example of this tense is the following:

(Livy) '... whom the Senate had been intending to order should be declared dictator'

Periphrastic subjunctive

Periphrastic present subjunctive

In indirect statements and questions, the active periphrastic future can represent a future or periphrastic future tense of direct speech in primary sequence. In this case there is not necessarily any idea of planning or intention, although there may be:

(Cicero) 'I don't know when I'm going to see you'

(Cicero) 'let me know in detail what you are doing and whether at all you'll be coming to Italy this winter'

This tense can also be used in primary sequence reported speech, to represent the main clause in either an ideal conditional sentence or a simple future one (the distinction between these two disappears in indirect speech):

(Cicero) 'we ourselves have never seen such a (perfectly wise) man; but it is explained in the opinions of philosophers what such a person would be like, if one were ever to exist'

Periphrastic imperfect subjunctive

If the main verb is in past time, an imperfect version of the periphrastic future subjunctive is used:

(Cicero) 'I wasn't sure whether you were going to receive this letter'

It is also possible to form an imperfect periphrastic subjunctive with foret instead of esset (the first instance of this is in Sallust):

(Sallust) 'he said that he had come to ask him whether he was intending to make peace or war'

Periphrastic perfect subjunctive

A perfect periphrastic subjunctive can be used with a conditional meaning ('would have done') in hypothetical conditional clauses in indirect questions in primary sequence. In this case it represents a pluperfect subjunctive in the original direct speech:

(Livy) 'tell us, Appius Claudius, what you would have done, if you had been censor at that time?'

(Cicero) 'can anyone doubt that if Quintus Ligarius had been able to be in Italy, he would have been of the same opinion as his brothers were?'

In an indirect question, the perfect periphrastic subjunctive can also sometimes reflect a potential imperfect subjunctive:

(Seneca) 'imagine how much speed you would be putting on, if an enemy were threatening you from behind!'

These tenses can be compared with the similar examples with the perfect periphrastic infinitive cited below, where a conditional sentence made in imperfect subjunctives is converted to an indirect statement.
Periphrastic pluperfect subjunctive

The pluperfect version of the periphrastic subjunctive can be used in a circumstantial clause:

(Cicero) 'when Antony had been about to bring some motion about the republic, a message suddenly arrived about the 4th legion and he lost his composure'

It can also be used in conditional sentences after sī, as in the following sentence from an imaginary letter from Helen to Paris:

(Ovid) 'by flatteries such as these, if I had been going to sin, I might have been persuaded'

Once in Cicero it occurs in the apodosis of an unreal conditional, referring to the inevitability of fate:

(Cicero) 'even if he had obeyed the auspices, the same thing would have been destined to happen; for the fates cannot be changed'

It can also reflect a potential pluperfect subjunctive ('would have done') in historic sequence in an indirect question:

(Livy) 'it occurred to them how impossible Etruria would have been, if anything had gone wrong in Samnium'

Gerundive tenses

Present gerundive

The gerundive (an adjectival form ending in -ndus) can be combined with the verb sum 'I am' to make a passive periphrastic tense. This usually expresses what is needing to be done:

(Pliny) 'I don't need to be asked or encouraged' (i.e. I will do it willingly)

(Celsus) 'tumours of this kind need to be lanced'

Negative

The negative gerundive usually means 'not needing to be', as in the first example above. However, sometimes the interpretation 'ought not to be' or 'it isn't possible for it to be' is more appropriate:

(Seneca) 'you do not need to be reminded now that no one is good except the wise man'

(Ovid) 'the story of Achilles shouldn't (or can't) be told using the metre of Callimachus'

Impersonal construction

Very often the passive periphrastic is used impersonally, together with a dative of the agent:

(Cicero) 'a decision needs to be made by you today'

The impersonal form of this tense can also be made using intransitive verbs such as eō 'I go' and verbs such as persuādeō 'I persuade' and ūtor 'I use' which do not take an accusative object:

(Cicero) 'there is no need to reply to everything'

(Cicero) 'I have to go to Arpinum'

(Cicero) 'you must use your judgement'

Future gerundive

An example of a future gerundive periphrastic is the following:

(Cicero) 'since that isn't possible, we will need to ask my friend, Marcus Plaetorius'

Imperfect gerundive

An example of the imperfect passive periphrastic is the following:

(Cicero) 'he was afraid not only of those things which needed to be feared, but everything'

Perfect gerundive

As with the active perfect periphrastic, in a conditional sentence the perfect gerundive periphrastic tense can mean 'would have done':

(Livy) 'if you had delayed just one day, you would all have died'

Another meaning of the perfect passive is 'ought to have been done':

(Cicero) 'either his army should have been taken away or he should have been given the command'

In the following result clause, this tense becomes subjunctive:

(Cicero) 'what you write about Pomptinus is correct: for the fact is that, if he is going to be in Brundisium before the 1st June, it wasn't so necessary for Marcus Anneius and Lucius Tullius to have been urged to hurry'

Future perfect gerundive

The active future perfect periphrastic tense is not found, but the passive occurs:

(Vitruvius) 'whenever (at some future time) it is necessary for a building to be made (using local stone), the stones for it should be quarried two years in advance'

For gerundive infinitive tenses see the section 'Gerundive infinitives' below.
Sequence of tenses rule

Latin speakers used subjunctive verbs to report questions, statements, and ideas. When the verb of telling or asking in the dominant clause is primary, the subjunctive verb in the dependent clause must also be primary; when the verb in the dominant clause is secondary or historic, the subjunctive verb in the dependent clause must also be in a historic tense. This rule can be illustrated with the following table:

Primary sequence: main verb present, future, or future perfect (or perfect with present-perfect meaning); dependent verb present subjunctive (simultaneous or later action) or perfect subjunctive (earlier action).
Historic sequence: main verb imperfect, perfect, pluperfect, or historic infinitive; dependent verb imperfect subjunctive (simultaneous or later action) or pluperfect subjunctive (earlier action).

This rule applies to all kinds of sentences where the dependent verb is put in the subjunctive mood, for example indirect speech, indirect questions, indirect commands, purpose clauses, most consecutive clauses, clauses after verbs of fearing, cum clauses and others. It does not apply to more loosely connected dependent clauses, such as relative clauses, where the verb is in the indicative.

The perfect tense appears in both rows, depending on whether it has a present perfect meaning ('have done', primary) or past simple meaning ('did', historic). But even when it has a present perfect meaning it is often treated as a historic tense (see further below).

Examples of primary sequence

Some examples of primary sequence are the following:

Present indicative + present subjunctive: (Cicero) 'they ask where it is'

Present subjunctive + present subjunctive: (Cicero) 'I hope it is true'

Present imperative + periphrastic perfect subjunctive: (Livy) 'tell us what you would have done'

Present indicative + perfect subjunctive: (Cicero) 'I wonder what the reason was that you changed your plan'

Examples of historic sequence

Imperfect indicative + imperfect subjunctive: (Cicero) 'people were asking where he was'

Imperfect subjunctive + pluperfect subjunctive: (Cicero) 'I wish it had been true'

Perfect indicative + imperfect subjunctive: (Cicero) 'the Senate decreed that he should go at once'

Historic infinitive + imperfect subjunctive: (Sallust) 'he constantly urged that they be always on their guard and prepared'

Perfect tense main verb

When the main verb is a perfect tense, it is usually considered to be a historic tense, as in the above example. Occasionally, however, when the meaning is that of an English present perfect, the perfect in a main clause may be taken as a primary tense, for example:

(Cicero) 'I haven't yet quite made my mind up whether our friend Trebatius has brought me more trouble or pleasure'

(Plautus) 'he has sent me home ahead of him so that I can take this news to his wife'

However, the historic sequence after a perfect with present perfect meaning is also very common, for example:

(Cicero) 'you have compelled me to confess'

(Plautus) 'at last I've got him to go away!'

Historic present main verb

When the main verb is a historic present, the dependent verb may be either primary or historic, but is usually primary:

(Livy) 'she sends a message that they should come' (both verbs primary)

(Livy) 'they send ambassadors (who were) to ask for peace' (second verb historic)

Sometimes both primary and historic are found in the same sentence. In the following example the first dependent verb is primary sequence, but the second is pluperfect:

(Cicero) 'he asked him to pay attention to what he had said'
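Before turning to the exceptions, the basic rule can also be stated procedurally. The following sketch is purely illustrative (Python is used only as a convenient notation; the function name and tense labels are invented for the example, and the exceptions discussed in the next section are deliberately ignored):

```python
# Illustrative sketch of the sequence of tenses rule described above.
# The perfect appears in both groups: here it is split by meaning.
PRIMARY = {"present", "future", "future perfect", "perfect (have done)"}
HISTORIC = {"imperfect", "perfect (did)", "pluperfect", "historic infinitive"}

def dependent_subjunctive(main_tense: str, anterior: bool) -> str:
    """Return the subjunctive tense expected in the dependent clause.

    anterior=True means the dependent action is earlier than that of the
    main verb; otherwise it is contemporaneous with it or later.
    """
    if main_tense in PRIMARY:
        return "perfect subjunctive" if anterior else "present subjunctive"
    if main_tense in HISTORIC:
        return "pluperfect subjunctive" if anterior else "imperfect subjunctive"
    raise ValueError(f"unrecognised main-clause tense: {main_tense!r}")

# 'they ask where it is': primary main verb, contemporaneous action
assert dependent_subjunctive("present", anterior=False) == "present subjunctive"
# 'the Senate decreed that he should go at once': historic, contemporaneous
assert dependent_subjunctive("perfect (did)", anterior=False) == "imperfect subjunctive"
```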
Exceptions to the sequence of tenses rule

There are frequent exceptions to the sequence of tenses rule, especially outside of indirect speech. For example, in the following sentence, a historic tense is followed by a perfect subjunctive:

(Cicero) 'what soldier was there who did not see her in Brundisium?'

In consecutive clauses also, a perfect tense in the main clause is often followed by a present or a perfect subjunctive:

(Cicero) 'Verres so harried Sicily for three years that it cannot be restored to its original state.'

In indirect conditional sentences, the periphrastic perfect subjunctive often remains even after a historic-tense main verb:

(Livy) 'nor was there any doubt that if they had been able, the enemies would have turned their backs'

The perfect tense can also replace a pluperfect tense with the meaning 'could have' even after a historic verb:

(Livy) 'there was no doubt that, if that delay had not intervened, the Carthaginian camp could have been captured on that day'

Caesar and Sallust can sometimes use a present subjunctive in historic sequence when the meaning is jussive (although this practice is not always followed):

(Sallust) 'he replied that if they wished to make any request from the Senate, they should disarm'

In general, in Livy, there is a tendency for a present or perfect tense of the original speech to be retained in historic sequence, while Cicero is more strict in following the historic sequence.

When the main verb is primary, an imperfect or pluperfect subjunctive in a clause that is already subordinate in the original sentence may often remain:

(Livy) 'tell us what you would have done if you had been censor?'

In the following, a perfect subjunctive (a primary tense) is treated as if it were a perfect indicative (a historic tense), and so is followed by an imperfect subjunctive in the subordinate clause:

(Cicero) 'but how it happened that you suspected this from my earlier letters, I don't know'

Infinitive tenses

Overview

The various tenses of the infinitive, illustrated from dūcō 'I lead', are as follows:

Active: present dūcere 'to lead'; perfect dūxisse 'to have led'; future ductūrus esse 'to be going to lead'; periphrastic perfect ductūrus fuisse 'to have been going to lead'.
Passive: present dūcī 'to be led'; perfect ductus esse 'to have been led'; perfect with fuisse ductus fuisse 'to have been led (earlier)'; future ductum īrī 'to be going to be led'.

The present passive and deponent infinitive usually ends in -rī (e.g. 'to be loved', 'to promise'), but in 3rd conjugation verbs in -ī only (e.g. 'to be captured', 'to follow'). In 1st conjugation verbs, the ending -āvisse is very often shortened to -āsse, e.g. amāsse 'to have loved'.

The irregular verbs possum 'I am able' and volō 'I want' have no future infinitive. In these verbs the present infinitive is used instead.

In early Latin (especially Plautus), the passive and deponent infinitive often ends in -ier: 'to be scolded', 'to be seen', 'to obtain', 'to wake up' etc.

Compound infinitives

The compound infinitives are usually found in the accusative case, as in most of the examples below. Occasionally, however, they are found in the nominative, for example with dīcitur 'he is said' or vidētur 'he seems':

(Cicero) 'he is said to be planning to come'

The participle can also change to show gender and plurality, as in the following where it is feminine plural:

(Cicero) 'it is agreed that an ambush was made'

However, the passive future infinitive (ductum īrī) is made using the supine of the verb. The -um therefore stays constant and does not change for gender or number. The future infinitive is used only for indirect statements.

Omission of esse

Often the esse part of a compound infinitive is omitted when combined with a participle or gerundive:

(Seneca) 'he heard that his brother had been killed'

(Caesar) 'Pompey reckoned that it was necessary for him to attempt something'

(Cicero) 'I am sure that I will come to the city soon'

Historic infinitive

The present infinitive is occasionally used in narrative as a tense in its own right. It usually describes a scene in which the same action was being done repeatedly. There are often two or more historic infinitives in succession.
When the subject is expressed, it is in the nominative case (distinguishing the historic infinitive from the accusative and infinitive of reported speech).

(Sallust) 'then there was a ghastly spectacle on the open plains: people kept chasing, fleeing, being killed, being captured'

(Cicero) 'the poor man kept shouting, as he was being dragged away, that he had done nothing'

(Cicero) 'he by turns kept begging them, then threatening, now offering hope, now fear'

'Could have done'

The perfect tense potuī with the infinitive can often mean 'I was able to' or 'I managed to':

(Cicero) 'Scipio managed to make Publius Rupilius Consul, but he wasn't able to do the same for Rupilius's brother Lucius'

However, it can also mean 'I could have done (but did not)':

(Cicero) 'what I was and what I could have been, I can now no longer be'

(Juvenal) '(Cicero) could have despised Antony's swords (i.e. would have had no reason to fear them), if he had spoken everything in this way!'

(Cicero) 'you ask what more Plancius could have achieved, if he had been the son of Gnaeus Scipio'

The pluperfect subjunctive after cum also means 'could have':

(Livy) 'although he could have led them out into battle, Aemilius held his troops inside the wall of the camp'

'Ought to have done'

'Ought to have done' is often expressed with a past tense of dēbeō 'I have a duty to' or oportet 'it is fitting' together with a present infinitive:

(Cicero) 'you ought not to have come to the Senate on that day'

(Cicero) 'you ought to have been put to death long ago by order of the Consul, Catiline!'

Sometimes this construction means 'it must be the case that...':

(Cicero) 'if there was a lot of dust on his shoes, he must have been coming from a journey'

Sometimes, in familiar style, oportuit can be used with the perfect infinitive passive:

(Cicero) 'this ought to have been done long ago'

The indirect speech form is regularly oportuisse with the present infinitive:

(Cicero) 'they say I ought not to have built the house'

Indirect commands with the infinitive

Indirect commands are made with two constructions: either ut (or nē) with the present or imperfect subjunctive, or the accusative and infinitive construction, using the present infinitive. The latter construction is used especially when the main verb is iubeō 'I order' or vetō 'I forbid', but also sometimes after imperō 'I command':

(Caesar) 'he ordered the signal to be given'

(Cicero) 'what tyrant has ever forbidden unhappy people to mourn?'

Indirect statements

The infinitive is very commonly used for the reported verb in indirect statements. Except with passive sentences using dīcitur 'he is said' or vidētur 'he seems' and the like, the subject of the quoted sentence is usually put into the accusative case and the construction is known as an 'accusative and infinitive'.

The rule of tense is that the present infinitive is used for any action or situation which is contemporary with the main verb, the perfect for actions or situations anterior to the main verb, and the future infinitive for actions or situations later than the main verb.

An exception to this rule is the verb meminī 'I remember', which when used of personal reminiscence (e.g. 'I remember being present') is usually followed by a present infinitive.
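The rule of tense stated above is mechanical enough to be set out schematically. The sketch below is again purely illustrative (the function name is invented, and the labels are simplified); it encodes the three-way choice together with the exception for meminī:

```python
# Illustrative sketch of the rule of tense in the accusative-and-infinitive
# construction; relative time is measured against the verb of speaking.
INFINITIVE = {
    "earlier": "perfect infinitive",      # 'you say your bodyguard was killed'
    "same time": "present infinitive",    # 'Solon pretended that he was mad'
    "later": "future infinitive",         # 'I am sure you will do everything'
}

def reported_infinitive(relative_time: str, personal_reminiscence: bool = False) -> str:
    """Return the infinitive tense used for the reported verb.

    personal_reminiscence covers the exception for meminī 'I remember',
    which takes a present infinitive even of a past event.
    """
    if personal_reminiscence and relative_time == "earlier":
        return "present infinitive"
    return INFINITIVE[relative_time]

assert reported_infinitive("earlier") == "perfect infinitive"
assert reported_infinitive("earlier", personal_reminiscence=True) == "present infinitive"
```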
Present infinitive

The present infinitive is used to express an action or situation simultaneous with the verb of speaking:

(Cicero) 'Solon pretended that he was mad'

(Livy) 'Brutus noticed that an attack was being made on him'

The present infinitive used after meminī when describing a personal reminiscence, however, refers to the past:

(Seneca the Elder) 'I remember going into his school when he was just about to recite a speech against Milo'

Perfect infinitive

In indirect statements, a perfect infinitive represents an event or a situation which occurred prior to the time of the verb of speaking. The first two examples have a verb of speaking in the present tense:

(Cicero) 'you say that your bodyguard was killed'

(Cicero) 'he is said to have written these books at that time'

In the following the main verb is in a past tense, so that in English the pluperfect is used when translating the infinitive:

([Caesar]) 'they learnt that Caesar had come'

(Servius to Cicero) 'he reported to me that Marcus Marcellus had been stabbed with a dagger and had received two wounds'

The infinitive can describe a situation in the past, earlier than the time of the verb of speaking:

(Livy) 'they say that his father was a butcher'

The perfect infinitive may also at times be translated with a continuous tense in English, representing an imperfect tense in the original speech:

(Nepos) 'it is said that at that time the mother of Pausanias was still living'

Often the verb of speaking is omitted if it can be understood from the context:

(Livy) 'a terrible thing had happened' (she said)

Perfect infinitive with fuisse

Occasionally a perfect passive infinitive is found formed with fuisse instead of esse. The meaning of the two forms is different. The perfect infinitive with esse merely refers to an event which took place before the time of the verb of speaking (e.g. 'he reported that Marcellus had been killed'). Thus there are two times involved, the time of the verb of speaking and the time of the event referred to. But when the perfect infinitive has fuisse there are three times involved: the time of the verb of speaking, the reference time, and a time earlier still when the event took place.

Just as a perfect tense can describe a current situation (e.g. 'he has died' = 'he is dead'), so a double perfect infinitive often describes a situation that existed at the time referred to, as in the following examples:

(Asconius) 'it appears from the speech that while the trial was in progress, an army had been stationed in the forum by Gnaeus Pompeius'

(Plautus) 'are you saying that (at the time when you saw me) I was wearing (lit. was dressed in) a lady's mantle?'
(Hyginus) 'Hercules eventually came to Prometheus, who, as we said above, had (earlier) been chained up / was at that time chained up in the Caucasus mountain'

(Cicero) 'finally a man who, it was established, had been stationed there to kill Gnaeus Pompeius was arrested with a weapon near the Senate'

(Cicero) 'it is sufficient to show that (at the time he was killed) for Clodius great hope had been placed in Milo's death'

In other examples, the double perfect infinitive describes a situation which existed earlier on, but which later changed:

(Cicero) 'I found out that (until you got to know him better) he had previously been unfairly suspected by you'

(Ovid) 'Zancle (= Messina in Sicily) too is said to have been formerly joined to Italy, until the sea took away the common boundary'

(Livy) 'Tanaquil addressed the people: she said that the king had (earlier) been knocked unconscious by the sudden blow, but he had now recovered'

(Cicero) 'and (they are saying) that (the statue) had originally been placed there and dedicated in the name of that general (but later Gaius Verres removed it)'

It is also possible to find this infinitive in contexts not in indirect speech. In the following example the infinitive refers to an action which took place at an earlier period before the time of the imagined harvest, which is itself in the past:

(Quintilian) 'it's possible for a place to have been sown (earlier) where (later) there was no harvest'

The distinction between the two types of perfect infinitive is available only in passive verbs. When the verb is active, the simple perfect infinitive is used in a similar context:

(Quintilian) 'it is possible for a woman who did not give birth to have (earlier) slept with a man'

Another example not in direct speech is the following, in which Martial is describing a magnificent he-goat depicted on a cup, and suggests that Phrixus's sister Helle might have preferred to have been riding on this rather than the ram which she fell off:

(Martial) 'his sister might well have preferred to have been carried by this (before she died)'

There appear to be no examples of a deponent verb in this tense of the infinitive in classical Latin.

Future infinitive

The future infinitive is used for events or situations in reported speech which are to take place later than the verb of speaking:

(Cicero) 'I am sure that you will do everything'

As with the perfect passive infinitive, esse is often omitted:

(Nepos) 'they would easily find the place where he was (he said)'

The verb possum 'I am able' has no future infinitive, but the present infinitive can have a future meaning:

(Caesar) 'they hope that they will be able to gain control of the whole of Gaul'

A future passive infinitive can be made using the supine of the verb combined with īrī, the passive infinitive of the verb eō 'I go'. This is comparatively rare. The ending does not change for gender or number:

(Terence) 'a rumour came that a gladiator show was going to be given'

Another way of expressing the future in indirect statement is to use the phrase fore ut 'it would be the case that'.
This can be used with an active or passive verb, and almost always with either the present or the imperfect subjunctive:

(Cicero) 'I hope that we shall have that good fortune'

(Tacitus) 'the astrologers replied that (Nero) would become Emperor, but that he would kill his mother'

(Cicero) 'they all thought that the poor man was going to be beaten with rods'

Sometimes futūrum esse ut or futūrum ut is used instead of fore ut:

(Cicero) '(the voice predicted) that unless precautions were taken, Rome would be captured'

Future perfect infinitive

To express a future perfect tense in indirect statement is possible only if the verb is passive or deponent. In the following examples, a perfect participle is combined with the future infinitive fore:

(Livy) 'the Carthaginians thought that the war was soon going to have been brought to an end'

(Cicero) 'if someone were to remove fear, all carefulness of life would have been removed too'

(Cicero) 'I can say this, that I will have achieved enough, if no danger redounds on me'

Very rarely fore ut can be followed by a perfect or pluperfect subjunctive. In the following example, the pluperfect subjunctive represents a future perfect indicative of direct speech:

(Cicero to Plancus) 'I hope (epistolary imperfect) that by the time you receive this letter, what I requested from you in my earlier letter will have been granted'

Periphrastic perfect infinitive

The periphrastic perfect infinitive (or potential infinitive) is formed from the future participle with fuisse. It is used in indirect speech for representing the main verb of an unreal conditional, whether referring to a past time or present time. In the following examples the verb refers to past time, and in the original sentence would have been pluperfect subjunctive:

(Livy) 'but take this message to him, that I would have died better if I had not married on the day of my funeral!'

(Cicero) 'he said that if I had not been consul, the republic would have been completely finished'

If the introductory verb is passive, such as vidētur 'he seems', the participle is nominative:

(Quintilian) 'it is unlikely that he would have told a lie unless he had been desperate'

The same tense of the infinitive can also represent the transformation into indirect statement of an imperfect potential subjunctive, referring to a hypothetical present situation:

(Cicero) 'do you think any old woman would ever be so crazy as to believe in dreams if they didn't come true by chance sometimes?'

(Pliny) 'what do we think would be happening to him if he were alive?' – 'he would be dining with us!'

(Cicero) 'they confess that they would not lift a finger for the sake of virtue, unless virtue itself gave pleasure'

In such sentences the imperfect subjunctive in the subordinate clause is left unchanged, despite the fact that the main verb is primary.

Just as fore ut is used to make a future passive infinitive, so futūrum fuisse ut with the imperfect subjunctive can be used to make a potential passive infinitive:

(Caesar) 'if at that very moment certain reports had not arrived bringing news of Caesar's victory, most people reckoned that (the town) would have been lost'

However this is very rare, and in fact only two instances have been noted (the other being Cicero, Tusc. 3.69).

Gerundive infinitives

Gerundive infinitives can be formed with esse, fuisse, and fore.
The present gerundive infinitive with esse is used in indirect speech to indicate something which needs to be done at the time of the verb of speaking:

(Cicero) 'I think something should be given to the doctor himself'

The ending of the gerundive varies according to gender and number. In the following it is feminine singular:

(Gellius) 'he says that in the name "Valerius" in the vocative case, the first syllable should be accented'

The order of the words can be reversed:

(Cicero) 'I do not think there is any need to doubt'

The perfect gerundive infinitive with fuisse indicates something that was necessary at a previous time:

(Cicero) 'I imagine that it was unavoidable for you to undertake that journey to Asia'

However, it can also refer to what ought to have been done at some time in the past:

(Livy) 'what, pray, ought a praetor to have done?'

In a conditional clause in reported speech the perfect gerundive infinitive can also refer to something that would have been necessary in some hypothetical situation:

(Tacitus) 'and (he said that) there wouldn't be anyone who would have to die sooner than himself if Silius were Emperor'

The future gerundive infinitive is made with fore. It is used in indirect statements to describe something which it is going to be necessary to do:

(Curtius) 'and so he had ordered his men to lay out their camp in that very place, delighted that it was going to be necessary to fight the decisive battle in that particular narrow plain' (see Battle of Issus)

It can also describe what must inevitably happen at a future time:

(Curtius) '(he had written that) a person would inevitably grow old just visiting such a huge country, even without fighting a battle'

Participles

Compared to Greek, Latin is deficient in participles, having only three, as follows, as well as the gerundive. The Romans themselves considered the gerundive also to be a participle, but most modern grammars treat it as a separate part of speech.

The different participles of the verb dūcō are shown below:

Present participle: dūcēns 'leading'
Perfect participle: ductus 'led, having been led'
Future participle: ductūrus 'going to lead'
Gerundive: dūcendus 'needing to be led'

The participles are all verbal adjectives, and so the ending changes according to case, gender, and number.

As the table shows, there is no passive present or future participle, and no active past participle. In deponent verbs, however, the perfect participle is active in meaning, e.g. profectus 'having set out', cōnātus 'having tried'.

In deponent verbs, the gerundive is usually used in impersonal form and with an active meaning: proficīscendum est 'it is necessary to set out', moriendum est 'it is necessary to die', cōnandum est 'it is necessary to try'; but some deponent verbs have a personal gerundive with a passive sense: hortandus 'needing to be encouraged', sequendus 'needing to be followed':

(Columella) 'a middle course must be followed'

The present and future participles of deponent verbs are active in form, e.g. moriēns 'dying', moritūrus 'about to die'.

Originally deponent verbs had no present participle and perfect participles such as ratus 'thinking' and veritus 'fearing' were used with a present meaning.

The verb sum 'I am' has no present or perfect participle in classical Latin, but only the future participle futūrus 'going to be'. The compound verbs absum and praesum, however, form the present participles absēns and praesēns.

The verbs volō 'I want' and possum 'I am able' have no future participle. Potēns, the present participle of possum, has a limited use as an adjective meaning 'powerful'.

The 3rd and 4th conjugation gerundive in older texts such as Plautus ends with -undus. Such endings are sometimes found even in classical Latin. Later, -endus became usual, but in the verb eō 'I go', the gerundive is always eundum 'necessary to go'.
Like the infinitive, the tenses of the participles are not absolute but relative to the main verb of the sentence. For example, a future participle can refer to an action in the past, provided it is later than the time of the main verb; and similarly the perfect participle can refer to an action in the future, provided it is earlier than the time of the main verb. Present participle The present participle usually describes a condition or an action which is happening at the time of the main verb: (Cicero) 'he was doused with boiling water' (Livy) 'having drawn his sword, he came to Lucretia while she was sleeping' Occasionally, a present participle can refer to an action which takes place immediately before the time of the main verb: (Livy) 'and to her husband, when he asked "are you all right?" she replied "not at all!" ' Present participles of deponent verbs are only very rarely found in early Latin (although Plautus is said to have written a play called 'Those dying together') but they became common later. Perfect participle The perfect participle refers to an action which took place before the time of the main verb, or to the state that something is in as a result of an earlier action: (Eutropius) 'he buried the dead (those who had been killed)' A deponent participle such as 'thinking, reckoning' or 'fearing' can often be translated as if it were present: (Nepos) 'thinking this a suitable time for pursuing his studies, he went to Athens' Future participle The future participle is most commonly used in the periphrastic tenses or in indirect statements (see examples above). 'An examination of the usage of the various authors shows that the form in -ūrus did not reach the full status of a participle till the time of Livy. Up to the time of Caesar and Cicero its use was almost restricted to a combination with the verb , making a periphrastic future tense' (Woodcock). Woodcock speculates that the -ūrus ending might originally have been a verbal noun. In later authors the future participle is sometimes used as in Greek to indicate purpose: (Pliny) 'he launched some warships, with a view to bringing help'
Bibliography
Aerts, Simon (2018). "Tense, Aspect and Aktionsart in Classical Latin: Towards a New Approach". Symbolae Osloenses 92(1): 107–149.
Andrewes, M. (1937). "Caesar's Use of Tense Sequence in Indirect Speech". The Classical Review, Vol. 51, No. 4 (Sep., 1937), pp. 114–116.
Andrewes, M. (1951). "The Function of Tense Variation in the Subjunctive Mood of Oratio Obliqua". The Classical Review, New Series, Vol. 1, No. 3/4 (Dec., 1951), pp. 142–146.
De Melo, Wolfgang (2007). "Latin prohibitions and the Origins of the u/w-Perfect and the Type amāstī". Glotta, Bd. 83 (2007), pp. 43–68.
De Melo, Wolfgang (2012). "Kuryłowicz's first 'law of analogy' and the development of passive periphrases in Latin". In Philomen Probert & Andreas Willi, Laws and Rules in Indo-European. Oxford, chapter 6, pp. 83–101.
Gildersleeve, B. L. & Gonzalez Lodge (1895). Gildersleeve's Latin Grammar. 3rd Edition. (Macmillan)
Goodrich, W. J. "On the Prospective Use of the Latin Imperfect Subjunctive in Relative Clauses". The Classical Review, Vol. 31, No. 3/4 (May - Jun., 1917), pp. 83–86.
Greenough, J. B. et al. (1903). Allen and Greenough's New Latin Grammar for Schools and Colleges. Boston and London.
Haverling, Gerd V.M. (2002). "On the semantic functions of the Latin perfect". Amsterdam Studies in Classical Philology, Volume 10.
Haverling, Gerd V.M. (2012). "Actionality, tense, and viewpoint". In Baldi, Philip; Cuzzolin, Pierluigi (eds.) (2012). Constituent Syntax: Adverbial Phrases, Adverbs, Mood, Tense, pp. 277–524.
Kennedy, Benjamin Hall (1871). The Revised Latin Primer. Edited and further revised by Sir James Mountford, Longman 1930; reprinted 1962.
Ker, James (2007). "Roman Repraesentatio". The American Journal of Philology, Vol. 128, No. 3 (Autumn, 2007), pp. 341–365.
Madvig, J. N. (1842). . In , vol. 2, pp. 218–226.
Pinkster, Harm (1987). "The Strategy and Chronology of the Development of Future and Perfect Tense Auxiliaries in Latin". In Martin Harris and Paolo Ramat (eds.) Historical Development of Auxiliaries (Trends in Linguistics. Studies and Monographs, 35). De Gruyter Mouton.
Pinkster, Harm (1990). Latin Syntax and Semantics. Chapter 11: The Latin tense system.
Postgate, J. P. (1905). "Repraesentatio Temporum in the Oratio Obliqua of Caesar". The Classical Review, Vol. 19, No. 9 (Dec., 1905), pp. 441–446.
Powell, J. G. F. (2005). "Cicero's adaptation of legal Latin in the de Legibus". In Reinhardt, T. et al. (eds). Aspects of the Language of Latin Prose.
Salmon, E. T. (1931). "A Note on Subordinate Clauses in Oratio Obliqua". The Classical Review, Vol. 45, No. 5 (Nov., 1931), p. 173.
Terrell, Glanville (1904). "The Apodosis of the Unreal Condition in Oratio Obliqua in Latin". The American Journal of Philology, Vol. 25, No. 1 (1904), pp. 59–73.
Viti, Carlotta (2010). "The non-literal use of tenses in Latin, with particular reference to the praesens historicum". Revue de linguistique latine du Centre Alfred Ernout. (Posted at Zurich Open Repository and Archive, University of Zurich.)
Wheeler, Arthur Leslie (1903). "The Imperfect Indicative in Early Latin". The American Journal of Philology, Vol. 24, No. 2 (1903), pp. 163–191.
Wigtil, David N. (1992). "Translating Latin Tenses into English". The Classical World, Vol. 85, No. 6 (Jul. - Aug., 1992), pp. 675–686.
Woodcock, E.C. (1959). A New Latin Syntax.
References
External links
University of Chicago Perseus under PhiloLogic searchable corpus.
Perseus under PhiloLogic home page
Online version of Allen & Greenough's Latin Grammar
Online version of Gildersleeve & Lodge's Latin Grammar
Latin grammar
45635
https://en.wikipedia.org/wiki/Top-down%20and%20bottom-up%20design
Top-down and bottom-up design
Top-down and bottom-up are both strategies of information processing and knowledge ordering, used in a variety of fields including software, humanistic and scientific theories (see systemics), and management and organization. In practice, they can be seen as a style of thinking, teaching, or leadership. A top-down approach (also known as stepwise design and stepwise refinement and in some cases used as a synonym of decomposition) is essentially the breaking down of a system to gain insight into its compositional sub-systems in a reverse engineering fashion. In a top-down approach an overview of the system is formulated, specifying, but not detailing, any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top-down model is often specified with the assistance of "black boxes", which makes it easier to manipulate. However, black boxes may fail to clarify elementary mechanisms or to be detailed enough to realistically validate the model. A top-down approach starts with the big picture and breaks it down from there into smaller segments. A bottom-up approach is the piecing together of systems to give rise to more complex systems, thus making the original systems sub-systems of the emergent system. Bottom-up processing is a type of information processing based on incoming data from the environment to form a perception. From a cognitive psychology perspective, information enters the eyes in one direction (sensory input, or the "bottom"), and is then turned into an image by the brain that can be interpreted and recognized as a perception (output that is "built up" from processing to final cognition). In a bottom-up approach the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which in turn are linked, sometimes at many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small but eventually grow in complexity and completeness. However, "organic strategies" may result in a tangle of elements and subsystems, developed in isolation and subject to local optimization as opposed to meeting a global purpose. Product design and development During the design and development of new products, designers and engineers rely on both a bottom-up and a top-down approach. The bottom-up approach is utilized when off-the-shelf or existing components are selected and integrated into the product. An example would include selecting a particular fastener, such as a bolt, and designing the receiving components such that the fastener will fit properly. In a top-down approach, a custom fastener would be designed such that it would fit properly in the receiving components. For perspective, for a product with more restrictive requirements (such as weight, geometry, safety, environment, etc.), such as a space-suit, a more top-down approach is taken and almost everything is custom designed. Computer science Software development Part of this section is from the Perl Design Patterns Book. In the software development process, the top-down and bottom-up approaches play a key role. Top-down approaches emphasize planning and a complete understanding of the system. It is inherent that no coding can begin until a sufficient level of detail has been reached in the design of at least some part of the system.
Top-down approaches are implemented by attaching stubs in place of the as-yet-unwritten modules. This, however, delays testing of the ultimate functional units of a system until significant design is complete. Bottom-up emphasizes coding and early testing, which can begin as soon as the first module has been specified. This approach, however, runs the risk that modules may be coded without having a clear idea of how they link to other parts of the system, and that such linking may not be as easy as first thought. Re-usability of code is one of the main benefits of the bottom-up approach. Top-down design was promoted in the 1970s by the IBM researcher Harlan Mills and by Niklaus Wirth. Mills developed structured programming concepts for practical use and tested them in a 1969 project to automate the New York Times morgue index. The engineering and management success of this project led to the spread of the top-down approach through IBM and the rest of the computer industry. Among other achievements, Niklaus Wirth, the developer of the Pascal programming language, wrote the influential paper Program Development by Stepwise Refinement. Since Niklaus Wirth went on to develop languages such as Modula and Oberon (where one could define a module before knowing about the entire program specification), one can infer that top-down programming was not strictly what he promoted. Top-down methods were favored in software engineering until the late 1980s, and object-oriented programming assisted in demonstrating the idea that both aspects of top-down and bottom-up programming could be utilized. Modern software design approaches usually combine both top-down and bottom-up approaches. Although an understanding of the complete system is usually considered necessary for good design, leading theoretically to a top-down approach, most software projects attempt to make use of existing code to some degree. Pre-existing modules give designs a bottom-up flavor. Some design approaches also use an approach in which a partially functional system is designed and coded to completion, and this system is then expanded to fulfill all the requirements for the project. Programming Top-down is a programming style, the mainstay of traditional procedural languages, in which design begins by specifying complex pieces and then dividing them into successively smaller pieces. The technique for writing a program using top-down methods is to write a main procedure that names all the major functions it will need. Later, the programming team looks at the requirements of each of those functions and the process is repeated. These compartmentalized sub-routines eventually will perform actions so simple they can be easily and concisely coded. When all the various sub-routines have been coded the program is ready for testing. By defining how the application comes together at a high level, lower level work can be self-contained. By defining how the lower level abstractions are expected to integrate into higher level ones, interfaces become clearly defined. In a bottom-up approach, the individual base elements of the system are first specified in great detail. These elements are then linked together to form larger subsystems, which in turn are linked, sometimes at many levels, until a complete top-level system is formed. This strategy often resembles a "seed" model, by which the beginnings are small, but eventually grow in complexity and completeness.
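To make the top-down procedure concrete, here is a minimal sketch in Python (all names are hypothetical): the main procedure is written first, and the modules it names are attached as stubs that return canned data, so the top level can be exercised before any module is actually designed.

def load_records(path):
    # Stub: returns canned data so the top level can be tested
    # before the real loading module is written.
    return [{"id": 1, "value": 10.0}, {"id": 2, "value": 12.5}]

def summarize(records):
    # Stub: placeholder for the eventual summary logic.
    return {"count": len(records)}

def report(summary):
    # Stub: a real implementation might format or persist the summary.
    print("records processed:", summary["count"])

def main():
    # The top level fixes the interfaces between modules early;
    # each stub is later refined into a full implementation.
    records = load_records("data.csv")
    summary = summarize(records)
    report(summary)

if __name__ == "__main__":
    main()

Refining each stub in later passes, without changing main(), mirrors the stepwise refinement idea discussed above.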
Object-oriented programming (OOP) is a paradigm that uses "objects" to design applications and computer programs. In mechanical engineering, with software programs such as Pro/ENGINEER, Solidworks, and Autodesk Inventor, users can design products as individual pieces rather than as parts of the whole and later add those pieces together to form assemblies, like building with Lego. Engineers call this piece-part design. In a bottom-up approach, good intuition is necessary to decide the functionality that is to be provided by the module. If a system is to be built from an existing system, this approach is more suitable as it starts from some existing modules. Parsing Parsing is the process of analyzing an input sequence (such as that read from a file or a keyboard) in order to determine its grammatical structure. This method is used in the analysis of both natural languages and computer languages, as in a compiler. Bottom-up parsing is a strategy for analyzing unknown data relationships that attempts to identify the most fundamental units first, and then to infer higher-order structures from them. Top-down parsers, on the other hand, hypothesize general parse tree structures and then consider whether the known fundamental structures are compatible with the hypothesis. See Top-down parsing and Bottom-up parsing (a minimal top-down parsing sketch appears below). Nanotechnology Top-down and bottom-up are two approaches for the manufacture of products. These terms were first applied to the field of nanotechnology by the Foresight Institute in 1989 in order to distinguish between molecular manufacturing (to mass-produce large atomically precise objects) and conventional manufacturing (which can mass-produce large objects that are not atomically precise). Bottom-up approaches seek to have smaller (usually molecular) components built up into more complex assemblies, while top-down approaches seek to create nanoscale devices by using larger, externally controlled ones to direct their assembly. Certain valuable nanostructures, such as silicon nanowires, can be fabricated using either approach, with processing methods selected on the basis of targeted applications. The top-down approach often uses the traditional workshop or microfabrication methods in which externally controlled tools are used to cut, mill, and shape materials into the desired shape and order. Micropatterning techniques, such as photolithography and inkjet printing, belong to this category. Vapor treatment can be regarded as a newer secondary top-down approach to engineering nanostructures. Bottom-up approaches, in contrast, use the chemical properties of single molecules to cause single-molecule components to (a) self-organize or self-assemble into some useful conformation, or (b) rely on positional assembly. These approaches utilize the concepts of molecular self-assembly and/or molecular recognition. See also Supramolecular chemistry. Such bottom-up approaches should, broadly speaking, be able to produce devices in parallel and much more cheaply than top-down methods, but could potentially be overwhelmed as the size and complexity of the desired assembly increases. Neuroscience and psychology These terms are also employed in neuroscience, cognitive neuroscience and cognitive psychology to discuss the flow of information in processing. Typically sensory input is considered "bottom-up", and higher cognitive processes, which have more information from other sources, are considered "top-down".
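Returning to the Parsing subsection above, here is a minimal sketch in Python of a top-down (recursive-descent) parser for the hypothetical toy grammar expr -> NUMBER ('+' NUMBER)*. The parser hypothesizes the structure of the whole expression and descends into its parts; a bottom-up (shift-reduce) parser would instead start from the tokens and reduce them into ever-larger units.

def tokenize(text):
    # Split "1 + 2" into the token list ["1", "+", "2"].
    return text.replace("+", " + ").split()

def parse_number(tokens):
    # NUMBER is the most fundamental unit of the grammar.
    if not tokens or not tokens[0].isdigit():
        raise SyntaxError("expected a number")
    return int(tokens.pop(0))

def parse_expr(tokens):
    # expr -> NUMBER ('+' NUMBER)* : the top-level hypothesis.
    value = parse_number(tokens)
    while tokens and tokens[0] == "+":
        tokens.pop(0)                  # consume '+'
        value += parse_number(tokens)  # descend into the next operand
    return value

print(parse_expr(tokenize("1 + 2 + 3")))  # prints 6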
A bottom-up process is characterized by an absence of higher-level direction in sensory processing, whereas a top-down process is characterized by a high level of direction of sensory processing by more cognition, such as goals or targets (Biederman, 19). According to college teaching notes written by Charles Ramskov, Rock, Neisser, and Gregory claim that the top-down approach involves perception as an active and constructive process. Additionally, it is an approach not directly given by stimulus input, but is the result of the interaction of stimulus, internal hypotheses, and expectation. According to Theoretical Synthesis, "when a stimulus is presented short and clarity is uncertain that gives a vague stimulus, perception becomes a top-down approach." Conversely, psychology defines bottom-up processing as an approach in which there is a progression from the individual elements to the whole. According to Ramskov, one proponent of the bottom-up approach, Gibson, claims that it is a process in which visual perception relies on information available from the proximal stimulus produced by the distal stimulus. Theoretical Synthesis also claims that bottom-up processing occurs "when a stimulus is presented long and clearly enough." Cognitively speaking, certain cognitive processes, such as fast reactions or quick visual identification, are considered bottom-up processes because they rely primarily on sensory information, whereas processes such as motor control and directed attention are considered top-down because they are goal directed. Neurologically speaking, some areas of the brain, such as area V1, mostly have bottom-up connections. Other areas, such as the fusiform gyrus, have inputs from higher brain areas and are considered to have top-down influence. The study of visual attention provides an example. If your attention is drawn to a flower in a field, it may be because the color or shape of the flower is visually salient. The information that caused you to attend to the flower came to you in a bottom-up fashion: your attention was not contingent upon knowledge of the flower; the outside stimulus was sufficient on its own. Contrast this situation with one in which you are looking for a flower. You have a representation of what you are looking for. When you see the object you are looking for, it is salient. This is an example of the use of top-down information. In cognitive terms, two thinking approaches are distinguished. "Top-down" (or "big chunk") is stereotypically the visionary, or the person who sees the larger picture and overview. Such people focus on the big picture and from that derive the details to support it. "Bottom-up" (or "small chunk") cognition is akin to focusing on the detail primarily, rather than the landscape. The expression "seeing the wood for the trees" references the two styles of cognition.
For example, when wanting to make an improvement in a hospital, a hospital administrator might decide that a major change (such as implementing a new program) is needed, and then the leader uses a planned approach to drive the changes down to the frontline staff (Stewart, Manges, Ward, 2015). A "bottom-up" approach to change is one that works from the grassroots: a large number of people working together cause a decision to arise from their joint involvement. A decision by a number of activists, students, or victims of some incident to take action is a "bottom-up" decision. A bottom-up approach can be thought of as "an incremental change approach that represents an emergent process cultivated and upheld primarily by frontline workers" (Stewart, Manges, Ward, 2015, p. 241). Positive aspects of top-down approaches include their efficiency and the comprehensive overview available at the higher levels. Also, external effects can be internalized. On the negative side, if reforms are perceived to be imposed 'from above', it can be difficult for lower levels to accept them (e.g. Bresser-Pereira, Maravall, and Przeworski 1993). Evidence suggests this to be true regardless of the content of reforms (e.g. Dubois 2002). A bottom-up approach allows for more experimentation and a better feeling for what is needed at the bottom. Other evidence suggests that there is a third, combination approach to change (see Stewart, Manges, Ward, 2015). Public health Both top-down and bottom-up approaches exist in public health. There are many examples of top-down programs, often run by governments or large inter-governmental organizations (IGOs); many of these are disease-specific or issue-specific, such as HIV control or smallpox eradication. Examples of bottom-up programs include many small NGOs set up to improve local access to healthcare. However, a lot of programs seek to combine both approaches; for instance, guinea worm eradication, a single-disease international program currently run by the Carter Center, has involved the training of many local volunteers, boosting bottom-up capacity, as have international programs for hygiene, sanitation, and access to primary health-care. Architecture Often, the École des Beaux-Arts school of design is said to have primarily promoted top-down design because it taught that an architectural design should begin with a parti, a basic plan drawing of the overall project. By contrast, the Bauhaus focused on bottom-up design. This method manifested itself in the study of translating small-scale organizational systems to a larger, more architectural scale (as with wood panel carving and furniture design). Ecology In ecology, top-down control refers to when a top predator controls the structure or population dynamics of the ecosystem. The interactions between these top predators and their prey are what influence lower trophic levels. Changes in the top level of the trophic hierarchy have an inverse effect on the lower trophic levels. Top-down control can have negative effects on the surrounding ecosystem if there is a drastic change in the number of predators. The classic example is that of kelp forest ecosystems. In such ecosystems, sea otters are a keystone predator. They prey on urchins, which in turn eat kelp. When otters are removed, urchin populations grow and reduce the kelp forest, creating urchin barrens. This reduces the diversity of the ecosystem as a whole and can have detrimental effects on all of the other organisms.
In other words, such ecosystems are not controlled by the productivity of the kelp but rather by a top predator. One can see the inverse effect that top-down control has in this example: when the population of otters decreased, the population of the urchins increased. Bottom-up control in ecosystems refers to ecosystems in which the nutrient supply, productivity, and type of primary producers (plants and phytoplankton) control the ecosystem structure. If there are not enough resources or producers in the ecosystem, there is not enough energy left for the rest of the animals in the food chain, because of biomagnification and ecological efficiency. An example would be how plankton populations are controlled by the availability of nutrients. Plankton populations tend to be higher and more complex in areas where upwelling brings nutrients to the surface. There are many different examples of these concepts. It is common for populations to be influenced by both types of control, and there are still debates going on as to which type of control affects food webs in certain ecosystems. Philosophy and ethics Top-down reasoning in ethics occurs when the reasoner starts from abstract universalisable principles and then reasons down from them to particular situations. Bottom-up reasoning occurs when the reasoner starts from intuitive particular situational judgements and then reasons up to principles. Reflective equilibrium occurs when there is interaction between top-down and bottom-up reasoning until both are in harmony. That is to say, when universalisable abstract principles are reflectively found to be in equilibrium with particular intuitive judgements. The process occurs as reasoners experience cognitive dissonance in trying to resolve top-down with bottom-up reasoning, and adjust one or the other, until they are satisfied that they have found the best combination of principles and situational judgements.
See also
The Cathedral and the Bazaar
Pseudocode
References cited
https://philpapers.org/rec/COHTNO
Citations and notes
Further reading
Luiz Carlos Bresser-Pereira, José María Maravall, and Adam Przeworski, 1993. Economic reforms in new democracies. Cambridge: Cambridge University Press.
Dubois, Hans F.W. 2002. "Harmonization of the European vaccination policy and the role TQM and reengineering could play". Quality Management in Health Care 10(2): 47–57.
Estes, J. A., M. T. Tinker, T. M. Williams, and D. F. Doak. "Killer Whale Predation on Sea Otters Linking Oceanic and Nearshore Ecosystems", Science, October 16, 1998: Vol. 282, no. 5388, pp. 473–476.
Galotti, K. (2008). Cognitive Psychology: In and out of the laboratory. USA: Wadsworth.
Goldstein, E.B. (2010). Sensation and Perception. USA: Wadsworth.
External links
"Program Development by Stepwise Refinement", Communications of the ACM, Vol. 14, No. 4, April (1971)
Integrated Parallel Bottom-up and Top-down Approach. In Proceedings of the International Emergency Management Society's Fifth Annual Conference (TIEMS 98), May 19–22, Washington DC, USA (1998).
Changing Your Mind: On the Contributions of Top-Down and Bottom-Up Guidance in Visual Search for Feature Singletons, Journal of Experimental Psychology: Human Perception and Performance, Vol. 29, No. 2, 483–502, 2003.
K. Eric Drexler and Christine Peterson, Nanotechnology and Enabling Technologies, Foresight Briefing No. 2, 1989.
Empowering sustained patient safety: the benefits of combining top-down and bottom-up approaches
Dichotomies
Information science
Neuropsychology
Software design
Management science
Hierarchy
118450
https://en.wikipedia.org/wiki/Innovation
Innovation
Innovation is the practical implementation of ideas that result in the introduction of new goods or services or improvement in offering goods or services. ISO TC 279 in the standard ISO 56000:2020 defines innovation as "a new or changed entity realizing or redistributing value". Others have different definitions; a common element in the definitions is a focus on newness, improvement, and the spread of ideas or technologies. Innovation often takes place through the development of more-effective products, processes, services, technologies, art works or business models that innovators make available to markets, governments and society. Innovation is related to, but not the same as, invention: innovation is more apt to involve the practical implementation of an invention (i.e. a new or improved ability) to make a meaningful impact in a market or society, and not all innovations require a new invention. Technical innovation often manifests itself via the engineering process when the problem being solved is of a technical or scientific nature. The opposite of innovation is exnovation. Definition Surveys of the literature on innovation have found a variety of definitions. In 2009, Baregheh et al. found around 60 definitions in different scientific papers, while a 2014 survey found over 40. Based on their survey, Baregheh et al. attempted to formulate a multidisciplinary definition and arrived at the following: "Innovation is the multi-stage process whereby organizations transform ideas into new/improved products, service or processes, in order to advance, compete and differentiate themselves successfully in their marketplace" In an industrial survey of how the software industry defined innovation, the following definition given by Crossan and Apaydin was considered to be the most complete, building on the Organisation for Economic Co-operation and Development (OECD) manual's definition: American sociologist Everett Rogers defined it as follows: "An idea, practice, or object that is perceived as new by an individual or other unit of adoption" According to Alan Altshuler and Robert D. Behn, innovation includes original invention and creative use, and they define innovation as the generation, admission and realization of new ideas, products, services and processes. Two main dimensions of innovation are degree of novelty (i.e. whether an innovation is new to the firm, new to the market, new to the industry, or new to the world) and kind of innovation (i.e. whether it is process or product-service system innovation). In organizational scholarship, researchers have also distinguished innovation as separate from creativity, by providing an updated definition of these two related constructs: Peter Drucker wrote: Creativity and innovation In general, innovation is distinguished from creativity by its emphasis on the implementation of creative ideas in an economic setting. Amabile and Pratt in 2016, drawing on the literature, distinguish between creativity ("the production of novel and useful ideas by an individual or small group of individuals working together") and innovation ("the successful implementation of creative ideas within an organization"). Types Several frameworks have been proposed for defining types of innovation. Sustaining vs disruptive innovation One framework proposed by Clayton Christensen draws a distinction between sustaining and disruptive innovations. Sustaining innovation is the improvement of a product or service based on the known needs of current customers (e.g.
faster microprocessors, flat screen televisions). Disruptive innovation, in contrast, refers to a process by which a new product or service creates a new market (e.g. transistor radio, free crowdsourced encyclopedia, etc.), eventually displacing established competitors. According to Christensen, disruptive innovations are critical to long-term success in business. Disruptive innovation is often enabled by disruptive technology. Marco Iansiti and Karim R. Lakhani define foundational technology as having the potential to create new foundations for global technology systems over the longer term. Foundational technology tends to transform business operating models as entirely new business models emerge over many years, with gradual and steady adoption of the innovation leading to waves of technological and institutional change that gain momentum more slowly. The advent of the packet-switched communication protocol TCP/IP, originally introduced in 1972 to support a single use case for United States Department of Defense electronic communication (email), and which gained widespread adoption only in the mid-1990s with the advent of the World Wide Web, is a foundational technology. Four types model Another framework was suggested by Henderson and Clark. They divide innovation into four types:
Radical innovation: "establishes a new dominant design and, hence, a new set of core design concepts embodied in components that are linked together in a new architecture." (p. 11)
Incremental innovation: "refines and extends an established design. Improvement occurs in individual components, but the underlying core design concepts, and the links between them, remain the same." (p. 11)
Architectural innovation: "innovation that changes only the relationships between them [the core design concepts]" (p. 12)
Modular innovation: "innovation that changes only the core design concepts of a technology" (p. 12)
While Henderson and Clark, as well as Christensen, talk about technical innovation, there are other kinds of innovation as well, such as service innovation and organizational innovation. Non-economic innovation The classical definition of innovation, being limited to the primary goal of generating profit for a firm, has led others to define other types of innovation such as: social innovation, sustainable or green innovation, and responsible innovation. History The word "innovation" once had a quite different meaning. The first full-length discussion about innovation is the account by the Greek philosopher and historian Xenophon (430–355 BCE). He viewed the concept as multifaceted and connected it to political action. The word for innovation that he uses, 'kainotomia', had previously occurred in two plays by Aristophanes ( – BCE). Plato (died BCE) discussed innovation in his Laws dialogue and was not very fond of the concept. He was skeptical of it both in culture (dancing and art) and in education (he did not believe in introducing new games and toys to the kids). Aristotle (384–322 BCE) did not like organizational innovations: he believed that all possible forms of organization had been discovered. Before the 4th century in Rome, the words novitas and res nova / nova res were used with either negative or positive judgment on the innovator. This concept meant "renewing" and was incorporated into the new Latin verb word innovo ("I renew" or "I restore") in the centuries that followed. The Vulgate version of the Bible (late 4th century CE) used the word in spiritual as well as political contexts.
It also appeared in poetry, mainly with spiritual connotations, but was also connected to political, material and cultural aspects. Machiavelli's The Prince (1513) discusses innovation in a political setting. Machiavelli portrays it as a strategy a Prince may employ in order to cope with a constantly changing world as well as the corruption within it. Here innovation is described as introducing change in government (new laws and institutions); Machiavelli's later book The Discourses (1528) characterises innovation as imitation, as a return to the original that has been corrupted by people and by time. Thus for Machiavelli innovation came with positive connotations. This is however an exception in the usage of the concept of innovation from the 16th century onward. No innovator from the Renaissance until the late 19th century ever thought of applying the word innovator to themselves; it was a word used to attack enemies. From the 1400s through the 1600s, the concept of innovation was pejorative – the term was an early-modern synonym for "rebellion", "revolt" and "heresy". In the 1800s people promoting capitalism saw socialism as an innovation and spent a lot of energy working against it. For instance, Goldwin Smith (1823–1910) saw the spread of social innovations as an attack on money and banks. These social innovations were socialism, communism, nationalization and cooperative associations. In the 20th century the concept of innovation did not become popular until after the Second World War of 1939–1945. This is the point in time when people started to talk about technological product innovation and tie it to the idea of economic growth and competitive advantage. Joseph Schumpeter (1883–1950) is often credited as the one who made the term popular; he contributed greatly to the study of innovation economics. In business and in economics, innovation can provide a catalyst for growth in an enterprise or even in an industry. With rapid advances in transportation and communications over the past few decades, the old concepts of factor endowments and comparative advantage which focused on an area's unique inputs are outmoded in today's global economy. Schumpeter argued that industries must incessantly revolutionize the economic structure from within, that is: innovate with better or more effective processes and products, as well as with market distribution (such as the transition from the craft shop to factory). He famously asserted that "creative destruction is the essential fact about capitalism". Entrepreneurs continuously search for better ways to satisfy their consumer base with improved quality, durability, service and price – searches which may come to fruition in innovation with advanced technologies and organizational strategies. A prime example of innovation involved the boom of Silicon Valley start-ups out of the Stanford Industrial Park. In 1957, dissatisfied employees of Shockley Semiconductor, the company of Nobel laureate and co-inventor of the transistor William Shockley, left to form an independent firm, Fairchild Semiconductor. After several years, Fairchild developed into a formidable presence in the sector. Eventually, these founders left to start their own companies based on their own unique ideas, and then leading employees started their own firms. Over the next 20 years this process resulted in the momentous startup-company explosion of information-technology firms. Silicon Valley began as 65 new enterprises born out of Shockley's eight former employees.
Another example involves business incubators – a phenomenon introduced in 1959 and subsequently nurtured by governments around the world. Such "incubators", located close to knowledge clusters (mostly research-based) like universities or other government excellence centres, aim primarily to channel generated knowledge to applied innovation outcomes in order to stimulate regional or national economic growth. In the 21st century the Islamic State (IS) movement, while decrying religious innovations, has innovated in military tactics, recruitment, ideology and geopolitical activity. Process of innovation An early model included only three phases of innovation. According to Utterback (1971), these phases were: 1) idea generation, 2) problem solving, and 3) implementation. By the time one completed phase 2, one had an invention, but until one got it to the point of having an economic impact, one didn't have an innovation. Diffusion wasn't considered a phase of innovation. The focus at this point in time was on manufacturing. All organizations can innovate, including for example hospitals, universities, and local governments. An organization requires a proper structure in order to retain competitive advantage. Organizations can also improve profits and performance by providing work groups opportunities and resources to innovate, in addition to employees' core job tasks. Executives and managers have been advised to break away from traditional ways of thinking and use change to their advantage. The world of work is changing with the increased use of technology, and companies are becoming increasingly competitive. Companies will have to downsize or reengineer their operations to remain competitive. This will affect employment, as businesses will be forced to reduce the number of people employed while accomplishing the same amount of work, if not more. For instance, former Mayor Martin O'Malley pushed the City of Baltimore to use CitiStat, a performance-measurement data and management system that allows city officials to maintain statistics on several areas from crime trends to the condition of potholes. This system aided in better evaluation of policies and procedures with accountability and efficiency in terms of time and money. In its first year, CitiStat saved the city $13.2 million. Even mass transit systems have innovated, from hybrid bus fleets to real-time tracking at bus stands. In addition, the growing use of mobile data terminals in vehicles, which serve as communication hubs between vehicles and a control center, automatically sends data on location, passenger counts, engine performance, mileage and other information. This tool helps to deliver and manage transportation systems. Still other innovative strategies include hospitals digitizing medical information in electronic medical records. For example, the U.S. Department of Housing and Urban Development's HOPE VI initiatives turned severely distressed public housing in urban areas into revitalized, mixed-income environments; the Harlem Children's Zone used a community-based approach to educate local area children; and the Environmental Protection Agency's brownfield grants facilitate turning over brownfields for environmental protection, green spaces, and community and commercial development. Sources of innovation Innovation may occur due to effort from a range of different agents, by chance, or as a result of a major system failure. According to Peter F.
Drucker, the general sources of innovations are changes in industry structure, in market structure, in local and global demographics, in human perception, in the amount of available scientific knowledge, and so on. In the simplest linear model of innovation the traditionally recognized source is manufacturer innovation. This is where an agent (person or business) innovates in order to sell the innovation. Specifically, R&D measurement is the commonly used input for innovation, in particular in the business sector, where it is named Business Expenditure on R&D (BERD) and has grown over the years at the expense of the declining R&D investment by the public sector. Another source of innovation, only now becoming widely recognized, is end-user innovation. This is where an agent (person or company) develops an innovation for their own (personal or in-house) use because existing products do not meet their needs. MIT economist Eric von Hippel has identified end-user innovation as, by far, the most important and critical in his classic book on the subject, "The Sources of Innovation". The robotics engineer Joseph F. Engelberger asserts that innovations require only three things:
a recognized need
competent people with relevant technology
financial support
The Kline chain-linked model of innovation places emphasis on potential market needs as drivers of the innovation process, and describes the complex and often iterative feedback loops between marketing, design, manufacturing, and R&D. Facilitating innovation Innovation by businesses is achieved in many ways, with much attention now given to formal research and development (R&D) for "breakthrough innovations". R&D helps spur patents and other scientific innovations that lead to productive growth in such areas as industry, medicine, engineering, and government. Yet innovations can also be developed by less formal on-the-job modifications of practice, through exchange and combination of professional experience, and by many other routes. Investigation of the relationship between the concepts of innovation and technology transfer has revealed overlap. The more radical and revolutionary innovations tend to emerge from R&D, while more incremental innovations may emerge from practice – but there are many exceptions to each of these trends. Information technology and changing business processes and management style can produce a work climate favorable to innovation. For example, the software tool company Atlassian conducts quarterly "ShipIt Days" in which employees may work on anything related to the company's products. Google employees work on self-directed projects for 20% of their time (known as Innovation Time Off). Both companies cite these bottom-up processes as major sources for new products and features. An important innovation factor is customers buying products or using services. As a result, organizations may incorporate users in focus groups (user-centered approach), work closely with so-called lead users (lead user approach), or let users adapt their products themselves. The lead user method focuses on idea generation based on leading users to develop breakthrough innovations. U-STIR, a project to innovate Europe's surface transportation system, employs such workshops. Regarding this user innovation, a great deal of innovation is done by those actually implementing and using technologies and products as part of their normal activities.
Sometimes user-innovators may become entrepreneurs selling their product; they may choose to trade their innovation in exchange for other innovations; or they may be adopted by their suppliers. Nowadays, they may also choose to freely reveal their innovations, using methods like open source. In such networks of innovation the users or communities of users can further develop technologies and reinvent their social meaning. One technique for innovating a solution to an identified problem is to actually attempt an experiment with many possible solutions. This technique was famously used by Thomas Edison's laboratory to find a version of the incandescent light bulb economically viable for home use, which involved searching through thousands of possible filament designs before settling on carbonized bamboo. This technique is sometimes used in pharmaceutical drug discovery. Thousands of chemical compounds are subjected to high-throughput screening to see if they have any activity against a target molecule which has been identified as biologically significant to a disease. Promising compounds can then be studied, modified to improve efficacy and reduce side effects, evaluated for cost of manufacture, and, if successful, turned into treatments. The related technique of A/B testing is often used to help optimize the design of web sites and mobile apps (a minimal sketch of the statistics behind such a test appears below). This is used by major sites such as amazon.com, Facebook, Google, and Netflix. Procter & Gamble uses computer-simulated products and online user panels to conduct larger numbers of experiments to guide the design, packaging, and shelf placement of consumer products. Capital One uses this technique to drive credit card marketing offers. Goals and failures Programs of organizational innovation are typically tightly linked to organizational goals and objectives, to the business plan, and to market competitive positioning. One driver for innovation programs in corporations is to achieve growth objectives. As Davila et al. (2006) note, "Companies cannot grow through cost reduction and reengineering alone... Innovation is the key element in providing aggressive top-line growth, and for increasing bottom-line results". One survey across a large number of manufacturing and services organizations found, ranked in decreasing order of popularity, that systematic programs of organizational innovation are most frequently driven by: improved quality, creation of new markets, extension of the product range, reduced labor costs, improved production processes, reduced materials, reduced environmental damage, replacement of products/services, reduced energy consumption, and conformance to regulations. These goals vary between improvements to products, processes and services and dispel a popular myth that innovation deals mainly with new product development. According to Andrea Vaona and Mario Pianta, some example goals of innovation could stem from two different types of technological strategies: technological competitiveness and active price competitiveness. Technological competitiveness tends to be pursued by smaller firms and can be characterized as "efforts for market-oriented innovation, such as a strategy of market expansion and patenting activity." Active price competitiveness, on the other hand, is geared toward process innovations that lead to efficiency and flexibility; it tends to be pursued by large, established firms as they seek to expand their market foothold.
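Returning to the A/B testing mentioned under Facilitating innovation, here is a minimal sketch in Python of the statistics behind such a test: a two-proportion z-test comparing the conversion rates of two page designs. The traffic numbers are hypothetical, and real experiments involve further care (sample-size planning, stopping rules) not shown here.

import math

def ab_z_score(conversions_a, visitors_a, conversions_b, visitors_b):
    # Pooled conversion rate under the null hypothesis that A and B convert equally.
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    # Standardized difference between the two observed conversion rates.
    return (conversions_b / visitors_b - conversions_a / visitors_a) / std_err

# Hypothetical experiment: 1,000 visitors per variant.
z = ab_z_score(conversions_a=100, visitors_a=1000, conversions_b=130, visitors_b=1000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a significant difference at the 5% level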
Most of the goals could apply to any organization, be it a manufacturing facility, marketing company, hospital or government. Whether innovation goals are successfully achieved or otherwise depends greatly on the environment prevailing in the organization. Conversely, failure can develop in programs of innovation. The causes of failure have been widely researched and can vary considerably. Some causes will be external to the organization and outside its influence or control. Others will be internal and ultimately within the control of the organization. Internal causes of failure can be divided into causes associated with the cultural infrastructure and causes associated with the innovation process itself. Common causes of failure within the innovation process in most organizations can be distilled into five types: poor goal definition, poor alignment of actions to goals, poor participation in teams, poor monitoring of results, and poor communication and access to information. Diffusion Diffusion of innovation research was first started in 1903 by the seminal researcher Gabriel Tarde, who first plotted the S-shaped diffusion curve. Tarde defined the innovation-decision process as a series of steps that include:
knowledge
forming an attitude
a decision to adopt or reject
implementation and use
confirmation of the decision
Once innovation occurs, innovations may be spread from the innovator to other individuals and groups. It has been proposed that the lifecycle of innovations can be described using the 's-curve' or diffusion curve. The s-curve maps growth of revenue or productivity against time (a minimal numerical sketch of such a curve appears below). In the early stage of a particular innovation, growth is relatively slow as the new product establishes itself. At some point, customers begin to demand the product and its growth increases more rapidly. New incremental innovations or changes to the product allow growth to continue. Towards the end of its lifecycle, growth slows and may even begin to decline. In the later stages, no amount of new investment in that product will yield a normal rate of return. The s-curve derives from an assumption that new products are likely to have a "product life" – i.e., a start-up phase, a rapid increase in revenue and eventual decline. In fact, the great majority of innovations never get off the bottom of the curve, and never produce normal returns. Innovative companies will typically be working on new innovations that will eventually replace older ones. Successive s-curves will come along to replace older ones and continue to drive growth upwards. In a diagram of successive s-curves, the first curve shows a current technology; the second shows an emerging technology that currently yields lower growth but will eventually overtake the current technology and lead to even greater levels of growth. The length of life will depend on many factors. Measures Measuring innovation is inherently difficult, as it implies commensurability so that comparisons can be made in quantitative terms. Innovation, however, is by definition novelty. Comparisons are thus often meaningless across products or services. Nevertheless, Edison et al. in their review of the literature on innovation management found 232 innovation metrics. They categorized these measures along five dimensions: inputs to the innovation process, output from the innovation process, effect of the innovation output, measures to assess the activities in an innovation process, and availability of factors that facilitate such a process.
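As a minimal numerical sketch of the diffusion s-curve described above, the logistic function is one common way to model such growth; the parameter values below (a ten-year horizon with the mid-point at year five) are illustrative assumptions, not empirical estimates.

import math

def s_curve(t, ceiling=100.0, midpoint=5.0, steepness=1.0):
    # Logistic growth: slow start, rapid middle, saturation near the ceiling.
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

for year in range(11):
    print(f"year {year:2d}: {s_curve(year):5.1f}% of eventual adoption")

Plotting these values against time reproduces the familiar slow start, rapid middle, and flattening tail; a successor technology would be a second such curve starting later with a higher ceiling.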
There are two different levels at which innovation is measured: the organizational level and the political level. Organizational-level The measurement of innovation at the organizational level relates to individuals, team-level assessments, and private companies from the smallest to the largest. Measurement of innovation in organizations can be conducted through surveys, workshops, consultants, or internal benchmarking. There is today no established general way to measure organizational innovation. Corporate measurements are generally structured around balanced scorecards which cover several aspects of innovation such as business measures related to finances, innovation process efficiency, employees' contribution and motivation, as well as benefits for customers. Measured values will vary widely between businesses, covering for example new product revenue, spending on R&D, time to market, customer and employee perception and satisfaction, number of patents, and additional sales resulting from past innovations. Political-level At the political level, measures of innovation are more focused on a country's or region's competitive advantage through innovation. In this context, organizational capabilities can be evaluated through various evaluation frameworks, such as those of the European Foundation for Quality Management. The OECD Oslo Manual (1992) suggests standard guidelines on measuring technological product and process innovation. Some people consider the Oslo Manual complementary to the Frascati Manual from 1963. The new Oslo Manual from 2018 takes a wider perspective on innovation and includes marketing and organizational innovation. These standards are used for example in the European Community Innovation Surveys. Another traditional way of measuring innovation has been expenditure, for example investment in R&D (Research and Development) as a percentage of GNP (Gross National Product). Whether this is a good measurement of innovation has been widely discussed, and the Oslo Manual has incorporated some of the critique of earlier methods of measuring. The traditional methods of measuring still inform many policy decisions. The EU Lisbon Strategy has set as a goal that average expenditure on R&D among member states should be 3% of GDP. Indicators Many scholars claim that there is a great bias towards the "science and technology mode" (S&T-mode or STI-mode), while the "learning by doing, using and interacting mode" (DUI-mode) is ignored, and measurements of and research about it are rarely done. For example, an institution may be high tech with the latest equipment but lack the crucial doing, using and interacting tasks important for innovation. A common industry view (unsupported by empirical evidence) is that comparative cost-effectiveness research is a form of price control which reduces returns to industry, and thus limits R&D expenditure, stifles future innovation and compromises new products' access to markets. Some academics claim cost-effectiveness research is a valuable value-based measure of innovation which accords "truly significant" therapeutic advances (i.e. those providing "health gain") higher prices than free market mechanisms would. Such value-based pricing has been viewed as a means of indicating to industry the type of innovation that should be rewarded from the public purse.
An Australian academic developed the case that national comparative cost-effectiveness analysis systems should be viewed as measuring "health innovation" as an evidence-based policy concept for valuing innovation distinct from valuing through competitive markets (a method which requires strong anti-trust laws to be effective), on the basis that both methods of assessing pharmaceutical innovations are mentioned in annex 2C.1 of the Australia–United States Free Trade Agreement. Indices Several indices attempt to measure innovation and rank entities based on these measures, such as:
Bloomberg Innovation Index
"Bogota Manual", similar to the Oslo Manual, focused on Latin America and the Caribbean countries
"Creative Class", developed by Richard Florida
EIU Innovation Ranking
Global Competitiveness Report
Global Innovation Index (GII), by INSEAD
Information Technology and Innovation Foundation (ITIF) Index
Innovation 360 – from the World Bank; aggregates innovation indicators (and more) from a number of different public sources
Innovation Capacity Index (ICI), published by a large number of international professors working in a collaborative fashion; the top scorers of ICI 2009–2010 were: 1. Sweden 82.2; 2. Finland 77.8; and 3. United States 77.5
Innovation Index, developed by the Indiana Business Research Center to measure innovation capacity at the county or regional level in the United States
Innovation Union Scoreboard
innovationsindikator for Germany, developed by the Federation of German Industries (Bundesverband der Deutschen Industrie) in 2005
INSEAD Innovation Efficacy Index
International Innovation Index, produced jointly by The Boston Consulting Group, the National Association of Manufacturers (NAM) and its nonpartisan research affiliate The Manufacturing Institute, a worldwide index measuring the level of innovation in a country; NAM describes it as the "largest and most comprehensive global index of its kind"
Management Innovation Index – Model for Managing Intangibility of Organizational Creativity
NYCEDC Innovation Index, by the New York City Economic Development Corporation, tracks New York City's "transformation into a center for high-tech innovation. It measures innovation in the City's growing science and technology industries and is designed to capture the effect of innovation on the City's economy"
OECD Oslo Manual, focused on North America, Europe, and other rich economies
State Technology and Science Index, developed by the Milken Institute, a U.S.-wide benchmark to measure the science and technology capabilities that furnish high-paying jobs based around key components
World Competitiveness Scoreboard
Rankings Common areas of focus include: high-tech companies, manufacturing, patents, post-secondary education, research and development, and research personnel. One ranking of the top 10 countries is based on the 2020 Bloomberg Innovation Index. However, studies may vary widely; for example the Global Innovation Index 2016 ranks Switzerland as number one, while countries like South Korea, Japan, and China do not even make the top ten. Rate of innovation In 2005 Jonathan Huebner, a physicist working at the Pentagon's Naval Air Warfare Center, argued on the basis of both U.S. patents and world technological breakthroughs, per capita, that the rate of human technological innovation peaked in 1873 and has been slowing ever since.
In his article, he asked "Will the level of technology reach a maximum and then decline as in the Dark Ages?" In later comments to New Scientist magazine, Huebner clarified that while he believed that we will reach a rate of innovation in 2024 equivalent to that of the Dark Ages, he was not predicting the recurrence of the Dark Ages themselves. John Smart criticized the claim and asserted that technological singularity researcher Ray Kurzweil and others showed a "clear trend of acceleration, not deceleration" when it came to innovations. Smart's Acceleration Studies Foundation replied to Huebner in the journal in which his article was published, citing Second Life and eHarmony as proof of accelerating innovation, to which Huebner responded. However, Huebner's findings were confirmed in 2010 with U.S. Patent Office data and in a 2012 paper. Innovation and development The theme of innovation as a tool for disrupting patterns of poverty has gained momentum since the mid-2000s among major international development actors such as DFID, the Gates Foundation (through its use of the Grand Challenge funding model), and USAID (through its Global Development Lab). Networks have been established to support innovation in development, such as D-Lab at MIT. Investment funds have been established to identify and catalyze innovations in developing countries, such as DFID's Global Innovation Fund, the Human Development Innovation Fund, and (in partnership with USAID) the Global Development Innovation Ventures. It has also been argued that the United States must continue to compete on the same level playing field as its competitors in federal research, and that this can be achieved by being strategically innovative through investment in basic research and science. Government policies Given its effects on efficiency, quality of life, and productive growth, innovation is a key driver in improving society and the economy. Consequently, policymakers have worked to develop environments that will foster innovation, from funding research and development to establishing regulations that do not inhibit innovation, funding the development of innovation clusters, and using public purchasing and standardisation to 'pull' innovation through. For instance, experts have advocated that the U.S. federal government launch a National Infrastructure Foundation, a nimble, collaborative strategic intervention organization that would house innovation programs from fragmented silos under one entity, inform federal officials on innovation performance metrics, strengthen industry–university partnerships, and support innovation economic development initiatives, especially to strengthen regional clusters. Because clusters are the geographic incubators of innovative products and processes, a cluster development grant program would also be targeted for implementation. By focusing on innovation in such areas as precision manufacturing, information technology, and clean energy, other areas of national concern would be tackled, including government debt, carbon footprint, and oil dependence. The U.S. Economic Development Administration recognizes this reality in its continued Regional Innovation Clusters initiative. It has likewise been argued that the United States needs to integrate its supply chain and improve its applied research capability and downstream process innovation. Many countries recognize the importance of innovation, including Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT); Germany's Federal Ministry of Education and Research; and the Ministry of Science and Technology in the People's Republic of China.
Russia's innovation programme is the Medvedev modernisation programme which aims to create a diversified economy based on high technology and innovation. The Government of Western Australia has established a number of innovation incentives for government departments. Landgate was the first Western Australian government agency to establish its Innovation Program. Some regions have taken a proactive role in supporting innovation. Many regional governments are setting up innovation agencies to strengthen regional capabilities. In 2009, the municipality of Medellin, Colombia created Ruta N to transform the city into a knowledge city. See also Communities of innovation Creative problem solving Diffusion (anthropology) Ecoinnovation Hype cycle Induced innovation Information revolution Innovation leadership Innovation system International Association of Innovation Professionals ISO 56000 Knowledge economy Obsolescence Open Innovation Open Innovations (Forum and Technology Show) Outcome-Driven Innovation Participatory design Product innovation Pro-innovation bias Sustainable Development Goals (Agenda 9) Technology Life Cycle Technological innovation system Theories of technology Timeline of historic inventions Toolkits for User Innovation UNDP Innovation Facility Virtual product development References Further reading Bloom, Nicholas, Charles I. Jones, John Van Reenen, and Michael Webb. 2020. "Are Ideas Getting Harder to Find?", American Economic Review, 110 (4): 1104–44. Design Innovation economics Product management Science and technology studies
32010711
https://en.wikipedia.org/wiki/RMS%20Lady%20of%20Mann
RMS Lady of Mann
TSS (RMS) Lady of Mann (No. 145307) was a passenger ship, built by Vickers Shipbuilding and Engineering for the Isle of Man Steam Packet Company at Barrow-in-Furness in 1930, at a cost of £249,073. Certificated to carry 2,873 passengers and 81 crew, she was commissioned to operate on the Island's busy Douglas–Liverpool and Douglas–Fleetwood routes, and had a maximum speed of 23 knots. Her hull was at first the company's conventional black, but was changed to white and green in 1933, only to revert to black after her war service. Service life The year 1930 saw the Isle of Man Steam Packet Company celebrate its centenary, and to mark this, Lady of Mann was to be the largest ship ever built for it to that date. The keel of Lady of Mann was laid on 26 October 1929, and by early spring the following year she was ready for launching. Lady of Mann was launched on 4 March 1930, by Katharine Stewart-Murray, Duchess of Atholl, the Lady of Mann, after whom she was named. Lady of Mann's Clyde trials recorded 22.79 knots, but her speed was often over 23 knots on regular service. She was driven by two sets of single-reduction geared turbines and developed a shaft horsepower of 11,500, with steam supplied by oil-fired cylindrical Scotch boilers. Lady of Mann's general design and machinery followed closely those of the company's previous ship, incorporating the improvements gained from three years of that vessel's operation. Her initial work was on the Douglas–Fleetwood service, where she took the place of Viking and was engaged on Sunday excursions from that port. During the 1930s, like her sisters, Lady of Mann was painted with a white hull over green. This was a summer colour scheme adopted by the company, and proved immensely popular with the public. All three sisters were exceedingly well-appointed vessels, and upon entering service each was met with high acclaim. War service Under the command of her Master, Captain T.C. (Daddy) Woods O.B.E., Lady of Mann joined seven of her Steam Packet sisters at Dunkirk and then at the evacuation of the north-western French ports. After this she spent four years on transport work from Lerwick. She then went south and was engaged in the D-Day landings on the Cherbourg Peninsula. Operation Dynamo Requisitioned as a personnel ship at the outbreak of war, she had a good turn of speed, and was able to get in and out of the Dunkirk bombardments and lift 4,262 men back to the relative safety of Dover and Folkestone. She remained for six hours in Dunkirk harbour on 31 May 1940, despite having been damaged by shellfire from shore batteries on her approach and being bombed by enemy aircraft. She emerged from the bombing with little damage and claimed one enemy aircraft shot down. She was back at Dunkirk in the early hours of 1 June and took off 1,500 casualties. The following day, 2 June, she again steamed into Dunkirk but was ordered back for lack of troops, as by this time the evacuation was drawing towards its close. She picked up 18 French soldiers from a small boat on her way back and landed them in England. On the night of 3 June, she made her last trip to the shattered harbour. She berthed alongside the East Pier at a little after midnight on the morning of 4 June, and left for England after embarking another 1,244 troops in little over an hour. Later that afternoon, Operation Dynamo ended. Over the period of the evacuation, Lady of Mann had lifted more troops to safety than any other vessel.
Operation Aerial Following Operation Dynamo, Lady of Mann was in action once more. She became part of the force of personnel ships assigned to Operation Aerial, the evacuation from the ports of north-west France. She was at Le Havre, Cherbourg and Brest, embarking troops as the enemy advanced in a vast encircling movement. Along with one of her Steam Packet sisters, Lady of Mann was one of the last three ships to leave Le Havre. It was estimated she had 5,000 troops on board as she departed under air attack. Trooping duties Another operation to which Lady of Mann was assigned was that of ferrying forces personnel from Queen Mary, which served throughout the war as a troop transport ship. Queen Mary would arrive in Belfast from Canada or the United States, turn around quickly and set off again westwards. Lady of Mann was one of several vessels that undertook this important task, taking troops on the final leg of their sea voyage to Greenock. Following her trooping duties, Lady of Mann was taken over by the Admiralty and converted to a Landing Ship Infantry (Hand Hoisting) vessel with a carrying capacity of six landing craft, 55 officers and 435 men. Operation Overlord On D-Day, 6 June 1944, as part of Operation Overlord, Lady of Mann was the headquarters ship of the senior officer of the 512th Assault Flotilla, responsible for the landings in the Juno area near Courseulles. Later in the month, while still on the Normandy operations, she was withdrawn for repairs. Personnel duties She then went back to her duties as a personnel carrier. She served as such for the remainder of the war, carrying on for some months afterwards moving troops and bringing out displaced persons. She mostly plied the Channel to Ostend and the Hook of Holland from such ports as Dover and Harwich. Post-war service Lady of Mann returned to her home port, Douglas, on 9 March 1946, where she was given a civic reception. A local paper that week said that during her war service Lady of Mann had carried more than 2,000,000 troops. She was reconditioned by Cammell Laird & Co at Birkenhead and, after her proud war service, returned to her duties with the Isle of Man Steam Packet Company on 14 June 1946. Like her sister ship, Lady of Mann only sailed during the summer season, and this may go some way to explaining their relatively long lives. Her career continued until August 1971. Lady of Mann made her final sailing from Liverpool at 09:00 hrs on 14 August. In the afternoon she made passage from Douglas to Ardrossan, returning the following day, Sunday, 15 August. After a final day in her home port, Douglas, she departed bound for Barrow-in-Furness, where she was laid up awaiting sale. On 14 December 1971, Lady of Mann was sold to Arnott Young and Co., Glasgow. She was taken under tow by the tugboat Wrestler on 29 December, arriving at Dalmuir on 31 December, for breaking up. Lady of Mann was an exceedingly popular ship. When she came to be broken up, enthusiasts wrote from all parts of Britain hoping to get souvenirs from her. One of the most elegant features of the "centenary Lady of Mann" was her main staircase, which was made of mahogany. It was widely admired, and was another example of the type of furnishing that subsequently had to vanish. Before the ship had finished her service life, the staircase was in contravention of modern fire regulations. Alterations had to be made in the mid-1960s; the stylish opening had to be blocked in with steel walls, and steel doors had to be fitted.
The appearance was largely sacrificed; the regulations concerning fire precautions were duly satisfied. But when Lady of Mann was due to be broken up in the early 1970s, the story of this much-desired staircase was not quite finished. An enthusiast living in Castletown succeeded in acquiring at least part of the woodwork, and it was reassembled at Lorne House in Castletown, where it remains to this day. Steam Packet officials affectionately described her as having been 'foreman built': when she was building in the Vickers Armstrong yards at Barrow, the builder was going through the very severe slump which followed the 1929 Wall Street crash. Most of the yard staff had been dismissed and only key men kept on. It was these men who built the ship, and a very fine one she proved. The name Lady of Mann was resurrected by the Isle of Man Steam Packet Company in 1976, when the fourth car ferry joined the fleet. References External links TSS Lady Of Mann 1930 ships Ships of the Isle of Man Steam Packet Company Ferries of the Isle of Man Steamships Steamships of the United Kingdom Merchant ships of the United Kingdom World War II merchant ships of the United Kingdom Ships built in Barrow-in-Furness
69739071
https://en.wikipedia.org/wiki/Transmit%20Security
Transmit Security
Transmit Security is a private cybersecurity and identity and access management company based in Tel Aviv, Israel and Boston, Massachusetts. Founded by Mickey Boodaei and Rakesh Loonkar in 2014, Transmit Security provides companies with customer authentication, identity orchestration, and workforce identity management services. In June 2021, the company completed a Series A funding round by raising $543 million, which was reported to be the largest Series A in cybersecurity history. Transmit Security is a FIDO Alliance Board member. History Transmit Security was co-founded in 2014 by Mickey Boodaei and Rakesh Loonkar. Boodaei and Loonkar previously founded Trusteer in 2006, which was acquired by IBM in 2013 for $1 billion. In November 2020, Transmit Security ranked 5th on Deloitte's "North America Technology Fast 500", a list of the fastest-growing tech companies in North America. In February 2021, Transmit Security joined the FIDO Alliance Board. In June 2021, Transmit Security completed its Series A funding round by raising $543 million from investors. It was reported to be the largest Series A in cybersecurity history. Primary investors included Insight Partners and General Atlantic, with additional investment from Cyberstarts, Geodesic, SYN Ventures, Vintage and Artisanal Ventures. In September 2021, Citi Ventures and Goldman Sachs Asset Management joined as investors. Operations Transmit Security’s main headquarters is located in Tel Aviv, Israel. Its North American headquarters is in Boston, Massachusetts. Additional offices are located in London, Berlin, Tokyo, Hong Kong, Madrid, Sao Paulo, and Mexico City. See also Secret Double Octopus List of unicorn startup companies References Security companies of Israel Security companies of the United States Software companies of Israel Software companies established in 2014 Identity management Identity management systems Federated identity Cloud applications Access control software Password authentication Computer security software Computer security software companies Computer access control
538743
https://en.wikipedia.org/wiki/Ken%20Olsen
Ken Olsen
Kenneth Harry "Ken" Olsen (February 20, 1926 – February 6, 2011) was an American engineer who co-founded Digital Equipment Corporation (DEC) in 1957 with colleague Harlan Anderson and his brother Stan Olsen. Background Kenneth Harry Olsen was born in Bridgeport, Connecticut and grew up in the neighboring town of Stratford, Connecticut. His father's parents came from Norway and his mother's parents from Sweden. Olsen began his career working summers in a machine shop. Fixing radios in his basement gave him the reputation of a neighborhood inventor. After serving in the United States Navy between 1944 and 1946, Olsen attended the Massachusetts Institute of Technology, where he earned both a BS (1950) and an MS (1952) degree in electrical engineering. Career pre-DEC During his studies at MIT, the Office of Naval Research of the United States Department of the Navy recruited Olsen to help build a computerized flight simulator. Also while at MIT he directed the building of the first transistorized research computer. Olsen was an engineer who had been working at MIT Lincoln Laboratory on the TX-2 project. Olsen's most important connection to Project Whirlwind was his work on the Memory Test Computer (MTC), described as "a special purpose computer built to test core memory for the Whirlwind." Unlike the 18-bit TX-0, which was "designed to be a predecessor for a larger 36 bit machine, the TX-2," Whirlwind and the MTC used 16 bits. Digital Equipment Corporation In 1957, Olsen and an MIT colleague, Harlan Anderson, decided to start their own firm. They approached American Research and Development Corporation, an early venture capital firm, which had been founded by Georges Doriot, and founded Digital Equipment Corporation (DEC) after receiving $70,000 for a 70% share. In the 1960s, Olsen received patents for a saturable switch, a diode transformer gate circuit, an improved version of magnetic core memory, and the line printer buffer. (Note that MIT professor Jay W. Forrester is generally credited with inventing the first practical magnetic core memory). Olsen was known throughout his career for his management style and his fostering of engineering innovation. Olsen's valuing of innovation and technical excellence spawned and popularized techniques such as engineering matrix management, that are broadly employed today throughout many industries. Olsen valued humility, he drove an economy car and kept a simple office in an old mill building. He also was an accomplished pilot and flew his own plane. In 1977, referring to computers used in home automation at the dawn of the home computer era, Olsen is quoted as saying "There is no reason for any individual to have a computer in his home." Olsen admitted to making the remark, even though he says his words were taken out of context and he was referring to computers set up to control houses, not PCs. According to Snopes.com, "the out-of-context misinterpretation of Olsen’s comments is considered much more amusing and entertaining than what he really meant, so that is the version that has been promulgated for decades now". In 1986, Fortune Magazine named Olsen "America's most successful entrepreneur", and the same year he received the IEEE Engineering Leadership Recognition Award. Olsen was the subject of a 1988 biography, The Ultimate Entrepreneur: The Story of Ken Olsen and Digital Equipment Corporation written by Glenn Rifkin and George Harrar. In 1993, the Institute of Electrical and Electronics Engineers awarded Olsen their IEEE Founders Medal. 
He was inducted as a Fellow of the Computer History Museum in 1996. He was awarded the Vermilye Medal in 1980. He was inducted as an Honorary Member of UPE (the International Honor Society for the Computing and Information Sciences) on October 8, 1975. In 2011, he was listed at #6 on the MIT150 list of the top 150 innovators and ideas from MIT for his work on the minicomputer. Later career history Commencing in 1987, Olsen described UNIX in public appearances as "snake oil". Some believed he was making a general characterization of UNIX, while others believed he was specifically referring to marketing that exaggerated its benefits. While Olsen believed VMS was a better solution for DEC customers and often talked of the strengths of the system, he did approve and encourage an internal effort to produce a native BSD-based UNIX product for the VAX line of computers, called Ultrix. However, this product line never received enthusiastic, comprehensive support at DEC. Olsen was forced to retire from DEC, stepping down as president in 1992. He subsequently became the chairman of Advanced Modular Solutions. Olsen was also a major contributor to The Family, a religious and political organization. Olsen was a trustee of Gordon College in Wenham, Massachusetts. There, the Ken Olsen Science Center was named after him in 2006, and dedicated on 27 September 2008. Its lobby features a Digital Loggia of Technology, documenting Digital's technology and history, and an interactive kiosk to which former employees have submitted their stories. Death Olsen died while in hospice care in Indianapolis, Indiana on February 6, 2011, aged 84. Gordon College, where he was a trustee and board member, announced his death, but did not reveal the cause. His family also did not comment on any details surrounding his death. Awards 1993: The Institute of Electrical and Electronics Engineers awarded Olsen the IEEE Founders Medal. 1996: The Computer History Museum named Olsen a Museum Fellow "for his introduction of the minicomputer and co-founding of Digital Equipment Corporation (DEC)." 1999: Olsen was elected to the American Philosophical Society. References Further reading Earls, Alan R. Digital Equipment Corporation. Arcadia Publishing, 2004. Schein, Edgar H. DEC Is Dead, Long Live DEC: The Lasting Legacy of Digital Equipment Corporation. Berrett-Koehler Publishers, 2004. External links 1988 Transcript of an Oral History Interview with Ken Olsen Ken Olsen, New England Economic Adventure Ken Olsen, co-founder of DEC, dies at 84, Mass High Tech DEC co-founder Kenneth Olsen mourned by computer industry Network World DEC : The mistakes that led to its downfall Birbeck College, University of London Archives and records Kenneth H. Olsen Collection at Baker Library Special Collections, Harvard Business School. 1926 births 2011 deaths Businesspeople from Bridgeport, Connecticut Businesspeople in computing Digital Equipment Corporation people Computer hardware engineers MIT School of Engineering alumni National Medal of Technology recipients American computer businesspeople American people of Norwegian descent MIT Lincoln Laboratory people 20th-century American businesspeople
61431066
https://en.wikipedia.org/wiki/Anders%20P.%20Ravn
Anders P. Ravn
Anders Peter Ravn (29 October 1947 – 1 August 2019) was a Danish computer scientist. Anders P. Ravn was born in 1947 in Caracas, Venezuela, the son of Niels and Henny (Sønder) Ravn. He arrived in Denmark in 1948. Ravn received a Master of Science (M.Sc.) degree in Computer Science and Mathematics from the University of Copenhagen in 1973 and a Doctor of Technology (Dr.Tech.) degree in Computer Science from the Technical University of Denmark in 1995. Between 1969 and 1973, Anders Ravn was a teaching assistant in the Department of Computer Science at the University of Copenhagen (DIKU). From 1972 to 1976, he was a systems programmer on minicomputers at the early Danish computer company A/S Regnecentralen. He returned to academia and rose from assistant professor (1976–80) to associate professor (1980–84) at DIKU. During 1982–83, he was a visiting scientist at IBM's Thomas J. Watson Research Center in Yorktown Heights, New York, United States. He joined the Department of Computer Science at the Technical University of Denmark (ID-DTH) as a lecturer (1984–89), followed by reader (1989–99) in the Department of Information Technology. During this time, he was also an adjunct lecturer in the Department of Mathematics at the Royal Veterinary and Agricultural University in Denmark (1985–89), a guest researcher at Oxford University (1989–90), and a visiting professor at the Institut für Praktische Mathematik und Informatik, University of Kiel in Germany (1994). He participated in the ESPRIT ProCoS project on Provably Correct Systems. He was a Research Professor in the Department of Computer Science at Aalborg University (1999–2004), before being appointed Professor of Computing Science there (2004–16). He then retired, becoming an Emeritus Professor. Ravn specialized in research into formal methods, especially for embedded systems and hybrid systems. In particular, he worked with Zhou Chaochen and Tony Hoare on the development of duration calculus for real-time systems. He became a member of the Dansk Selskab for Datalogi (DSfD) in 1972 and the Association for Computing Machinery (ACM) in 1983, and was a Life Member of the IEEE Computer Society. He was also a member of IFIP Working Group 2.2 (covering the Formal Description of Programming Concepts) from 2002. Anders Ravn received Ulrik and Marie Brinch's honorary award in 1996. He was honoured as a Knight of the Order of the Dannebrog in 2015. He married Annemette Lind on 31 August 1973 and they had two children. Ravn died in Copenhagen on 1 August 2019. Selected publications Ravn produced many research papers and edited several books. References External links Home page Anders P. Ravn on ResearchGate Anders P. Ravn publications index by DBLP Anders P. Ravn on the Danish National Research Database 1947 births 2019 deaths People from Caracas University of Copenhagen alumni Technical University of Denmark alumni Danish computer programmers Danish computer scientists IBM employees University of Copenhagen faculty Technical University of Denmark faculty Aalborg University faculty Formal methods people Knights of the Order of the Dannebrog
6747808
https://en.wikipedia.org/wiki/GNewSense
GNewSense
gNewSense is a Linux distribution that was active from 2006 to 2016. It was based on Debian and developed with sponsorship from the Free Software Foundation. Its goal was user-friendliness, with all proprietary components (e.g. binary blobs) and non-free software removed. The Free Software Foundation considered gNewSense to be composed entirely of free software. gNewSense took a relatively strict stance against proprietary software: for example, any documentation that gave instructions on installing proprietary software was excluded. gNewSense's last release was made in 2016 and it has not had a supported version since 2018. DistroWatch classifies gNewSense as "dormant". History The project was launched by Brian Brazil and Paul O'Malley in 2006. gNewSense was originally based on Ubuntu. With the 1.0 release, the Free Software Foundation provided assistance to gNewSense. With no releases in two years, on 8 August 2011 DistroWatch classified gNewSense as "dormant". By September 2012 DistroWatch had changed the status to "active" again, and on 6 August 2013 the first version directly based on Debian, gNewSense 3 "Parkes", was released. There have been several indications that the project might be restarted, including a website announcement in 2019, but it has remained inactive, with no releases since 2016. DistroWatch returned it to "dormant" status again in 2019. For a time, the home page of the project's website displayed a blank page with a meme labelling the Free Software Foundation a cult. After a short time, the website then redirected to the home page of the PureOS website. However, as of June 2021, it redirects to the FSF's list of free distributions. Technical aspects By default gNewSense uses GNOME. The graphical user interface can be customized with the user's choice of X display manager, window managers, and other desktop environments available to install through its hosted repositories. The Ubiquity installer allows installing to the hard disk from within the Live CD environment without the need to restart the computer prior to installation. Besides standard system tools and other small applications, gNewSense comes installed with the following software: the LibreOffice productivity suite, the GNOME Web internet browser, the Empathy instant messenger, and the GIMP for editing photos and other raster graphics. Common software development tools, including GCC, are installed by default. Installation The Live CD can be used to run the operating system and to install it onto disk. CD images are available for download. Versions gNewSense made four major releases. In 2016, gNewSense announced that the next version of gNewSense would be 5.0. Comparison with other distributions Non-free software repositories are not provided by the gNewSense project, and most non-free documentation and artwork have been removed. While it was based on Ubuntu, the "Universe" package repository was enabled by default. In order to avoid trademark problems that stem from the modification of Mozilla Firefox, gNewSense 1.1 rebranded it as "BurningDog". BurningDog likewise did not offer to install non-free plugins for various web media, such as Adobe Flash. gNewSense 2.0 abandoned BurningDog and adopted the Epiphany web browser (later renamed simply "Web"), a component of GNOME, as its default browser application, and came with recommendations and instructions to optionally compile and run GNU IceCat.
gNewSense 3.0 retains Web as the default browser, but also comes with a modified version of Debian's Iceweasel that does not offer access to proprietary add-ons. Debian is another Linux distribution known for strict licensing requirements and adherence to free software principles. While both Debian and gNewSense rigorously exclude non-free software and binary blobs from their official releases, Debian maintains and hosts unofficial repositories of non-free software and firmware binaries, and Debian free software sometimes depends upon or suggests the optional installation of proprietary software, under the theory that users' own informed discretion about the use of such software should be paramount, as expressed in Clause 5 of the Debian Social Contract (though Debian's democratic project management has seen this stance become a source of recurrent controversy). gNewSense, by contrast, does not provide any packages which depend on or suggest the use of non-free software, firmware, extensions, or plugins, nor does the gNewSense Project provide convenience access to proprietary software for any reason, seeing this as an abrogation of its commitment to the development of free software solutions. Similar to Debian, gNewSense policies do not allow the inclusion of documentation that is licensed under the GNU Free Documentation License with invariant sections. This excludes many manuals and documentation released by the GNU Project itself. While gNewSense was initially forked from Ubuntu (itself originally a fork of Debian) as a result of founding developer Paul O'Malley's prior work with Ubuntu, as of gNewSense 3.0 the distribution has tracked Debian as the base for its software distribution. In part this has been because the Debian Project does carefully disaggregate the free software in its official distribution from the proprietary software it provides courtesy access to. Indeed, many of the packages, including Debian-specific packages (such as Iceweasel and Icedove), ported to gNewSense are simply modified in such a way that they no longer provide such courtesy access to non-free software options. Limitations Since gNewSense's repositories contain only free software, support for hardware which requires firmware and for which no free firmware exists (such as some wireless network cards) is not available. By 1 May 2008, 3D graphics and application support had also been removed because of licensing issues with Mesa 3D. By January 13, 2009, those issues had been resolved, and 3D support became standard starting with the 2.2 release. Reception In reviewing gNewSense 3.0 in August 2013, Jesse Smith of DistroWatch noted that many of the applications provided, including OpenOffice.org 3, Debian's de-blobbed 2.6.32 Linux kernel (based on Linux-libre tools), Iceweasel 3.5 and GNOME 2.30, were quite out of date. Smith concluded this review with the following words: Generally speaking, I was happy with gNewSense 3.0. Being based on Debian, the distribution can be counted on to provide both stability and amazing performance. The distribution is lean, fast and uncluttered. The flip side to this is gNewSense's system installer and default package management tools are geared more toward experienced users and will probably provide a steep learning curve to novice Linux users. Not much is automated and there is a minimum of hand holding. The main feature of gNewSense, the lack of proprietary software, is also a double-edged blade.
On the one hand, it means the entire operating system can be audited, modified and redistributed. This is great from the perspective of software freedom. The fact that the distribution can play most multimedia formats and handled Flash content fairly well is a testament of the power of free and open source software. The one problem I ran into with gNewSense's software policy was with regards to my wireless network card. Most distributions ship with the non-free Intel firmware, but gNewSense doesn't include it and this means the distribution isn't a good fit with my laptop. It is, on the other hand, a great match with my desktop system. Richard Stallman, founder and former president of the Free Software Foundation, said he used gNewSense in January 2010 and was still using it in April 2014. Since then Stallman has switched to Trisquel. Serdar Yegulalp also reviewed gNewSense for InfoWorld. In a February 2014 post on Network World, Bryan Lunduke reviewed gNewSense 3.1. See also Comparison of Linux distributions List of distributions based on Debian GNU/Linux naming controversy Trisquel Parabola GNU/Linux-libre Uruk GNU/Linux References External links 2006 software Debian-based distributions Free software only Linux distributions Operating system distributions bootable from read-only media Linux distributions
38683780
https://en.wikipedia.org/wiki/Phase2%20International%20%28company%29
Phase2 International (company)
Phase2 International (Phase 2) is a privately owned, US-based cloud computing provider headquartered in Honolulu, HI. The company offers hosted business software and cloud servers, including information technology knowledge, security, compliance, and an integrated suite of software applications on a customizable monthly subscription basis. Phase 2 serves the Federal, SMB, and enterprise markets, providing access to IBM, Microsoft and other business software. Background Phase 2 International's stated mission is to participate in the ongoing business-software environment's transformation from purchased to cloud software models. In April 2008, the company introduced a reseller program to supplement its application offerings. Phase2's CEO is Kevin Doherty. Products Hosted IBM Lotus Notes, IBM Connections, IBM Lotus Sametime, IBM WebSphere Portal. Hosted Microsoft SharePoint, Microsoft Exchange, Microsoft Team Foundation Server, and Microsoft Project Server. Other products offered are Cloud Servers (Windows, Linux, Domino), and Security Hardened Cloud Servers with FISMA, HIPAA, GLBA compliance. References External links http://arquivo.pt/wayback/20081025005451/http%3A//www.phase2.com/ Software companies based in Hawaii Companies based in Honolulu Software companies established in 2002 2002 establishments in Hawaii Software companies of the United States Privately held companies based in Hawaii Companies established in 2002 Cloud computing providers Business software companies
41306226
https://en.wikipedia.org/wiki/IMS%20security
IMS security
IMS (IP Multimedia Subsystem) is a set of specifications for offering multimedia services over the IP protocol. This makes it possible to incorporate all kinds of services, such as voice, multimedia and data, on an accessible platform through any Internet connection (fixed or mobile). IMS's origin Initially defined by the industry forum 3G.IP (a set of companies belonging to the telecommunications sector), IMS was definitively adopted by 3GPP (3rd Generation Partnership Project) as part of its standardization of UMTS (Universal Mobile Telecommunications System) networks, specified in Releases 5 and 6. Architecture It can be divided into three layers: Application This is where the AS (Application Servers), the MRF (Media Resource Function) and the HSS (Home Subscriber Server) reside. The AS use SIP (Session Initiation Protocol) for signaling, the protocol used in establishing multimedia sessions, such as audio and video calls over the Internet. The services offered by telephony operators are hosted and run on the AS. The HSS is a device similar to the HLR of GSM technology; it is where the users' credentials are stored. Control Formed by different subsystems, among which is the IMS core. Other important devices in this layer are the CSCF (Call Session Control Function) nodes, which come in three types: P-CSCF (Proxy CSCF), S-CSCF (Serving CSCF) and I-CSCF (Interrogating CSCF). These subsystems are responsible, basically, for processing and routing the signaling; controlling the resources of the transport subsystem; registering and authenticating users; provisioning IMS services by diverting signaling to the application servers in question; and generating billing records. The MRF (Media Resource Function) provides functions related to media, such as the manipulation of media streams and the playing of tones and announcements. Each MRF is divided into an MRFC (Media Resource Function Controller) and an MRFP (Media Resource Function Processor). The MRFC is a signaling-plane node that interprets information coming from an AS and the S-CSCF in order to control the MRFP. The MRFP is a media-plane node used to mix, source or process media streams. Transport Composed of the UE (User Equipment), the access network, the NASS (Network Attachment Subsystem) and the RACS (Resource and Admission Control Subsystem). Network transport is performed over either IPv6 or IPv4, allowing the implementation of QoS, integrated security, autoconfiguration and so on. Security Having seen a little of what IMS is and which devices are involved, we turn to the IMS specifications relating to security. From the point of view of standardization, only one mechanism of authentication and access control exists, specified in 3GPP TS 33.203 (Access Security for IP-Based Services) and commonly called AKA (Authentication and Key Agreement). However, there are many other mechanisms for authentication and access control, defined to meet the needs of legacy terminals and to enable faster deployment. The most common are: Early IMS, defined by 3GPP for mobile access; this covers early IMS deployments that are not entirely compatible with the specifications, so that the standard security mechanisms are not applicable (examples include IPv4-based implementations and 2G devices). Digest authentication, from TISPAN and PacketCable. NASS-IMS bundled authentication, defined by TISPAN for fixed networks; an authentication method intended to reuse the access-network authentication at the IMS layer.
This mechanism was developed by TISPAN for fixed networks in which the user terminal does not have an ISIM (IP Multimedia Services Identity Module); its security is practically the same as that of the access network. Digest authentication with TLS, from PacketCable. The existing variety of authentication mechanisms used in networks causes problems related to interoperability, the coexistence of networks with different security levels, the selection of the most appropriate method during client registration, and so on. In this respect, 3GPP has developed the recommendation TR 33.803 to provide guidance in selecting the most appropriate authentication method. AKA (Authentication and Key Agreement) Security in IMS is based on a long-term secret key shared between the ISIM and the AUC (Authentication Centre) of the home network. ISIM: an application that runs on a UICC (Universal Integrated Circuit Card) smart card and contains the identification and authentication parameters of the IMS user. AUC: associated in this case with the HSS; it contains the information required to perform the authentication and encryption services, stores the authentication and encryption algorithms, and generates the necessary keys for each service. AKA is used to establish both the encryption keys (3DES or AES-CBC) and the integrity keys (HMAC-MD5 or HMAC-SHA-1). The following security associations are established: ISIM ↔ HSS: required for mutual authentication; both the HSS and the ISIM store a secret key and the private identity (IMPI) associated with that key. UE ↔ P-CSCF: ensures a secure link between the UE and the network. I/S-CSCF ↔ HSS: establishes a security association for information transfer between the I/S-CSCF and the HSS. P-CSCF ↔ I/S-CSCF: this security association applies only when the P-CSCF is not in the home network. I-CSCF ↔ S-CSCF: provides security between SIP nodes within a network. Registration process Before a user can get access to IP multimedia services, at least one IMPU (IP Multimedia Public Identity), such as a telephone number, must be registered, and the IMS network must authenticate the corresponding IMPI (IP Multimedia Private Identity). The registration process is initiated by the IMS terminal sending a SIP REGISTER message, carrying its IMPI and IMPU, to the P-CSCF, which forwards the message to the I-CSCF. The I-CSCF sends a Diameter UAR (user-authorization request) for the user who sent the REGISTER message to the HSS, which responds with a Diameter UAA message and in doing so informs the I-CSCF of the address of the S-CSCF assigned to the user. The I-CSCF then forwards the registration message to the S-CSCF, which in turn sends a Diameter MAR message including the IMPI. The IMPI is used by the HSS to calculate the Authentication Vector (AV), generating the quintuplet <RAND, AUTN, XRES, CK, IK>, which it returns to the S-CSCF in a Diameter MAA message. The S-CSCF then sends a SIP 401 Unauthorized message, accompanied by four of the five parameters making up the AV, to the I-CSCF, which forwards it to the P-CSCF. This message is an indication that the network is requesting that the terminal use its security algorithms in order to authenticate. The P-CSCF in turn forwards the message to the UE, but leaves it only two parameters: RAND and AUTN. Since the terminal holds the same secret key as the corresponding HSS, the user can calculate the expected AUTN; if this matches the one received from the network, the network is considered legitimate.
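To make the challenge–response at the heart of AKA registration concrete, the following sketch mirrors the two roles: the HSS/AuC generating a simplified authentication vector, and the UE verifying AUTN (thereby authenticating the network) before answering with RES. It is a toy illustration only: HMAC-SHA-256 stands in for the 3GPP MILENAGE functions (f1–f5), the function names are invented for the example, and no real key sizes, sequence numbers or message encodings are modelled.

```python
import hmac, hashlib, os

K = os.urandom(16)  # long-term secret shared by the ISIM and the HSS/AuC

def prf(key: bytes, label: bytes, data: bytes) -> bytes:
    """Keyed pseudo-random function standing in for MILENAGE f1-f5."""
    return hmac.new(key, label + data, hashlib.sha256).digest()

def hss_make_av(k: bytes):
    """HSS/AuC side: build a simplified authentication vector."""
    rand = os.urandom(16)               # fresh challenge
    autn = prf(k, b"autn", rand)        # network authentication token
    xres = prf(k, b"res", rand)[:8]     # expected response
    ck = prf(k, b"ck", rand)[:16]       # cipher key
    ik = prf(k, b"ik", rand)[:16]       # integrity key
    return rand, autn, xres, ck, ik

def ue_respond(k: bytes, rand: bytes, autn: bytes) -> bytes:
    """UE/ISIM side: verify AUTN first, then derive the response RES."""
    if not hmac.compare_digest(autn, prf(k, b"autn", rand)):
        raise ValueError("AUTN mismatch: network not authenticated")
    return prf(k, b"res", rand)[:8]

# The quintuplet travels HSS -> S-CSCF (Diameter MAA);
# only RAND and AUTN reach the UE (in the SIP 401 Unauthorized).
rand, autn, xres, ck, ik = hss_make_av(K)
res = ue_respond(K, rand, autn)
assert hmac.compare_digest(res, xres)   # the S-CSCF's comparison of RES with XRES
```

Note how this design yields mutual authentication: the UE proves knowledge of K by producing RES, the network proves knowledge of K by producing a valid AUTN, and CK/IK fall out of the same exchange as session keys.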
The UE also calculates its response RES, which is sent in another SIP REGISTER message together with the IMPI and IMPU. This message reaches the P-CSCF, which forwards it to the I-CSCF. The I-CSCF again sends a Diameter UAR to the HSS, which responds with the address of the S-CSCF in a Diameter UAA message. The I-CSCF then forwards the registration message with the RES to the S-CSCF. The latter sends a Diameter SAR message to the HSS, which replies with a Diameter SAA. If the RES parameter sent by the user is equal to the XRES that the HSS calculated during the first registration attempt, then the HSS authenticates the user by means of the Diameter SAA message. Finally, the S-CSCF sends a SIP 200 OK message to the P-CSCF, which forwards it to the user. Security processes are always executed by the home network, even when the user is roaming. Confidentiality of SIP messages between the UE and the P-CSCF is provided through the use of IPsec. IMS Access Security for SIP According to the 3GPP specifications, user authentication must be based on Digest AKA, somewhat analogous to UMTS (Universal Mobile Telecommunications System) access authentication but adapted to SIP. The 3GPP specification TS 33.203 states that signalling between the user agent and the P-CSCF should be protected with IPsec ESP (Encapsulating Security Payload) in transport mode. However, the use of IPsec in this mode was not suitable for use in fixed networks: the problem lay in the interaction between IPsec and NAT (Network Address Translation), so TISPAN (Telecommunications and Internet converged Services and Protocols for Advanced Networking) selected UDP (User Datagram Protocol) encapsulation of IPsec instead. GAA (Generic Authentication Architecture) All the security mechanisms seen so far are used in the access network and IMS domains. However, it is possible to extend these authentication mechanisms to the application or service layer using what is known as GAA. GAA is the authentication architecture that makes it possible to extend the existing IMS authentication mechanisms to the application/service layer. GAA employs two authentication mechanisms. One is based on the possession of a shared secret between the communicating entities (GBA - Generic Bootstrapping Architecture), derived from the keys used in AKA authentication; the other is based on asymmetric cryptography (public and private keys) and digital certificates, i.e. a PKI (SSC - Support for Subscriber Certificates). Authentication using a shared secret Of the two types of implementation, the more widely used is the one based on shared secrets. The great advantage of GAA/GBA is that it allows the creation of security associations between the user agent and the various applications. These associations consist primarily of a shared key (the shared secret), which allows subsequent authentication of the user agent to the application and, if necessary, other security features such as guarantees of the confidentiality and integrity of information (through encryption and digital signatures), non-repudiation (digital signatures), and so on. The problem with these mechanisms is how to agree on this shared secret. As mentioned earlier, the secret is derived from the authentication keys used in AKA. A new network element called the BSF (Bootstrapping Server Function) is introduced; the BSF has an interface with the HSS. The UE runs AKA with the HSS via the BSF. An application server called a NAF (Network Application Function) can then retrieve the resulting session key from the BSF, together with subscriber profile information.
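Assuming GBA's key derivation can be caricatured with a single HMAC call (the real key derivation function is specified in 3GPP TS 33.220 and uses a precise input encoding that is not reproduced here), the sketch below shows the essential property of bootstrapping: the UE and the BSF independently derive the same NAF-specific key from the bootstrapped key Ks, so the NAF itself never learns Ks. All identifiers are illustrative.

```python
import hmac, hashlib, os

def derive_ks_naf(ks: bytes, rand: bytes, impi: str, naf_id: str) -> bytes:
    """Derive a NAF-specific key from the bootstrapped key Ks (simplified)."""
    # "gba-me" is the label used for ME-based GBA; the input encoding
    # here is simplified and is not the 3GPP wire format.
    data = b"gba-me" + rand + impi.encode() + naf_id.encode()
    return hmac.new(ks, data, hashlib.sha256).digest()

ks = os.urandom(32)    # stands in for Ks = CK || IK from the bootstrap AKA run
rand = os.urandom(16)  # RAND from the same AKA run

# UE and BSF compute the key independently from the shared bootstrap state:
ue_key = derive_ks_naf(ks, rand, "user@operator.example", "naf.operator.example")
bsf_key = derive_ks_naf(ks, rand, "user@operator.example", "naf.operator.example")
assert ue_key == bsf_key  # the BSF hands this key to the NAF; Ks itself stays private
```

Because each derived key is bound to a specific NAF identifier, the compromise of one application server does not expose the keys shared with any other.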
In this way, NAF application servers and the UE share a secret key that can then be used to secure the application, in particular to authenticate the UE and the NAF at the beginning of the application session (and possibly to protect the integrity and/or confidentiality of the exchange). The communication between the UE and the BSF, as well as that between the NAF and the BSF and between the BSF and the HSS, is independent of the application. Authentication based on asymmetric cryptography and certificates An alternative to the use of shared secrets for authentication is the use of asymmetric cryptography. This means that the entity that wants to be authenticated must have a key pair (public and private) and a digital certificate validating that key pair. Once in possession of the key pair and the certificate, the UE can use them to produce digital signatures. The main disadvantage of this type of authentication is that it requires a PKI, and asymmetric-key operations demand more computational effort. If a customer wishes to use asymmetric cryptography, a digital certificate issued by a CA (Certification Authority) is needed. The certificate binds a public key to the identity of its owner. If a mobile subscriber wants to have and use a key pair (private and public), the certificate must be pre-installed, or the subscriber must have the means to generate or obtain a key pair and, likewise, to dynamically obtain a digital certificate. To obtain a digital certificate dynamically, a UE sends a certificate request to a PKI portal, and the PKI portal must authenticate the request. The key pair and digital certificate can also be used for integrity and confidentiality protection, but this is outside the scope of the GAA. Liberty Alliance and SSO (Single Sign-On) The Liberty Alliance is a group of companies dedicated to creating specifications related to authentication, privacy and identity management for online users and applications. One of the concepts it handles is SSO (Single Sign-On), in which a user needs to authenticate only once to access various applications or services. 3GPP has introduced a recommendation for combining GAA/GBA with the SSO and authentication mechanisms defined by the Liberty Alliance and SAML v2.0. Thus, it is possible to benefit both from strong authentication based on AKA and from the SSO mechanisms defined by the Liberty Alliance and SAML v2.0. However, the biggest disadvantage of GAA/GBA is that it is designed for user agents that have some kind of smart-card support. OMA has specified authentication solutions for terminals that do not have an ISIM card, for example solutions based on HTTP Digest with user credentials. Attacks Network snooping An attack on confidentiality. Without protection such as SSL/TLS or IPsec, it is easy for an attacker to capture SIP signalling and RTP (Real-time Transport Protocol) traffic using tools like Wireshark. Another attack against confidentiality can be carried out by using scanning tools to gather sensitive and valuable information about IMS components, operating systems and network topology. Session hijacking An attack on the integrity of a session. The attacker can insert malicious packets into a session and can even replace some of the traffic; for example, the attacker can send a SIP re-INVITE to modify the parameters of the session. DoS (Denial of Service) An attack against availability. The attacker sends a large number of datagrams in a short period of time, causing degradation of performance or completely stopping services. Examples include TCP SYN floods and UDP floods.
P-CSCF discovery An attack on integrity and availability. The P-CSCF is the UE's entry point to the IMS. DHCP (Dynamic Host Configuration Protocol) and DNS (Domain Name System) are commonly used to discover the P-CSCF. An attacker can break the P-CSCF discovery process by DNS cache poisoning, so that a false domain name or IP address is returned to the UE; the result is that the UE either cannot register to the network or is registered to a fake server. Service abuse An attack on the availability and integrity of IMS. Authorized users can use services more than expected or gain access to services that are not allowed for them. Toll fraud An attack on accounting. An attacker can forge a UE and send a BYE request to the CSCF. The CSCF will conclude that the session has ended and stop accounting, while the UE does not release the media streams; this means that the UE continues exchanging flows without being charged. This threat is called media theft, and it exploits the lack of effective control over media streams. Permission acquisition An attack on authentication. An attacker can obtain the authentication password by cracking it or by other methods. A UE without a SIM card uses, as mentioned above, HTTP Digest; this method is based on a username and password, which usually does not offer a high security level, and HTTP Digest is subject to several attacks, such as brute force or replay attacks. Mitigation To mitigate these attacks on the IMS network, the following must be ensured: subscribers access the IMS network with strong authentication; network security, i.e. the flows exchanged between clients and application servers, is protected; and systems and applications are secured. See also 4G Softswitch Voice over IP Mobile VoIP SIMPLE 3GPP Long Term Evolution, UMB (4G network efforts that will use technologies like IMS) Mobile Broadband Peer-to-peer video sharing Video share Image share IP Connectivity Access Network Text over IP Multimedia Telephony Voice call continuity Push to talk IMPS Rich Communication Suite Service Capability Interaction Manager References http://www.etsi.org/deliver/etsi_tr/133900_133999/133919/06.02.00_60/tr_133919v060200p.pdf http://www.3gpp.org/ http://edadmovil.wordpress.com/casos-de-desarrollo/implementacion-ims/funcionamiento-ims/ https://web.archive.org/web/20131213060910/http://departamento.pucp.edu.pe/ingenieria/images/documentos/seccion_telecomunicaciones/IMS/Capitulo%207%20QoS%20y%20Seguridad.pdf https://web.archive.org/web/20131218095643/http://blog.s21sec.com/2008/12/ims-introduccin.html http://www.efort.com/media_pdf/IMS_ESP.pdf IMS services Telecommunications infrastructure
58177023
https://en.wikipedia.org/wiki/David%20X.%20Marks%20Tennis%20Stadium
David X. Marks Tennis Stadium
The David X. Marks Tennis Stadium is a tennis facility located on the campus of the University of Southern California in Los Angeles. The facility, built in 1971, serves as the home of the USC Trojans men's and women's tennis teams. The facility provides six outdoor tennis courts and has a seating capacity of 1,000. The stadium is named for David X. Marks, a World War I pilot. Renovations In 2015, The Buntmann Family Tennis Center was added and includes a new entrance and lobby. The renovation included suites with lockers, showers, team meeting rooms and lounges. The renovation also included a new training room, storage area and large multi-purpose room. In 2005, a new LED scoreboard was installed. In 2002, 700 chair-back seats were added replacing bleacher seating. Events The 1974 men’s NCAA Tennis Tournament was held at the stadium. Gallery References External links David X. Marks Tennis Stadium at usctrojans.com College tennis venues in the United States Tennis venues in Los Angeles USC Trojans men's tennis USC Trojans women's tennis USC Trojans sports venues Sports venues completed in 1971 1971 establishments in California
7552376
https://en.wikipedia.org/wiki/Broadcast%20signal%20intrusion
Broadcast signal intrusion
A broadcast signal intrusion is the hijacking of the broadcast signals of radio or television stations, cable television feeds or satellite signals without permission or license. Hijacking incidents have involved local TV and radio stations as well as cable and national networks. Although television, cable and satellite broadcast signal intrusions tend to receive more media coverage, radio station intrusions are more frequent, as many stations simply rebroadcast a signal received from another radio station. All that is required is an FM transmitter that can overpower the frequency of the station being rebroadcast. Other methods that have been used in North America to intrude on legal broadcasts include breaking into the transmitter area and splicing audio directly into the feed. Because a cable television operator sits in the signal path between individual stations and the system's subscribers, broadcasters have also fallen victim to signal tampering on cable systems on multiple occasions. Notable incidents Southern Television On November 26, 1977, an audio message, purporting to come from outer space and conveyed by an individual named 'Vrillon' of the 'Ashtar Galactic Command', was broadcast during an ITN news bulletin on Southern Television in the United Kingdom. The intrusion left the video signal largely unaffected but replaced the programme audio with a six-minute speech about the destiny of the human race and a disaster to affect "your world and the beings on other worlds around you". The IBA confirmed that it was the first time such a transmission had been made. None of the individuals responsible for the intrusion have been identified, although some people believe that the message could have originated from an alien race. Others have pointed out that the "alien" voice spoke with a British accent and that the "aliens" only managed to hijack a TV station in rural southern England, making it more likely that the source was a local human, possibly even one or more broadcast students or a Southern Television employee, pulling a hoax. Captain Midnight At 12:32 a.m. Eastern Time on April 27, 1986, HBO (Home Box Office) had its satellite signal feed from its operations center in Hauppauge, New York, on Long Island, interrupted by a man calling himself "Captain Midnight". The interruption occurred during a presentation of The Falcon and the Snowman. The intrusion lasted between 4 and 5 minutes and was seen by viewers along the East Coast. The man, who during the interruption also threatened to hijack the signals of Showtime and The Movie Channel, was later caught and identified as John R. MacDougall of Ocala, Florida. He was prosecuted shortly thereafter. Authorities were tipped off by a man from Wisconsin who, at a phone booth at a rest area off Interstate Highway 75 in Gainesville, Florida, overheard MacDougall bragging about the incident. MacDougall's guilt was confirmed by an FCC investigation which showed he was alone at Central Florida Teleport at the time of the incident; a recording of the jamming video showed that the text was created by a character generator at that location. He was charged with transmitting without a radio license in violation of 47 USC 301. MacDougall pled guilty and was fined $5,000 and served a year of probation. Ambiguity about whether the 47 USC 301 charge was applicable, since the transmitter itself had a license, resulted in the passage of a new statute which made satellite jamming a felony.
MacDougall was able to perform the intrusion while working a second job as a master control operator at a satellite teleport in Florida, a job he had taken to make ends meet because of declining income from his satellite TV equipment business. He stated that he did it because he was frustrated with HBO's service rates, which he felt were hurting his business selling satellite dishes (hence his second job at the teleport). The message, placed over SMPTE color bars, broadcast by MacDougall read:

GOODEVENING HBO
FROM CAPTAIN MIDNIGHT
$12.95/MONTH?
NO WAY!
[SHOWTIME/MOVIE CHANNEL BEWARE!]

Max Headroom incidents
On the night of November 22, 1987, an unidentified man wearing a Max Headroom mask appeared on the signals of two television stations in Chicago, Illinois. WGN-TV, owned by Tribune Broadcasting, was hijacked first; the intrusion occurred during the sports report on its 9:00 p.m. newscast and lasted about 25 seconds. The second interception occurred at about 11:00 p.m. on PBS affiliate WTTW, during an episode of the Doctor Who serial "Horror of Fang Rock", and lasted almost 90 seconds; the man was seen and heard uttering garbled remarks before dropping his pants, partially exposing his buttocks, and being spanked by a woman in a French maid costume with a flyswatter before normal programming resumed. None of the individuals responsible for the intrusions have been identified. The incident was covered by the CBS Evening News the next day, which also mentioned the HBO incident, and was talked about nationwide.

Playboy Channel religious message
A broadcast of softcore pornography on the Playboy Channel was disrupted with a text-only religious message on Sunday, September 6, 1987. The message read, "Thus sayeth the Lord thy God: Remember the Sabbath and keep it holy. Repent, the kingdom of Heaven is at hand" (from the Bible verses Exodus 20:8 and Matthew 4:17). Thomas Haynie, an employee of the Christian Broadcasting Network, was convicted of satellite piracy in connection with the incident. Haynie, who maintained his innocence, was the first person convicted under the new federal law which had made satellite hacking a felony following the Captain Midnight and Max Headroom incidents. According to investigators, it was the religious content of the transmission and the type of equipment used that drew them to CBN. The jamming signal left behind subtle technical clues that were captured on a VHS recording made at the Playboy Channel's uplink at the time of the event – like finding "fingerprints" in the video. After investigators were confident that they had identified the brand of transmitter and character generator from the video, they concluded that CBN was the culprit. Haynie, of Virginia Beach, Virginia, was on duty at his job as an uplink engineer at the time of the jamming. CBN maintained that the FCC's case was entirely circumstantial, since there were no witnesses and the signal could not be traced to a point of origin. During the investigation, experts on both sides attempted to recreate the incident with CBN's equipment; according to CBN spokesman Dino McCann, they were unsuccessful. Furthermore, CBN asserted that there was not enough transmitter power for Haynie to jam Playboy's signal, but during the trial government witnesses said the CBN station was capable of interfering with satellite transmissions. After initially being deadlocked, the jury eventually sided with the prosecution and convicted Haynie on two of six counts.
(Haynie was acquitted of similar charges of interfering with the American Exxxtasy channel; a recording of that event was of such poor quality that it was unusable.) Haynie received three years of probation, a $1,000 fine, and 150 hours of community service.

"Telewizja Solidarność" (TV Solidarity)
In September 1985, four astronomers at Poland's University of Toruń (Zygmunt Turło, Leszek Zaleski, Piotr Łukaszewski and Jan Hanasz) used a home computer, a synchronizing circuit, and a transmitter to superimpose messages in support of the labor movement Solidarność (Solidarity) over state-run television broadcasts in Toruń, including an episode of 07 zgłoś się. The messages read "Dość podwyżek cen, kłamstw i represji. Solidarność Toruń" ("Enough price increases, lies, and repressions. Solidarity Toruń") and "Bojkot wyborów naszym obowiązkiem." ("It is our duty to boycott the election", referring to the Sejm elections of 1985), accompanied by the Solidarity logo. The four men were eventually discovered and were charged with "possession of an unlicensed radio transmitter and publication of materials that could cause public unrest". At their sentencing, the judge noted their prize-winning work in the Polish scientific community and gave each of them probation and a fine equivalent to US$100 (3,000,000 old złoty, or 300 PLN in today's currency).

Soviet pirate broadcasting
Broadcast signal intrusion was a common practice in the Soviet Union during the 1970s and 1980s due to the absence of, and high demand for, any non-government broadcasting. As early as 1966, there was a report of an incident in the city of Kaluga where an 18-year-old had broadcast a hoax announcement that nuclear war had broken out with the United States. In the mid-1970s so many pirates were operating around the city of Arkhangelsk, especially at night, that local people were urged to telephone reports of violators to a special number. Hijackers using call signs such as "Cucumber", "Radio Millimeter", "Green Goat" and "Fortune" would overpower the signal on relay stations for wired radio networks to transmit their programming, or transmit into wired radio networks during gaps in normal programming. Even though the incidents appear to have been fairly common according to reports from the BBC, most were not publicly acknowledged for policy reasons. Reports in newspapers typically referred to the hijackers as "radio hooligans broadcasting drivel, rudeness, vulgarity, uncensored expressions, and trashy music". State news organizations also attempted smear campaigns against such pirate broadcasters, claiming that they had interfered with a state frequency used by Aeroflot, "preventing a doctor in an air ambulance from transmitting information about a patient".

2006 Lebanon War
During the 2006 Lebanon War, Israel overpowered the satellite transmission of Hezbollah's Al Manar TV to broadcast anti-Hezbollah propaganda. One spot showed Hezbollah leader Hassan Nasrallah with crosshairs superimposed on his image, followed by three gunshots and a voice saying "Your day is coming", together with shots of the Israeli Air Force destroying targets in Lebanon.

WBLI and WBAB
On the morning of Wednesday, May 17, 2006, the signal of Babylon, New York FM radio station WBAB was hijacked for about 90 seconds while the intruders broadcast the song "Nigger Hatin' Me" by the 1960s-era white supremacist country singer Johnny Rebel. Roger Luce, the station's morning host, said at the time, "Whatever that was - it was very racist...
22 years at this radio station - I've never seen anything like this." The incident made all the local newscasts that night, and the next morning it made the front page of Newsday with the headline "JACKED FM". The station's new general manager, John Shea, said, "I've only been here a week and we get hijacked." Former program director John Olsen said, "This was not some child's prank, this was a federal offense." The hijack was likely accomplished by overpowering the studio transmitter link (STL) signal to the transmitter in Dix Hills. A signal hijacking with the same song had happened to WBAB's sister station WBLI about two weeks earlier, on a Sunday night.

"The Winker's Song" incidents
In June and July 2017, Mansfield 103.2 FM, a local radio station in Mansfield, Nottinghamshire, England, had its signal intruded upon at least eight times during outside broadcasts. During these intrusions, "The Winker's Song (Misprint)" was played. The perpetrator has not been identified.

Italian Senate and "The Tifa Incident"
On Monday, January 17, 2022, an online event hosted by the Italian Senate's Movimento 5 Stelle and broadcast live by the Senato della Repubblica was interrupted by roughly a minute of CGI Final Fantasy pornography depicting the character Tifa Lockhart in an explicit sex scene. Overlapping the content's original audio was a man speaking English with a thick Italian accent, stating, "I used to be a sex offender, but now I am a kindergarten teacher." Unlike prior broadcast signal intrusions, no satellites or stations were manipulated or otherwise tampered with. The event was hosted via Zoom, and the join information was posted publicly by a member of the Senate, allowing the perpetrator, a user called "Alex Spence," to join and immediately begin screen sharing without restriction, which was then broadcast to viewers watching on television. The act of joining Zoom sessions with malicious intent is known as "zoombombing", but the fact that the session was also broadcast live set a new precedent. Following the incident, Senator Maria Laura Mantovani reported the intrusion to Italian authorities, and the footage from the day was subsequently removed from official channels. As of January 2022, the person's true identity remains unknown. The incident went viral in the following week, prompting online artists and fans of the character alike to create memes about the event. A trend began on Twitter of illustrating Tifa Lockhart (in some instances partially or fully nude) draped in an Italian flag, and other memes associating her with the country in some fashion were also generated. The voice actor for Tifa in the pornographic video expressed her embarrassment on Twitter at "know[ing] my moaning was heard by a winner of a Nobel Peace Prize."

Other incidents

Television signal intrusions
During the second inning of Game 1 of the 1988 World Series (the game known for Kirk Gibson's famous walk-off home run) on October 15, 1988, NBC affiliate WMGT-TV in Macon, Georgia aired a 10-second sex scene from another program during the broadcast; the unidentified technician held responsible was fired. Three days later the station said the matter was under investigation, with production manager L.A. Sturdivant describing the interruption as an accident.
On January 3, 2007, in Australia, during a broadcast of an episode of the Canadian television series Mayday (known in Australia as Air Crash Investigation) on the Seven Network, an audio message unexpectedly started playing, in which a voice clearly said, in an American accent, "Jesus Christ, help us all, Lord." The message repeated over and over during the show for a total of six minutes. A spokesman for Seven later denied that the transmission was a prank or a security breach, claiming instead that the line was part of the original broadcast and was actually "Jesus Christ, one of the Nazarenes", although there is hardly any similarity between the two phrases. A subsequent investigation by independent researchers revealed that the invading transmission was actually from a videotaped news broadcast of a civilian truck being ambushed in the Iraq War. It remains unknown whether this was an intentional act of television piracy or a genuine glitch of some sort.

On March 12, 2007, during a 9 p.m. airing of an Ion Life rebroadcast of a Tom Brokaw-hosted NBC special, State of U.S. Health Care, on Phoenix, Arizona TV station KPPX-TV, a station employee inserted about 30 seconds of a pornographic film into the broadcast, prompting telephone calls to local news media outlets and the local cable provider, Cox Communications. Parent company Ion Media Networks conducted a rigorous investigation into what it called "an intolerable act of human sabotage" and shortly thereafter announced that the employee found to be responsible had been fired, threatening further legal action.

On June 17, 2007, an intrusion incident occurred on Czech Television's Sunday morning programme Panorama, which shows panoramic shots of Prague and various locations across the country, especially mountain resorts. One of the cameras, located in Černý Důl in Krkonoše, had been tampered with on-site and its video stream replaced with the hackers' own, which contained CGI of a small nuclear explosion in the local landscape, ending in white noise. The broadcast looked authentic enough; the only clue for viewers was the Web address of the artist group Ztohoven, which had already performed several reality-hacking incidents before. Czech Television considered legal action against the group, and tourism workers in the area expressed outrage, since the program serves to promote tourism in the areas shown.

On July 13, 2007, a grainy photo of a man and woman interrupted the digital (high-definition) signal of Washington, D.C. ABC affiliate WJLA-TV; the picture was not transmitted over the analog signal. The incident was deemed a genuine signal intrusion by various websites but has since been confirmed to be the result of an older HDTV encoder malfunctioning in the early morning hours and going undetected. Station management stated that the image was from an advertisement for The Oprah Winfrey Show.

On February 11, 2013, Great Falls, Montana CBS affiliate KRTV had its Emergency Alert System hijacked with an audible message warning viewers that "the bodies of the dead are rising from their graves and attacking the living." Later the same night in Marquette, Michigan, and in the early morning hours in La Crosse, Wisconsin, the same type of hijacking, with the same reference to a "zombie invasion", was made over the EAS systems of CBS affiliate WKBT-DT, ABC affiliate WBUP and PBS member station WNMU during primetime programming.
Shortly afterwards, PBS affiliate KENW of Portales, New Mexico was struck by a similar hacking incident repeating similar information about zombies; this incident led to the arrest of the hacker behind the intrusions at the four television stations.

On February 26, 2022, the hacker group Anonymous, as part of the cyber war it declared on Russia, hacked a number of pro-Kremlin TV channels (Channel One Russia, Russia-1 and others), broadcasting a poem about the Russo-Ukrainian war written by the singer Monatik, together with war footage and Ukrainian music.

Cable network feed intrusions
On May 1, 2007, a Comcast headend replaced the Disney Channel's program Handy Manny with hard-core pornography for viewers in Lincroft, New Jersey. Comcast stated it was conducting an investigation into the event's cause but did not announce findings to the public.

On February 1, 2009, a second Comcast headend, in Tucson, Arizona, replaced NBC affiliate KVOA's signal with graphic footage from the pornographic video Wild Cherries 5 in portions of Arizona for 28 seconds, interrupting Super Bowl XLIII between the Arizona Cardinals and the Pittsburgh Steelers during the fourth quarter. Comcast stated, "Our initial investigation suggests this was an isolated malicious act. We are conducting a thorough investigation to determine who was behind this." KVOA also announced that it would be investigating the incident. On February 4, 2011, former Cox Cable employee Frank Tanori Gonzalez was arrested by the FBI and local police in relation to the case.

On April 20, 2012, three minutes of a gay pornographic film were broadcast during a morning news show on the Channel Zero-owned independent station CHCH-DT in Hamilton, Ontario, Canada for Shaw Cable viewers. A cable had been cut the night before; while it was being fixed on the morning of the incident, the adult programming was spliced into CHCH's feed.

Satellite feed intrusions
On September 7, 2012, the Disney Channel was once again interrupted, this time on the Dish Network, with six minutes of Lilo & Stitch replaced by a portion of a hardcore pornographic movie.

On March 11, 2016, private satellite dish owners in Israel watching HaAh HaGadol (the Israeli version of Big Brother) on Channel 2 had their show interrupted by propaganda videos from Hamas. The disruption lasted a little over three and a half minutes.

Radio signal intrusions
During the 2020 United States presidential election, the radio station WWEI 105.5 FM, owned by Audacy, Inc. (at the time Entercom) and serving Springfield, Massachusetts, was hijacked and interrupted with a voice that said "don't be a chump, vote for Trump". As of November 4, attempts to reach the FCC for comment had been unsuccessful.

See also
Pirate radio
Pirate television
Culture jamming
Radio jamming

External links
CBS News report on Max Headroom Chicago Takeover at YouTube
Statement made by art group ZTOHOVEN regarding their attack at the public service broadcaster in the Czech Republic
An artistic group interfered with the Czech TV broadcast with fictitious nuclear explosion
Video of the "Telewizja Solidarność" signal intrusions at YouTube
Polish TV pirate

Federal Communications Commission Satellite television Cable television Broadcast engineering Security
1199875
https://en.wikipedia.org/wiki/Helen%20Coonan
Helen Coonan
Helen Lloyd Coonan (born 29 October 1947) is a former Australian politician who was a Senator for New South Wales from 1996 to 2011, representing the Liberal Party. She was a minister in the Howard Government, serving as Minister for Revenue and Assistant Treasurer from 2001 to 2004 and then as Minister for Communications, Information Technology and the Arts from 2004 to 2007.

Early life
Coonan was born in Mangoplah, New South Wales, and attended the Mount Erin convent Catholic boarding school in Wagga Wagga. She later attended the University of Sydney, where she gained a law degree, and worked as a barrister and solicitor before entering politics. She was chair of the board of governors of the Law Foundation of New South Wales (1991–92).

Howard Government (1996–2007)
In 1996, Coonan was elected to the federal Senate as a Liberal senator for New South Wales. She was re-elected in 2001 and appointed Minister for Revenue and Assistant Treasurer (2001–04), making her the first woman to hold an Australian Treasury portfolio since Federation. In that role, Coonan appointed David R. Vos AM as the first Inspector-General of Taxation in August 2003, following the passage of the Inspector-General of Taxation Bill 2002; the office provides an adviser to government in the interests of taxpayers.

Minister for Communications, Information Technology and the Arts
Coonan was appointed Minister for Communications, Information Technology and the Arts, with a seat in the Cabinet, in July 2004. Her portfolio was responsible for overseeing the Australian broadcasting and telecommunications industries as well as the ICT sector and Australia Post. Coonan was also the senior minister responsible for the Arts. She became Deputy Leader of the Government in the Senate in January 2006, the first woman in the Coalition leadership team, and remained in that role until the Coalition was defeated at the polls on 24 November 2007.

Digital television
In July 2004, Australia was in a transitional phase: digital television was broadcast in parallel with the old analog television signals. The stated plan was a full handover to digital by 2008 and the shutdown of all analog television broadcasts. In September 2005, Coonan announced a review of the digital television situation. It was noted that the government had spent $1 billion supporting the change from analog TV to digital TV. The plan to switch off analog TV by 2008 in metropolitan areas and by 2011 in regional markets was regarded as unlikely to be workable because the take-up of digital receivers had been poor. The minister supervised the creation of "Digital Australia", a new federally funded government body for coordinating the transition to digital television. Andrew Townend (former COO of Digital UK) was appointed executive director in 2007.

Digital Radio
In October 2005, Coonan released a plan for a staged rollout of Digital Audio Broadcasting using the "Eureka 147" technology, starting in metropolitan areas. There was no plan to phase out analog radio, but a six-year moratorium on new BSB licenses was announced.

Media Ownership
Coonan was instrumental in promoting legislative changes to the cross-media and foreign ownership laws in the Broadcasting Services Amendment (Media Ownership) Bill 2006. The changes relaxed restrictions against cross-media ownership or control by a single company.

Internet Filtering
As far back as December 2004, Senator Coonan was under pressure from the religious conservative lobby, in particular Senator Harradine. Responding to Harradine, she said:
"In 1999 the government introduced measures to counter the growing problem of offensive material on the Net when it introduced a comprehensive regulatory scheme which banned X-rated and restricted classification, or RC, material. As part of the program, the government also established NetAlert—which Senator Harradine is well aware of—to help children and families use and enjoy the Internet in a safe and responsible way."

Her answer demonstrated that mandatory filtering was under consideration. When Senator Harradine asked about mandatory filtering systems, Coonan replied:

"Under the industry code of practice introduced by the government, all Australian Internet service providers are required to provide content filters for their customers at cost price or below. These tools allow parents to actively control the access their children have to the Internet from the family computer and to have some degree of confidence about the safety of their children online."

In August 2007, Coonan announced a $189 million package for the NetAlert programme, an ISP-level Internet content filtering system designed to filter the Internet for 'safe' use in Australia. It represented a change from her 2006 policy of allowing the ISP to remain a neutral carrier and encouraging parents to install filters on their home PCs; she had commented at the time that "PC-based filtering remains the most effective way of protecting children from offensive Internet content, as well as other threats that are not addressed by Labor's ISP-filtering proposals." Tom Wood, a 16-year-old schoolboy from Melbourne, took only 30 minutes to find a way to bypass the expensive filtering system. An additional filter was made available shortly after, which Wood cracked within 40 minutes.

Opposition (2007–11)
Following the Liberal/National Coalition's defeat at the 2007 federal election and Brendan Nelson's election as Leader of the Liberal Party, Coonan was replaced as Deputy Leader of the Coalition in the Senate by Senator Eric Abetz. She became the Shadow Minister for Human Services, shadowing the Manager of Government Business in the Senate and Minister for Human Services, Senator Joe Ludwig. When Malcolm Turnbull defeated Brendan Nelson in a leadership ballot in September 2008, Coonan was appointed Manager of Opposition Business in the Senate and elevated to Shadow Minister for Foreign Affairs, becoming the first woman to shadow the portfolio. Following Deputy Leader Julie Bishop's resignation from the Shadow Treasury portfolio in February 2009, Coonan was moved into Finance, Competition Policy and Deregulation, which had been vacated by Joe Hockey when he took up the Shadow Treasury role. With the increased responsibility of Finance, Coonan relinquished the position of Manager of Opposition Business in the Senate to Opposition Whip Senator Stephen Parry. On 18 August 2011, Coonan informed the Senate in a speech that she would resign on 22 August.

Post-parliamentary career
Coonan was appointed a non-executive director of gaming company Crown Limited in 2011. She is also a member of the advisory council of investment bank J.P. Morgan & Co., a trustee of the Sydney Opera House Trust, chair of the Conservation Council of the Opera House Trust, co-chair of GRACosway and a non-executive director of Obesity Australia Limited. On 1 November 2018 she commenced her role as chair of the Australian Financial Complaints Authority (AFCA), and in July 2019 she took over as chair of the Minerals Council of Australia.
Coonan was a regular presenter on The Cabinet on Sky News Australia.

External links
Helen Coonan, Senate Biography
Sydney University Alumni magazine article
Helen Coonan on Facebook

1947 births Living people Liberal Party of Australia members of the Parliament of Australia Members of the Cabinet of Australia Members of the Australian Senate Members of the Australian Senate for New South Wales Women members of the Australian Senate Australian barristers University of Sydney alumni People from Wagga Wagga Women government ministers of Australia 21st-century Australian politicians 21st-century Australian women politicians 20th-century Australian politicians Government ministers of Australia 20th-century Australian women politicians
27598453
https://en.wikipedia.org/wiki/Oscilloscope%20types
Oscilloscope types
This is a subdivision of the Oscilloscope article, discussing the various types and models of oscilloscopes in greater detail.

Digital oscilloscopes
While analog devices make use of continually varying voltages, digital devices employ binary numbers which correspond to samples of the voltage. In the case of digital oscilloscopes, an analog-to-digital converter (ADC) is used to change the measured voltages into digital information. Waveforms are taken as a series of samples. The samples are stored, accumulating until enough are taken to describe the waveform, and are then reassembled for display. Digital technology allows the information to be displayed with brightness, clarity, and stability. There are, however, limitations, as with the performance of any oscilloscope. The highest frequency at which the oscilloscope can operate is determined by the analog bandwidth of the front-end components of the instrument and by the sampling rate.

Digital oscilloscopes can be classified into two primary categories: digital storage oscilloscopes and digital sampling oscilloscopes. Newer variants include PC-based oscilloscopes (which attach to a PC for data processing and display) and mixed-signal oscilloscopes (which acquire digital signals in addition to analog voltages).

The digital storage oscilloscope, or DSO for short, is now the preferred type for most industrial applications. Instead of storage-type cathode ray tubes, DSOs use digital memory, which can store data as long as required without degradation. A digital storage oscilloscope also allows complex processing of the signal by high-speed digital signal processing circuits. The vertical input is digitized by an analog-to-digital converter to create a data set that is stored in the memory of a microprocessor. The data set is processed and then sent to the display, which in early DSOs was a cathode ray tube but today is an LCD flat panel. DSOs with color LCD displays are common. The sampled data set can be stored to internal or removable storage, or sent over a LAN or USB for processing or archiving. A screen image can also be saved to internal or removable storage, or sent to a built-in or externally connected printer, without the need for an oscilloscope camera. The oscilloscope's own signal analysis software can extract many useful time-domain features (e.g., rise time, pulse width, amplitude), frequency spectra, histograms and statistics, persistence maps, and a large number of parameters meaningful to engineers in specialized fields such as telecommunications, disk drive analysis and power electronics.

Digital oscilloscopes are limited principally by the performance of the analog input circuitry, the duration of the sample window, and the resolution of the sample rate. When not using equivalent-time sampling, the sampling frequency should be higher than the Nyquist rate, which is double the frequency of the highest-frequency component of the observed signal; otherwise, aliasing occurs.
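To make the Nyquist criterion concrete, the short sketch below (an illustration written for this article, not taken from any instrument's firmware; the sample rate and input frequencies are made-up values) computes the apparent frequency displayed when an input exceeds half the sample rate:

    # Minimal aliasing illustration in Python; all values are hypothetical.
    # A digitizer sampling at fs cannot distinguish an input at f from one at
    # f mod fs, so out-of-band inputs fold back into the 0..fs/2 band.
    def apparent_frequency(f_hz, fs_hz):
        """Return the alias frequency observed when sampling f_hz at fs_hz."""
        f = f_hz % fs_hz              # sampling is blind to whole multiples of fs
        return min(f, fs_hz - f)      # fold into the first Nyquist zone (0..fs/2)

    fs = 100e6                        # 100 MS/s sample rate, so a 50 MHz limit
    for f in (10e6, 49e6, 60e6, 90e6):
        print("%.0f MHz in -> %.0f MHz displayed" % (f / 1e6, apparent_frequency(f, fs) / 1e6))

Run as-is, the 60 MHz and 90 MHz inputs appear as spurious 40 MHz and 10 MHz traces, which is exactly the artifact the Nyquist condition guards against.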
Advantages over the analog oscilloscope include:
- Brighter and bigger display, with color to distinguish multiple traces
- Simple one-shot acquisitions into memory, without the issues that come with storage-type CRTs
- Much more versatile triggers
- No hiding of noise in the phosphor gloom, as happens on analog oscilloscopes
- The input signal is not just converted into a line on the screen; it is available as sample data which can be stored or further processed (i.e. through the measurement and analysis tools that come with the oscilloscope)
- Averaging across consecutive samples or scans, as well as specific HiRes modes which work through oversampling, can lead to higher vertical resolution
- Versatile measurement and analysis functions make it easy to gather all relevant signal properties
- Peak detection to find specific events at long timebase settings on digital oscilloscopes with small memory (less relevant as newer oscilloscopes now come with large memories that keep the sample rate sufficiently high even at very long timebase settings)
- Easy pan and zoom
- Remote control via USB, Ethernet or GPIB

A disadvantage of older digital oscilloscopes is the limited waveform update rate (trigger rate) compared to their analog predecessors, which can make it difficult to spot "glitches" or other rare phenomena, especially on older instruments that have no persistence mode. However, thanks to improvements in waveform processing, newer digital oscilloscopes can reach trigger rates in excess of 1 million updates per second, more than the roughly 600,000 triggers per second the best analog oscilloscopes were able to achieve. Newer digital oscilloscopes also come with analog persistence modes, which replicate the afterglow of an analog oscilloscope's phosphor CRT.

Digital sampling oscilloscopes
Digital sampling oscilloscopes operate on the same principle as analog sampling oscilloscopes and, like their analog counterparts, are of great use when analyzing high-frequency signals; that is, repetitive signals whose frequencies are higher than the oscilloscope's sampling rate. For measuring repetitive signals, this type once offered bandwidth and high-speed timing up to ten times greater than any real-time oscilloscope. A real-time oscilloscope, which also used to be called a "single-shot" scope, captures an entire waveform on each trigger event. This requires the scope to capture a large number of data points in one continuous record. A sequential equivalent-time sampling oscilloscope, sometimes simply called a "sampling scope", measures the input signal only once per trigger. The next time the scope is triggered, a small delay is added and another sample is taken. Thus a large number of trigger events must occur in order to collect enough samples to build a picture of the waveform. The measurement bandwidth is determined by the frequency response of the sampler, which currently can extend beyond 90 GHz.

An alternative to sequential equivalent-time sampling is called random equivalent-time sampling. Samples are synchronised not with trigger events but with the scope's internal sampling clock, which causes them to occur at apparently random times relative to the trigger event. The scope measures the time interval between the trigger and each sample, and uses this to locate the sample correctly on the x-axis. This process continues until enough samples have been collected to build up a picture of the waveform. The advantage of this technique over sequential equivalent-time sampling is that the scope can collect data from before the trigger event as well as after it, in a similar way to the pre-trigger function of most real-time digital storage scopes. Random equivalent-time sampling can be integrated into a standard DSO without requiring special sampling hardware, but has the disadvantage of poorer timing precision than the sequential sampling method; a simplified software simulation of the idea is sketched below.
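The following sketch (an illustrative simulation written for this article, with made-up signal and clock frequencies, not code from any oscilloscope) shows why equivalent-time sampling works only for repetitive signals: a sampler far slower than the signal records each sample's position within the waveform period, and sorting by that position reconstructs one period.

    # Simulation of the equivalent-time sampling principle in Python.
    # A 50 MHz repetitive signal is sampled at only 1.23 MS/s; because each
    # sample's position within the signal period is recorded, sorting the
    # samples by that position rebuilds one full 20 ns period.
    import math

    F_SIG = 50e6                       # repetitive input signal (20 ns period)
    T_SAMPLE = 1 / 1.23e6              # slow sample clock, asynchronous to F_SIG
    PERIOD = 1 / F_SIG

    samples = []
    for n in range(500):
        t = n * T_SAMPLE                          # absolute sampling instant
        delay = t % PERIOD                        # position within the period
        samples.append((delay, math.sin(2 * math.pi * F_SIG * t)))

    samples.sort()                                # place samples on the x-axis
    for delay, value in samples[::100]:           # print a few reconstructed points
        print("%6.2f ns  %+.3f" % (delay * 1e9, value))

Because the sample clock is asynchronous to the signal, the recorded delays scatter across the whole period, much as the randomly timed samples of a random equivalent-time instrument do; a one-shot transient, by contrast, would contribute only a single point and could not be reconstructed this way.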
However, due to progress in ADC technology, which has led to real-time oscilloscopes with bandwidths over 100 GHz, the demand for digital sampling oscilloscopes has been shrinking, as has the need to integrate equivalent-time sampling into real-time oscilloscopes.

Handheld oscilloscopes
Handheld oscilloscopes are useful for many test and field service applications. Today, a hand-held oscilloscope is usually a real-time oscilloscope, using a monochrome or color LCD display. Typically, a hand-held oscilloscope has one or two analog input channels, but four-input-channel versions are also available. Some instruments combine the functions of a digital multimeter with the oscilloscope. These are usually lightweight and have good accuracy.

PC-based oscilloscopes
A PC-based oscilloscope is a type of digital oscilloscope which relies on a standard PC platform for waveform display and instrument control. In general, there are two types of PC-based oscilloscopes:
- Standalone oscilloscopes which contain an internal PC platform (PC mainboard) – common with upper mid-range and high-end oscilloscopes
- External oscilloscopes which connect via USB or Ethernet to a separate PC (desktop or laptop)

At the end of the 1990s, Nicolet and HP introduced the first standalone PC-based oscilloscopes, in which the "oscilloscope" part was a specialized signal acquisition system: an electrical interface providing isolation and automatic gain control, high-speed analog-to-digital converters, sample memory and on-board digital signal processors (DSPs). The PC part ran Microsoft Windows as the operating system, with an oscilloscope application on top which displayed the waveform data and was used to control the instrument. Since then, the high-end lines of standalone oscilloscopes of all four major oscilloscope manufacturers (HP/Agilent/Keysight, LeCroy, Tektronix, Rohde & Schwarz) have been based on a PC platform.

The other group of PC-based oscilloscopes are the external oscilloscopes, where the acquisition system is physically separate from the PC platform. Depending on its exact hardware configuration, the external hardware could also be described as a digitizer, a data logger or part of a specialized automatic control system. The separate PC provides the display, control interface, disc storage, networking and often the electrical power for the acquisition hardware.

The external oscilloscope can transfer data to the computer in two main ways: streaming and block mode (both are sketched in pseudocode below). In streaming mode the data is transferred to the PC in a continuous flow without any loss of data; the way in which the oscilloscope is connected to the PC (e.g. Ethernet, USB, etc.) dictates the maximum achievable speed, and thereby the frequency and resolution, of this method. Block mode uses the on-board memory of the external oscilloscope to collect a block of data, which is then transferred to the PC after the block has been recorded. The acquisition hardware then resets and records another block of data. This process happens very quickly, but the time taken will vary according to the size of the block of data and the speed at which it can be transferred. This method enables a much higher sampling speed, but in many cases the hardware will not record data whilst it is transferring the existing block.
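The contrast between the two modes can be expressed as a rough Python sketch. Everything here is hypothetical: the driver object and its read_stream, arm, wait_until_complete and download_block methods stand in for whatever API a particular vendor's external oscilloscope actually exposes.

    # Hedged sketch of the two transfer modes; 'driver' and all of its
    # methods are hypothetical placeholders, not a real vendor library.

    def acquire_streaming(driver, total_samples, chunk=4096):
        # Streaming mode: a gap-free, continuous flow of samples whose rate
        # is capped by the PC link (USB, Ethernet, ...).
        data = []
        while len(data) < total_samples:
            data.extend(driver.read_stream(chunk))   # blocks until a chunk arrives
        return data

    def acquire_block(driver, block_samples):
        # Block mode: capture into on-board memory at the full digitizer rate,
        # then download; the device may be blind to the input during download.
        driver.arm(block_samples)          # start capture to on-board memory
        driver.wait_until_complete()       # capture proceeds at full sample rate
        return driver.download_block()     # slower transfer back to the PC

In this framing, streaming trades top sample rate for continuity, while block mode trades continuity (dead time between blocks) for the full rate of the on-board digitizer.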
The advantages of standalone PC-based oscilloscopes include:
- Easy exporting of data to standard PC software, such as spreadsheets and word processors, which can run on the oscilloscope itself
- The ability to run analysis tools such as numerical analysis and/or signal analysis software directly on the oscilloscope
- The ability to run automation software to perform automatic tests
- The ability to easily control the oscilloscope from a remote location via networking

The advantages of external oscilloscopes are the same as for standalone PC-based oscilloscopes, plus:
- Costs are often lower than for a comparable stand-alone oscilloscope, especially if the user already owns a suitable PC or laptop
- Standalone PCs and laptops typically have large high-resolution color displays, which can be easier to read than the smaller displays found on conventional oscilloscopes
- Portability when used with a laptop PC
- Some external oscilloscopes are much smaller physically than even handheld oscilloscopes

However, PC-based oscilloscopes, standalone or external, also have some disadvantages, which include:
- Power-supply and electromagnetic noise from PC circuits, which requires careful and extensive shielding to obtain good low-level signal resolution
- For external oscilloscopes, the need for the owner to install oscilloscope software on the PC, which may not be compatible with the current release of the PC operating system
- Time for the PC platform to boot, compared with the almost instant start-up of a standalone oscilloscope based on an embedded platform (although every oscilloscope requires a warm-up period to reach specification compliance, so this should rarely be an issue)

Mixed-signal oscilloscopes
A mixed-signal oscilloscope (MSO) combines all the measurement capabilities and the use model of a digital storage oscilloscope with some of the measurement capabilities of a logic analyzer. Analog and digital signals are acquired with a single time base and viewed on a single display, and any combination of these signals can be used to trigger the oscilloscope. MSOs typically lack the advanced digital measurement capabilities and the large number of digital acquisition channels of standalone logic analyzers. Typical mixed-signal measurement uses include the characterization and debugging of hybrid analog/digital circuits, such as embedded systems, analog-to-digital converters (ADCs), digital-to-analog converters (DACs) and control systems.

Cathode-ray oscilloscope
The earliest and simplest type of oscilloscope consisted of a cathode ray tube, a vertical amplifier, a timebase, a horizontal amplifier and a power supply. These are now called "analog" oscilloscopes to distinguish them from the "digital" oscilloscopes that became common in the 1990s and 2000s. Before the introduction of the CRO in its current form, the cathode ray tube had already been in use as a measuring device.

The cathode ray tube is an evacuated glass envelope, similar to that in a black-and-white television set, with its flat face covered in a fluorescent material (the phosphor). The screen is typically less than 20 cm in diameter, much smaller than the one in a television set. Older CROs had round screens or faceplates, while newer CRTs in better CROs have rectangular faceplates. In the neck of the tube is an electron gun, which is a small heated metal cylinder with a flat end coated with electron-emitting oxides.
Close to it is a much-larger-diameter cylinder carrying a disc at its cathode end with a round hole in it; it is called a "grid" (G1), by historic analogy with amplifier vacuum-tube grids. A small negative grid potential (referred to the cathode) is used to block electrons from passing through the hole when the electron beam needs to be turned off, as during sweep retrace or when no trigger events occur. When G1 becomes less negative with respect to the cathode, however, another cylindrical electrode designated G2, which sits hundreds of volts positive with respect to the cathode, attracts electrons through the hole. Their trajectories converge as they pass through the hole, creating a quite-small-diameter "pinch" called the crossover. Following electrodes ("grids"), acting as electrostatic lenses, focus this crossover onto the screen; the spot is an image of the crossover. Typically, the cathode runs at roughly −2 kV, and various methods are used to correspondingly offset the G1 voltage. Proceeding along the electron gun, the beam passes through the imaging lenses and first anode, emerging with an energy in electron-volts corresponding to the cathode's accelerating voltage. The beam passes through one set of deflection plates, then the other, where it is deflected as required toward the phosphor screen. The average voltage of the deflection plates is relatively close to ground, because they have to be directly connected to the vertical output stage.

By itself, once the beam leaves the deflection region, it can produce a usefully bright trace. However, for higher-bandwidth CROs where the trace may move more rapidly across the phosphor screen, a positive post-deflection acceleration ("PDA") voltage of over 10,000 volts is often used, increasing the energy (speed) of the electrons that strike the phosphor. The kinetic energy of the electrons is converted by the phosphor into visible light at the point of impact.

When switched on, a CRT normally displays a single bright dot in the center of the screen, but the dot can be moved about electrostatically or magnetically. The CRT in an oscilloscope always uses electrostatic deflection. Ordinary electrostatic deflection plates can typically move the beam only about 15 degrees off-axis, which means that oscilloscope CRTs have long, narrow funnels and, for their screen size, are usually quite long. It is the CRT length that makes CROs "deep" from front to back. Modern flat-panel oscilloscopes have no need for such rather extreme dimensions; their shapes tend to be more like a rectangular lunchbox.

Between the electron gun and the screen are two opposed pairs of metal plates called the deflection plates. The vertical amplifier generates a potential difference across one pair of plates, giving rise to a vertical electric field through which the electron beam passes. When the plate potentials are the same, the beam is not deflected. When the top plate is positive with respect to the bottom plate, the beam is deflected upwards; when the field is reversed, the beam is deflected downwards. The horizontal amplifier does a similar job with the other pair of deflection plates, causing the beam to move left or right. This deflection system is called electrostatic deflection, and is different from the electromagnetic deflection system used in television tubes. In comparison to magnetic deflection, electrostatic deflection can more readily follow random and fast changes in potential, but is limited to small deflection angles. Common representations of deflection plates are misleading.
For one, the plates for one deflection axis are closer to the screen than the plates for the other. Plates that are closer together provide better sensitivity, but they also need to extend far enough along the CRT's axis to obtain adequate sensitivity. (The longer the time a given electron spends in the field, the farther it is deflected.) However, closely spaced long plates would cause the beam to contact them before full-amplitude deflection occurs, so the compromise shape has them relatively close together toward the cathode and flared apart in a shallow vee toward the screen. They are not flat in any but quite-old CRTs!

The timebase is an electronic circuit that generates a ramp voltage. This is a voltage that changes continuously and linearly with time. When it reaches a predefined value the ramp is reset and settles to its starting value. When a trigger event is recognized, provided the reset process (holdoff) is complete, the ramp starts again. The timebase voltage usually drives the horizontal amplifier. Its effect is to sweep the screen end of the electron beam at a constant speed from left to right across the screen, then blank the beam and return its deflection voltages to the left, so to speak, in time to begin the next sweep. Typical sweep circuits can take significant time to reset; in some CROs, fast sweeps required more time to retrace than to sweep.

Meanwhile, the vertical amplifier is driven by an external voltage (the vertical input) that is taken from the circuit or experiment being measured. The amplifier has a very high input impedance, typically one megohm, so that it draws only a tiny current from the signal source. Attenuator probes reduce the current drawn even further. The amplifier drives the vertical deflection plates with a voltage that is proportional to the vertical input. Because the electrons have already been accelerated by roughly 2 kV, this amplifier also has to deliver almost a hundred volts, and this with a very wide bandwidth. The gain of the vertical amplifier can be adjusted to suit the amplitude of the input voltage. A positive input voltage bends the electron beam upwards, and a negative voltage bends it downwards, so that the vertical deflection at any part of the trace shows the value of the input at that time. The response of any oscilloscope is much faster than that of mechanical measuring devices such as the multimeter, where the inertia of the pointer (and perhaps damping) slows down its response to the input.

Observing high-speed signals, especially non-repetitive signals, with a conventional CRO is difficult: a single fast sweep leaves only a faint trace on the phosphor, and an unstable or changing triggering threshold makes it hard to "freeze" a repetitive waveform on the screen. Viewing such faint traces often requires the room to be darkened or a special viewing hood to be placed over the face of the display tube. To aid in viewing dim traces, special oscilloscopes have borrowed from night-vision technology, employing a microchannel-plate electron multiplier behind the tube face to amplify faint beam currents.

Although a CRO allows one to view a signal, in its basic form it has no means of recording that signal on paper for the purpose of documentation. Therefore, special oscilloscope cameras were developed to photograph the screen directly. Early cameras used roll or plate film, while in the 1970s Polaroid instant cameras became popular. A P11 CRT phosphor (visually blue) was especially effective in exposing film. Cameras (sometimes using single sweeps) were used to capture faint traces.
The power supply is an important component of the oscilloscope. It provides low voltages to power the cathode heater in the tube (isolated for high voltage!) and the vertical and horizontal amplifiers, as well as the trigger and sweep circuits. Higher voltages are needed to drive the electrostatic deflection plates, which means that the output stage of the vertical deflection amplifier has to develop large signal swings. These voltages must be very stable, and amplifier gain must be correspondingly stable; any significant variation will cause errors in the size of the trace, making the oscilloscope inaccurate.

Later analog oscilloscopes added digital processing to the standard design. The same basic architecture — cathode ray tube, vertical and horizontal amplifiers — was retained, but the electron beam was controlled by digital circuitry that could display graphics and text mixed with the analog waveforms. Display time for these was interleaved — multiplexed — with waveform display, in much the same way that a dual- or multi-trace oscilloscope displays its channels. The extra features that this system provides include:
- on-screen display of amplifier and timebase settings;
- voltage cursors — adjustable horizontal lines with voltage display;
- time cursors — adjustable vertical lines with time display;
- on-screen menus for trigger settings and other functions;
- automatic measurement of the voltage and frequency of a displayed trace.

Dual-beam oscilloscope
A dual-beam oscilloscope was a type of oscilloscope once used to compare one signal with another. Two beams were produced in a special type of CRT. Unlike an ordinary "dual-trace" oscilloscope (which time-shared a single electron beam, thus losing about 50% of each signal), a dual-beam oscilloscope simultaneously produced two separate electron beams, capturing the entirety of both signals. One type (Cossor, UK) had a beam-splitter plate in its CRT, with single-ended vertical deflection following the splitter. Other dual-beam oscilloscopes had two complete electron guns, requiring tight control of axial (rotational) mechanical alignment in manufacturing the CRT. In the latter type, two independent pairs of vertical plates deflect the beams: the vertical plates for channel A had no effect on channel B's beam, and vice versa.

On some dual-beam oscilloscopes the time base, horizontal plates and horizontal amplifier were common to both beams (the beam-splitter CRT worked this way). More elaborate oscilloscopes like the Tektronix 556 and 7844 could employ two independent time bases with two sets of horizontal plates and horizontal amplifiers; one could thus look at a very fast signal on one beam and a slow signal on the other. Most multichannel oscilloscopes do not have multiple electron beams. Instead, they display only one trace at a time, switching the later stages of the vertical amplifier between one channel and the other either on alternate sweeps (ALT mode) or many times per sweep (CHOP mode). Very few true dual-beam oscilloscopes were built; with the advent of digital signal capture they became obsolete, as it was then possible to display two truly simultaneous signals from memory using either the ALT or CHOP display technique, or possibly a raster display mode.
Analog storage oscilloscope
Trace storage is an extra feature available on some analog oscilloscopes; they used direct-view storage CRTs. Storage allows a trace pattern that would normally decay in a fraction of a second to remain on the screen for several minutes or longer. An electrical circuit can then be deliberately activated to store and, later, erase the trace on the screen.

The storage is accomplished using the principle of secondary emission. When the ordinary writing electron beam passes a point on the phosphor surface, not only does it momentarily cause the phosphor to illuminate, but the kinetic energy of the beam knocks other electrons loose from the phosphor surface, which can leave a net positive charge. Storage oscilloscopes therefore provide one or more secondary electron guns (called the "flood guns") that emit a steady flood of low-energy electrons traveling towards the phosphor screen. Flood guns cover the entire screen, ideally uniformly. The electrons from the flood guns are more strongly drawn to the areas of the phosphor screen where the writing gun has left a net positive charge; in this way, the flood-gun electrons re-illuminate the phosphor in these positively charged areas. If the energy of the flood-gun electrons is properly balanced, each impinging flood-gun electron knocks out one secondary electron from the phosphor screen, preserving the net positive charge in the illuminated areas. In this way, the image originally written by the writing gun can be maintained for a long time — many seconds to a few minutes. Eventually, small imbalances in the secondary emission ratio cause the entire screen to "fade positive" (light up) or cause the originally written trace to "fade negative" (extinguish). It is these imbalances that limit the ultimate storage time possible.

Storage oscilloscopes (and large-screen storage CRT displays) of this type, with storage at the phosphor, were made by Tektronix. Other companies, notably Hughes, had earlier made storage oscilloscopes with a more elaborate and costly internal storage structure. Some oscilloscopes used a strictly binary (on/off) form of storage known as "bistable storage". Others permitted a constant series of short, incomplete erasure cycles which created the impression of a phosphor with "variable persistence". Certain oscilloscopes also allowed the partial or complete shutdown of the flood guns, allowing the preservation (albeit invisibly) of the latent stored image for later viewing. (Fading positive or fading negative only occurs when the flood guns are "on"; with the flood guns off, only leakage of the charges on the phosphor screen degrades the stored image.)

Analog sampling oscilloscope
The principle of sampling was developed during the 1930s at Bell Laboratories by Nyquist, after whom the sampling theorem is named. The first sampling oscilloscope was, however, developed in the late 1950s at the Atomic Energy Research Establishment at Harwell in England by G.B.B. Chaplin, A.R. Owens and A.J. Cole ["A Sensitive Transistor Oscillograph With DC to 300 Mc/s Response", Proc I.E.E. (London) Vol. 106, Part B. Suppl., No. 16, 1959].

The first sampling oscilloscope was an analog instrument, originally developed as a front-end unit for a conventional oscilloscope. The need for this instrument grew out of the requirement of nuclear scientists at Harwell to capture the waveform of very fast repetitive pulses.
The state-of-the-art oscilloscopes of the time, with bandwidths of typically 20 MHz, were not able to do this, and the 300 MHz effective bandwidth of the analog sampling oscilloscope represented a considerable advance. A short series of these "front-ends" was made at Harwell and found much use, and Chaplin et al. patented the invention. Commercial exploitation of the patent was ultimately undertaken by the Hewlett-Packard Company (later Agilent Technologies).

Sampling oscilloscopes achieve their large bandwidths by not taking the entire signal at a time. Instead, only a sample of the signal is taken on each repetition, and the samples are then assembled to create the waveform. This method can only work for repetitive signals, not transient events. The idea of sampling can be thought of as a stroboscopic technique: when using a strobe light, only pieces of the motion are seen, but when enough of these images are taken, the overall motion can be captured.

Related instruments
A large number of instruments used in a variety of technical fields are really oscilloscopes with inputs, calibration, controls, display, etc., specialized and optimized for a particular application. In some cases additional functions, such as a signal generator, are built into the instrument to facilitate measurements that would otherwise require one or more additional instruments.

The waveform monitor in television broadcast engineering is very close to a standard oscilloscope, but it includes triggering circuits and controls that allow a stable display of a composite video frame, field, or even a selected line out of a field. Robert Hartwig describes the waveform monitor as "providing a graphic display of the black-and-white portion of the picture"; the black-and-white portion of the video signal is called the luminance. The waveform monitor's display of black versus white levels allows the engineer to troubleshoot the quality of the picture and be certain that it is within the required standards. For convenience, the vertical scale of the waveform monitor is calibrated in IRE units.

See also
Mechanical oscilloscopes

Electronic test equipment
1693594
https://en.wikipedia.org/wiki/AutoKey
AutoKey
AutoKey is a free, open-source scripting application for Linux. AutoKey allows the user to define hotkeys and trigger phrases which expand to predefined text, automating frequent or repetitive tasks such as correcting typographical errors or common spelling mistakes and inserting boilerplate sections of text.

Hotkeys and trigger phrases may also be configured to run scripts which use the full power of Python 3, generating window, keyboard, and mouse events through the provided AutoKey API (a minimal example script is shown at the end of this entry). Most applications respond to these events as if the user were actually typing on the keyboard and using the mouse. This allows AutoKey scripts to make such applications perform almost any sequence of actions a user could carry out manually, at the press of a single hotkey. Since AutoKey scripts are written in full Python, they can also interact with the rest of the system to perform tasks which would be difficult for a user to do manually, such as reading and writing files, retrieving system status information, or performing calculations and making decisions. AutoKey also provides simple dialog management tools so scripts can present information to and interact with the user.

History
In 2008, Chris Dekter wrote the original version of AutoKey in Python 2 for the Linux operating system. The final release of the original branch is deprecated. Guoci later released the first version for Python 3. Phrase expansion was inspired by the commercial Windows software PhraseExpress.

AutoKey is currently available in packaged form for users of Debian, Arch, Gentoo, and Fedora, as well as for some of their derivative distributions such as Ubuntu, Mint, and Manjaro. The software is licensed under the GNU General Public License (GPLv3).

See also
AutoHotkey (for Windows only)
AutoIt (for Windows only)
Automator (for Macintosh only)
Bookmarklet
iMacros for Firefox
Xnee, a program that can be used to record and replay tests
SikuliX

External links
Linux Journal Review
Python 3 AutoKey on GitHub
AutoKey wiki on GitHub
AutoKey current version in .deb packaging
AutoKey Support Forum
Article in Linux Uprising

Automation software Free system software Free software programmed in Python
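As a concrete illustration of the scripting described above, the following short script could be bound to a hotkey in AutoKey. The keyboard and window objects are injected by AutoKey's scripting API; the window title tested for and the text typed are invented for this example, and the script runs only inside AutoKey, not as a standalone program.

    # Example AutoKey script (bound to a hotkey via the AutoKey GUI).
    # 'keyboard' and 'window' are objects AutoKey injects into every script;
    # the "Compose" title check and the inserted text are made up for the example.
    import datetime

    stamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
    title = window.get_active_title()        # title of the window with focus

    if "Compose" in title:                   # e.g. an email-composition window
        keyboard.send_keys("Regards,\nA. User\nSent %s\n" % stamp)
    else:
        keyboard.send_keys("[%s] " % stamp)  # plain timestamp anywhere else

Because the script is ordinary Python 3, the same hotkey could just as easily read a file, query system status, or branch on any other condition before deciding what to type.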
20227129
https://en.wikipedia.org/wiki/Oklahoma%20Office%20of%20Management%20and%20Enterprise%20Services
Oklahoma Office of Management and Enterprise Services
The Oklahoma Office of Management and Enterprise Services (OMES) is a government agency which manages and supports the basic functioning of the government of Oklahoma. Under the leadership of the Oklahoma Secretary of Agency Accountability, OMES provides financial, property, purchasing, human resources and information technology services to all state agencies, and assists the Governor of Oklahoma in developing the annual state budget. Originally called the Office of State Finance, the agency was renamed to its current name in 2012. The Office is headed by a director who is appointed by the governor with the approval of the Oklahoma Senate, to serve at the pleasure of the governor. Within the office is the state's chief information officer, who is also appointed by the governor and who oversees the state's information technology systems. The current OMES director is Steve Harpe, who was appointed by Governor Kevin Stitt in 2020. Harpe replaced John Budd, the state’s Chief Operating Officer, who had been serving in a dual capacity. The Office of Management and Enterprise Services was created in 2012 during the term of Governor Mary Fallin. History The Office of State Finance was created in 1947 by Governor of Oklahoma Robert S. Kerr to replace the State Budget Office. In April 2010, Governor Brad Henry appointed Oklahoma's first chief information officer following legislation passed in the last session of 2009 modernizing Oklahoma's state government information technology system. Part of the CIO's responsibilities is to identify synergies made possible by reducing duplicate systems and centralizing IT infrastructure. The Office of State Finance was significantly reformed in 2011 when Governor Mary Fallin signed the Government Administrative Process Consolidation and Reorganization Reform Act of 2011. Pursuant to that Act, several agencies were consolidated into OSF, including the Oklahoma Department of Central Services, the Oklahoma Office of Personnel Management, the Oklahoma State Employees Benefits Council, and the Oklahoma State and Education Employees Group Insurance Board. All the powers and duties of those agencies were vested in the director of OSF, and the individual agencies became subsidiary divisions of the OSF, which was renamed the Office of Management and Enterprise Services. External links Official Oklahoma Office of Management and Enterprise Services website Link to House Bill 1304 Link to House Bill 2140 Consolidation Report State CIO Salaries Information Recovery.org - Oklahoma Office of Management and Enterprise Services Award Information Capgemini Assessment of Oklahoma Capgemini Information Finance, Office of State Government agencies established in 2011
7675393
https://en.wikipedia.org/wiki/Aelita%20Software%20Corporation
Aelita Software Corporation
Aelita Software Corporation was a software company that provided enterprise network management tools for improved security, usability, and control. Aelita developed solutions for Microsoft Windows 2000 and 2003 migration and domain reconfiguration, security analysis, directory management, systems monitoring, and configuration recovery. Examples of Aelita Software products include Aelita Domain Migration Wizard, Aelita Exchange Migration Wizard, Aelita InTrust, Aelita Archive Manager, and others. The company was acquired by Quest Software in March 2004 for about $115 million. Aelita Software products and technology have been incorporated into Quest's Windows Management products. Much of the management from Aelita transitioned to Veeam, which launched in 2006. In 2012, Quest Software was acquired by Dell for $2.36 billion to form Dell Software. In June 2016, Dell announced the sale of their software division, including the Quest business, to Francisco Partners and Elliott Management Corporation. On November 1, 2016, the sale of Dell Software to Francisco Partners and Elliott Management was completed and the company was re-launched as Quest Software. References External links Quest Software website Software companies based in Ohio Companies based in Dublin, Ohio Software companies established in 1998 Companies established in 1998 1998 establishments in the United States 1998 establishments in Ohio Quest Software 2004 mergers and acquisitions Software companies of the United States
136764
https://en.wikipedia.org/wiki/IBM%20Blue%20Gene
IBM Blue Gene
Blue Gene is an IBM project aimed at designing supercomputers that can reach operating speeds in the petaFLOPS (PFLOPS) range, with low power consumption. The project created three generations of supercomputers, Blue Gene/L, Blue Gene/P, and Blue Gene/Q. During their deployment, Blue Gene systems often led the TOP500 and Green500 rankings of the most powerful and most power-efficient supercomputers, respectively. Blue Gene systems have also consistently scored top positions in the Graph500 list. The project was awarded the 2009 National Medal of Technology and Innovation. As of 2015, IBM seems to have ended the development of the Blue Gene family, though no public announcement has been made. IBM's continuing efforts in the supercomputing arena appear to be concentrated around OpenPOWER, using accelerators such as FPGAs and GPUs to counter the end of Moore's law. History In December 1999, IBM announced a US$100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding. The project had two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. Major areas of investigation included: how to use this novel platform to effectively meet its scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets at a reasonable cost, through novel machine architectures. The initial design for Blue Gene was based on an early version of the Cyclops64 architecture, designed by Monty Denneau. The initial research and development work was pursued at IBM T.J. Watson Research Center and led by William R. Pulleyblank. At IBM, Alan Gara started working on an extension of the QCDOC architecture into a more general-purpose supercomputer: the 4D nearest-neighbor interconnection network was replaced by a network supporting routing of messages from any node to any other, and a parallel I/O subsystem was added. DOE started funding the development of this system and it became known as Blue Gene/L (L for Light); development of the original Blue Gene system continued under the name Blue Gene/C (C for Cyclops) and, later, Cyclops64. In November 2004, a 16-rack system, with each rack holding 1,024 compute nodes, achieved first place in the TOP500 list, with a Linpack performance of 70.72 TFLOPS. It thereby overtook NEC's Earth Simulator, which had held the title of the fastest computer in the world since 2002. From 2004 through 2007 the Blue Gene/L installation at LLNL gradually expanded to 104 racks, achieving 478 TFLOPS Linpack and 596 TFLOPS peak. The LLNL BlueGene/L installation held the first position in the TOP500 list for 3.5 years, until in June 2008 it was overtaken by IBM's Cell-based Roadrunner system at Los Alamos National Laboratory, which was the first system to surpass the 1 PetaFLOPS mark. The system was built at IBM's plant in Rochester, Minnesota. While the LLNL installation was the largest Blue Gene/L installation, many smaller installations followed. In November 2006, there were 27 computers on the TOP500 list using the Blue Gene/L architecture. All these computers were listed as having an architecture of eServer Blue Gene Solution. For example, three racks of Blue Gene/L were housed at the San Diego Supercomputer Center.
While the TOP500 measures performance on a single benchmark application, Linpack, Blue Gene/L also set records for performance on a wider set of applications. Blue Gene/L was the first supercomputer ever to run over 100 TFLOPS sustained on a real-world application, namely a three-dimensional molecular dynamics code (ddcMD), simulating solidification (nucleation and growth processes) of molten metal under high pressure and temperature conditions. This achievement won the 2005 Gordon Bell Prize. In June 2006, NNSA and IBM announced that Blue Gene/L achieved 207.3 TFLOPS on a quantum chemical application (Qbox). At Supercomputing 2006, Blue Gene/L was awarded the winning prize in all classes of the HPC Challenge awards. In 2007, a team from the IBM Almaden Research Center and the University of Nevada ran an artificial neural network almost half as complex as the brain of a mouse for the equivalent of a second (the network was run at 1/10 of normal speed for 10 seconds). The name The name Blue Gene comes from what it was originally designed to do: help biologists understand the processes of protein folding and gene development. "Blue" is a traditional moniker that IBM uses for many of its products and the company itself. The original Blue Gene design was renamed "Blue Gene/C" and eventually Cyclops64. The "L" in Blue Gene/L comes from "Light" as that design's original name was "Blue Light". The "P" version was designed to be a petascale design. "Q" is just the letter after "P". There is no Blue Gene/R. Major features The Blue Gene/L supercomputer was unique in the following aspects: Trading the speed of processors for lower power consumption. Blue Gene/L used low-frequency, low-power embedded PowerPC cores with floating point accelerators. While the performance of each chip was relatively low, the system could achieve better power efficiency for applications that could use large numbers of nodes. Dual processors per node with two working modes: co-processor mode where one processor handles computation and the other handles communication; and virtual-node mode, where both processors are available to run user code, but the processors share both the computation and the communication load. System-on-a-chip design. Components were embedded on a single chip for each node, with the exception of 512 MB external DRAM. A large number of nodes (scalable in increments of 1024 up to at least 65,536) Three-dimensional torus interconnect with auxiliary networks for global communications (broadcast and reductions), I/O, and management Lightweight OS per node for minimum system overhead (system noise). Architecture The Blue Gene/L architecture was an evolution of the QCDSP and QCDOC architectures. Each Blue Gene/L Compute or I/O node was a single ASIC with associated DRAM memory chips. The ASIC integrated two 700 MHz PowerPC 440 embedded processors, each with a double-pipeline-double-precision Floating Point Unit (FPU), a cache sub-system with built-in DRAM controller and the logic to support multiple communication sub-systems. The dual FPUs gave each Blue Gene/L node a theoretical peak performance of 5.6 GFLOPS (gigaFLOPS). The two CPUs were not cache coherent with one another. Compute nodes were packaged two per compute card, with 16 compute cards plus up to 2 I/O nodes per node board. There were 32 node boards per cabinet/rack. By integrating all essential sub-systems on a single chip and using low-power logic, each Compute or I/O node dissipated very little power (about 17 watts, including DRAMs).
This allowed aggressive packaging of up to 1024 compute nodes, plus additional I/O nodes, in a standard 19-inch rack, within reasonable limits of electrical power supply and air cooling. The performance metrics, in terms of FLOPS per watt, FLOPS per m² of floorspace and FLOPS per unit cost, allowed scaling up to very high performance. With so many nodes, component failures were inevitable. The system was able to electrically isolate faulty components, down to a granularity of half a rack (512 compute nodes), to allow the machine to continue to run. Each Blue Gene/L node was attached to three parallel communications networks: a 3D toroidal network for peer-to-peer communication between compute nodes, a collective network for collective communication (broadcasts and reduce operations), and a global interrupt network for fast barriers. The I/O nodes, which ran the Linux operating system, provided communication to storage and external hosts via an Ethernet network. The I/O nodes handled filesystem operations on behalf of the compute nodes. Finally, a separate and private Ethernet network provided access to any node for configuration, booting and diagnostics. To allow multiple programs to run concurrently, a Blue Gene/L system could be partitioned into electronically isolated sets of nodes. The number of nodes in a partition had to be a positive integer power of 2, with at least 2^5 = 32 nodes. To run a program on Blue Gene/L, a partition of the computer first had to be reserved. The program was then loaded and run on all the nodes within the partition, and no other program could access nodes within the partition while it was in use. Upon completion, the partition nodes were released for future programs to use. Blue Gene/L compute nodes used a minimal operating system supporting a single user program. Only a subset of POSIX calls was supported, and only one process could run at a time per node in co-processor mode, or one process per CPU in virtual-node mode. Programmers needed to implement green threads in order to simulate local concurrency. Application development was usually performed in C, C++, or Fortran using MPI for communication. However, some scripting languages such as Ruby and Python have been ported to the compute nodes. IBM has published BlueMatter, the application developed to exercise Blue Gene/L, as open source. This serves to document how the torus and collective interfaces were used by applications, and may serve as a base for others to exercise the current generation of supercomputers. Blue Gene/P In June 2007, IBM unveiled Blue Gene/P, the second generation of the Blue Gene series of supercomputers, designed through a collaboration that included IBM, LLNL, and Argonne National Laboratory's Leadership Computing Facility. Design The design of Blue Gene/P is a technology evolution from Blue Gene/L. Each Blue Gene/P Compute chip contains four PowerPC 450 processor cores, running at 850 MHz. The cores are cache coherent and the chip can operate as a 4-way symmetric multiprocessor (SMP). The memory subsystem on the chip consists of small private L2 caches, a central shared 8 MB L3 cache, and dual DDR2 memory controllers. The chip also integrates the logic for node-to-node communication, using the same network topologies as Blue Gene/L, but at more than twice the bandwidth. A compute card contains a Blue Gene/P chip with 2 or 4 GB DRAM, comprising a "compute node". A single compute node has a peak performance of 13.6 GFLOPS; the arithmetic behind these node-level peak figures is sketched below.
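The node-level peak figures quoted for Blue Gene/L and Blue Gene/P follow directly from core count, clock rate, and floating-point operations per cycle. A minimal sketch of that arithmetic, assuming (as is conventional for such peak numbers) one fused multiply-add, i.e. 2 flops, per FPU pipeline per cycle:

    def peak_gflops(cores, clock_hz, flops_per_cycle):
        # peak = cores x clock x flops issued per cycle per core
        return cores * clock_hz * flops_per_cycle / 1e9

    # Blue Gene/L node: 2 PowerPC 440 cores at 700 MHz, each with a
    # double-pipeline FPU -> 2 pipelines x 2 flops (FMA) = 4 flops/cycle.
    assert peak_gflops(2, 700e6, 4) == 5.6    # matches the 5.6 GFLOPS above

    # Blue Gene/P node: 4 PowerPC 450 cores at 850 MHz, 4 flops/cycle each.
    assert peak_gflops(4, 850e6, 4) == 13.6   # matches the 13.6 GFLOPS above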
32 compute cards are plugged into an air-cooled node board. A rack contains 32 node boards (thus 1024 nodes, 4096 processor cores). By using many small, low-power, densely packaged chips, Blue Gene/P exceeded the power efficiency of other supercomputers of its generation, and at 371 MFLOPS/W Blue Gene/P installations ranked at or near the top of the Green500 lists in 2007-2008. Installations The following is an incomplete list of Blue Gene/P installations. As of November 2009, the TOP500 list contained 15 Blue Gene/P installations of two racks (2048 nodes, 8192 processor cores, 23.86 TFLOPS Linpack) and larger. On November 12, 2007, the first Blue Gene/P installation, JUGENE, with 16 racks (16,384 nodes, 65,536 processors) was running at Forschungszentrum Jülich in Germany with a performance of 167 TFLOPS. When inaugurated it was the fastest supercomputer in Europe and the sixth fastest in the world. In 2009, JUGENE was upgraded to 72 racks (73,728 nodes, 294,912 processor cores) with 144 terabytes of memory and 6 petabytes of storage, and achieved a peak performance of 1 PetaFLOPS. This configuration incorporated new air-to-water heat exchangers between the racks, reducing the cooling cost substantially. JUGENE was shut down in July 2012 and replaced by the Blue Gene/Q system JUQUEEN. The 40-rack (40960 nodes, 163840 processor cores) "Intrepid" system at Argonne National Laboratory was ranked #3 on the June 2008 Top 500 list. The Intrepid system is one of the major resources of the INCITE program, in which processor hours are awarded to "grand challenge" science and engineering projects in a peer-reviewed competition. Lawrence Livermore National Laboratory installed a 36-rack Blue Gene/P installation, "Dawn", in 2009. The King Abdullah University of Science and Technology (KAUST) installed a 16-rack Blue Gene/P installation, "Shaheen", in 2009. In 2012, a 6-rack Blue Gene/P was installed at Rice University, to be jointly administered with the University of São Paulo. A 2.5-rack Blue Gene/P system is the central processor for the Low Frequency Array for Radio astronomy (LOFAR) project in the Netherlands and surrounding European countries. This application uses the streaming data capabilities of the machine. A 2-rack Blue Gene/P was installed in September 2008 in Sofia, Bulgaria, and is operated by the Bulgarian Academy of Sciences and Sofia University. In 2010, a 2-rack (8192-core) Blue Gene/P was installed at the University of Melbourne for the Victorian Life Sciences Computation Initiative. In 2011, a 2-rack Blue Gene/P was installed at the University of Canterbury in Christchurch, New Zealand. In 2012, a 2-rack Blue Gene/P was installed at Rutgers University in Piscataway, New Jersey. It was dubbed "Excalibur" as an homage to the Rutgers mascot, the Scarlet Knight. In 2008, a 1-rack (1024 nodes) Blue Gene/P with 180 TB of storage was installed at the University of Rochester in Rochester, New York. The first Blue Gene/P in the ASEAN region was installed in 2010 at Universiti Brunei Darussalam’s research centre, the UBD-IBM Centre. The installation has prompted research collaboration between the university and IBM Research on climate modeling, investigating the impact of climate change on flood forecasting, crop yields, renewable energy and the health of rainforests in the region, among other topics.
In 2013, a 1-rack Blue Gene/P was donated to the Department of Science and Technology for weather forecasts, disaster management, precision agriculture, and health. It is housed in the National Computer Center, Diliman, Quezon City, under the auspices of the Philippine Genome Center (PGC) Core Facility for Bioinformatics (CFB) at UP Diliman. Applications Veselin Topalov, the challenger to the World Chess Champion title in 2010, confirmed in an interview that he had used a Blue Gene/P supercomputer during his preparation for the match. The Blue Gene/P computer has been used to simulate approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The IBM Kittyhawk project team has ported Linux to the compute nodes and demonstrated generic Web 2.0 workloads running at scale on a Blue Gene/P. Their paper, published in the ACM Operating Systems Review, describes a kernel driver that tunnels Ethernet over the tree network, which results in all-to-all TCP/IP connectivity. Running standard Linux software like MySQL, their performance results on SpecJBB rank among the highest on record. In 2011, a Rutgers University / IBM / University of Texas team linked the KAUST Shaheen installation together with a Blue Gene/P installation at the IBM Watson Research Center into a "federated high performance computing cloud", winning the IEEE SCALE 2011 challenge with an oil reservoir optimization application. Blue Gene/Q The third supercomputer design in the Blue Gene series, Blue Gene/Q has a peak performance of 20 petaflops, reaching a LINPACK benchmark performance of 17 petaflops. Blue Gene/Q continues to expand and enhance the Blue Gene/L and /P architectures. Design The Blue Gene/Q Compute chip is an 18-core chip. The 64-bit A2 processor cores are 4-way simultaneously multithreaded, and run at 1.6 GHz. Each processor core has a SIMD Quad-vector double precision floating point unit (IBM QPX). 16 processor cores are used for computing, and a 17th core for operating system assist functions such as interrupts, asynchronous I/O, MPI pacing and RAS. The 18th core is a redundant spare, included to increase manufacturing yield. The spared-out core is shut down in functional operation. The processor cores are linked by a crossbar switch to a 32 MB eDRAM L2 cache, operating at half core speed. The L2 cache is multi-versioned, supporting transactional memory and speculative execution, and has hardware support for atomic operations. L2 cache misses are handled by two built-in DDR3 memory controllers running at 1.33 GHz. The chip also integrates logic for chip-to-chip communications in a 5D torus configuration, with 2 GB/s chip-to-chip links. The Blue Gene/Q chip is manufactured on IBM's copper SOI process at 45 nm. It delivers a peak performance of 204.8 GFLOPS at 1.6 GHz, drawing about 55 watts. The chip measures 19×19 mm (359.5 mm²) and comprises 1.47 billion transistors. The chip is mounted on a compute card along with 16 GB DDR3 DRAM (i.e., 1 GB for each user processor core). A Q32 compute drawer contains 32 compute cards, each water-cooled. A "midplane" (crate) contains 16 Q32 compute drawers for a total of 512 compute nodes, electrically interconnected in a 5D torus configuration (4x4x4x4x2). Beyond the midplane level, all connections are optical. Racks have two midplanes, thus 32 compute drawers, for a total of 1024 compute nodes, 16,384 user cores and 16 TB RAM; the arithmetic scaling these peak figures from chip to rack and to a full system is sketched below.
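The same peak-performance arithmetic used for the earlier generations scales from the Blue Gene/Q chip to a rack and to a full system. In the sketch below, 8 flops per cycle per core is our reading of the 4-wide double-precision SIMD unit with fused multiply-adds (2 flops per lane per cycle):

    chip_gflops = 16 * 1.6e9 * 8 / 1e9       # 16 user cores at 1.6 GHz = 204.8

    rack_tflops = 1024 * chip_gflops / 1e3   # 1024 nodes/rack ~= 209.7 TFLOPS

    # 96 racks (the Sequoia configuration described below) ~= 20.1 PFLOPS peak
    sequoia_pflops = 96 * 1024 * chip_gflops / 1e6

The 96-rack result agrees with the 20.1 PFLOPS peak quoted for Sequoia in the installations list that follows.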
Separate I/O drawers, placed at the top of a rack or in a separate rack, are air-cooled and contain 8 compute cards and 8 PCIe expansion slots for InfiniBand or 10 Gigabit Ethernet networking. Performance At the time of the Blue Gene/Q system announcement in November 2011, an initial 4-rack Blue Gene/Q system (4096 nodes, 65536 user processor cores) achieved #17 in the TOP500 list with 677.1 TeraFLOPS Linpack, outperforming the original 2007 104-rack BlueGene/L installation described above. The same 4-rack system achieved the top position in the Graph500 list with over 250 GTEPS (giga traversed edges per second). Blue Gene/Q systems also topped the Green500 list of most energy-efficient supercomputers with up to 2.1 GFLOPS/W. In June 2012, Blue Gene/Q installations took the top positions in all three lists: TOP500, Graph500 and Green500. Installations The following is an incomplete list of Blue Gene/Q installations. As of June 2012, the TOP500 list contained 20 Blue Gene/Q installations of half a rack (512 nodes, 8192 processor cores, 86.35 TFLOPS Linpack) and larger. At a (size-independent) power efficiency of about 2.1 GFLOPS/W, all these systems also populated the top of the June 2012 Green500 list. A Blue Gene/Q system called Sequoia was delivered to the Lawrence Livermore National Laboratory (LLNL) beginning in 2011 and was fully deployed in June 2012. It is part of the Advanced Simulation and Computing Program running nuclear simulations and advanced scientific research. It consists of 96 racks comprising 98,304 compute nodes with 1.6 million processor cores and 1.6 PB of memory. In June 2012, the system was ranked as the world's fastest supercomputer, at 20.1 PFLOPS peak and 16.32 PFLOPS sustained (Linpack), drawing up to 7.9 megawatts of power. In June 2013, its performance was listed at 17.17 PFLOPS sustained (Linpack). A 10 PFLOPS (peak) Blue Gene/Q system called Mira was installed at Argonne National Laboratory in the Argonne Leadership Computing Facility in 2012. It consists of 48 racks (49,152 compute nodes), with 70 PB of disk storage (470 GB/s I/O bandwidth). JUQUEEN at the Forschungszentrum Jülich is a 28-rack Blue Gene/Q system, and was from June 2013 to November 2015 the highest-ranked machine in Europe in the Top500. Vulcan at Lawrence Livermore National Laboratory (LLNL) is a 24-rack, 5 PFLOPS (peak), Blue Gene/Q system that was commissioned in 2012 and decommissioned in 2019. Vulcan served Lab-industry projects through Livermore's High Performance Computing (HPC) Innovation Center as well as academic collaborations in support of DOE/National Nuclear Security Administration (NNSA) missions. Fermi at the CINECA Supercomputing facility, Bologna, Italy, is a 10-rack, 2 PFLOPS (peak), Blue Gene/Q system. As part of DiRAC, the EPCC hosts a 6-rack (6144-node) Blue Gene/Q system at the University of Edinburgh. A five-rack Blue Gene/Q system with additional compute hardware, called AMOS, was installed at Rensselaer Polytechnic Institute in 2013. The system was rated at 1048.6 teraflops, the most powerful supercomputer at any private university, and the third most powerful supercomputer among all universities, in 2014. An 838 TFLOPS (peak) Blue Gene/Q system called Avoca was installed at the Victorian Life Sciences Computation Initiative in June 2012. This system is part of a collaboration between IBM and VLSCI, with the aims of improving diagnostics, finding new drug targets, refining treatments and furthering our understanding of diseases.
The system consists of 4 racks, with 350 TB of storage, 65,536 cores, and 64 TB RAM. A 209 TFLOPS (peak) Blue Gene/Q system was installed at the University of Rochester in July 2012. This system is part of the Health Sciences Center for Computational Innovation, which is dedicated to the application of high-performance computing to research programs in the health sciences. The system consists of a single rack (1,024 compute nodes) with 400 TB of high-performance storage. A 209 TFLOPS peak (172 TFLOPS LINPACK) Blue Gene/Q system called Lemanicus was installed at the EPFL in March 2013. This system belongs to the Center for Advanced Modeling Science (CADMOS), a collaboration between the three main research institutions on the shore of Lake Geneva in the French-speaking part of Switzerland: the University of Lausanne, the University of Geneva and EPFL. The system consists of a single rack (1,024 compute nodes) with 2.1 PB of IBM GPFS-GSS storage. A half-rack Blue Gene/Q system, with about 100 TFLOPS (peak), called Cumulus was installed at A*STAR Computational Resource Centre, Singapore, in early 2011. Applications Record-breaking science applications have been run on the BG/Q, the first to cross 10 petaflops of sustained performance. The cosmology simulation framework HACC achieved almost 14 petaflops with a 3.6 trillion particle benchmark run, while the Cardioid code, which models the electrophysiology of the human heart, achieved nearly 12 petaflops with a near real-time simulation, both on Sequoia. A fully compressible flow solver has also achieved 14.4 PFLOP/s (originally 11 PFLOP/s) on Sequoia, 72% of the machine's nominal peak performance. See also CNK operating system INK operating system Deep Blue (chess computer) References External links IBM Research: Blue Gene Next generation supercomputers - Blue Gene/P overview (pdf) Blue Gene Blue Gene Power microprocessors Parallel computing Transactional memory 32-bit computers 64-bit computers Lawrence Livermore National Laboratory
50047526
https://en.wikipedia.org/wiki/Nomadix
Nomadix
Nomadix is a Woodland Hills, CA-based developer of network gateway equipment (which includes access gateways and traffic optimizers), used by hotels and other businesses to deliver Internet access to end users. History Nomadix was founded in 1998 by UCLA Computer Science Professor Dr. Leonard Kleinrock, one of the founders of ARPANET, and a graduate student, Joel Short. The name Nomadix came from Kleinrock's studies of nomadic computing, which he described in a 2015 Barron's interview: "nomadic computing...refers to the capability that wherever I go, I should be able to connect seamlessly, and gain as much functionality and services as I was able to gain in my office, my home, my laboratory." Kleinrock served as the company's first CEO and Chairman, and Short served as Chief Technology Officer. The company's first product, the Nomadix Universal Subscriber Gateway, shipped in September 1999. The gateway was designed to allow visiting computers to connect to the Internet, without needing extra equipment or software on the computer. Built-in payment gateway features managed optional billing and payment functions. In February 2002, Nomadix announced a technology licensing deal for their Nomadix Service Engine (NSE) software with Agere Systems, now part of Avago Technologies, and at the time the second largest Wi-Fi vendor behind Cisco Systems. In March 2002, the company announced a customized version of their Universal Subscriber Gateway (USG), designed in a partnership with wireless networking company Boingo Wireless, to allow businesses to set up commercial Wi-Fi hot spots. In January 2004, the company was awarded the industry's first patent for redirecting a customer's computer to a sign-in page, also known as a "gateway" page. In July 2004, Nomadix was sued by Carlsbad, CA-based IP3 Networks, a wireless networking competitor, for trade libel, for allegedly telling customers that IP3 was stealing their technology. In February 2006, the case was dismissed. In December 2006, Nomadix was acquired by Singapore-based MagiNet, a provider of wireless hospitality solutions in the Asia-Pacific region. The company was to continue operating under the Nomadix name. In March 2007, Nomadix sued competitor Second Rule, which by then had acquired IP3's NetAccess gateway, for infringing on five of Nomadix's patents. In December 2007, it was announced that MagiNet was acquired by DOCOMO interTouch Pte. Ltd, a subsidiary of Japan's NTT DOCOMO, for $150M. In March 2009, a judge awarded Nomadix a $3.2M judgment in the Second Rule case, and granted a permanent injunction. In November 2009, the company filed patent infringement lawsuits against eight companies, including Hewlett Packard, Wayport, Inc., iBAHN, LodgeNet and Aruba Networks, seeking damages and injunctions over the use of eight of its patents. In June 2012, Nomadix launched the AG 5800 access gateway, designed for large venues. In November 2012, Hewlett Packard became the third and largest of the eight defendants in the 2009 patent lawsuit to settle, agreeing to pay licensing fees to continue to use Nomadix' patented technology. In March 2013, AT&T, now owner of Wayport and Superclick, another defendant, settled and agreed to pay licensing fees. In September 2013, Aruba Networks also settled and also agreed to pay licensing fees. In July 2013, Nomadix introduced the AG 2400, an Internet access gateway designed to support up to 200 simultaneous end users.
In October 2014, Nomadix sued Norcross, Georgia-based Blueprint RF for patent infringement of its captive portal technology, based on U.S. Patent No. 8,156,246. In July 2015, Nomadix released the AG 5900 access gateway, which supported up to 8,000 simultaneous end users. In February 2016, the US District Court upheld Nomadix' patent claim against Blueprint RF. In March 2016, Nomadix announced an exclusive partnership to offer technology from WAN optimization vendor Exinda to the hospitality industry. The announcement included news of the first Nomadix products to come from the partnership, the Alloc8-X ("allocate-X") deep packet inspection (DPI) series, network monitoring devices for administrators of public networks. Products The company’s main products are wireless network access gateways, the Nomadix Service Engine (NSE) software that runs the hardware, and network traffic optimizers, TV Casting, Cloud PBX and a Mobile App. Access gateways Nomadix produces several access gateways which extend wired or wireless networks for public Internet access: the AG 2400 and the AG 5900. The AG 5900 is designed for larger venues and supports up to 8,000 devices at a time. The company also offers Load Balancing Module software, which manages network traffic across multiple connections. Nomadix Service Engine NSE software runs the Nomadix access gateways. The current version of the software as of April 2016 is 8.6. The software manages quality of service (QoS), with support for Layer 3 services, determining the best type of service to deliver based on the user's application. The software also allows for user authentication using multiple XML servers. Traffic optimizers The Nomadix Internet Traffic Optimizer (NITO) is hardware which allows operators of visitor networks such as hotspots or guest access networks to more effectively manage their bandwidth. Content filtering through deep packet inspection (DPI) is utilized to manage high bandwidth requirements. The Alloc8-X series of DPI devices provides network traffic monitoring and analytics features for administrators managing bandwidth consumption and traffic patterns on public networks. The devices provide bandwidth usage statistics by location, user, device and application. Four different models offer a range of bandwidth options based on the bandwidth needs of the facility. Patents As of 2016, Nomadix holds 76 US patents, and dozens of patents worldwide. Customers Nomadix sells through distributors to hospitality businesses with guest and visitor networks, such as the SLS hotel in Beverly Hills, businesses operating guest Wi-Fi networks, such as the Chicago Mercantile Exchange, event management companies hosting large crowds, such as the 2014 World Cup, apartment complexes and public spaces. References External links Networking companies of the United States Networking hardware companies Telecommunications equipment vendors Wireless networking Technology companies based in Greater Los Angeles Companies based in Agoura Hills, California American companies established in 1998 Software companies established in 1998 Telecommunications companies established in 1983 1983 establishments in California
41826266
https://en.wikipedia.org/wiki/Linux%20Userland%20Filesystem
Linux Userland Filesystem
The Linux Userland Filesystem (LUFS) is a userspace filesystem framework for Linux, similar to FUSE in that a kernel file system driver provides a bridge from kernel mode to user mode. With LUFS it is possible, among other things, to mount remote directories via SSH, FTP or Gnutella. As with FUSE, an LUFS filesystem can be developed with little effort, comparable to developing a normal application. With the addition of FUSE to the mainline Linux kernel, LUFS lost its relevance and is no longer being developed. Filesystems developed using LUFS can still be used on FUSE through a translator layer. External links Linux Userland File System on SourceForge File systems supported by the Linux kernel
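The bridge-to-user-space idea that LUFS shared with FUSE can be illustrated with FUSE's Python bindings. The sketch below is a minimal read-only filesystem, assuming the third-party fusepy package; the file name, contents and mountpoint are invented for the example, and it shows how little code a userspace filesystem needs:

    import errno, stat
    from fuse import FUSE, FuseOSError, Operations  # third-party fusepy package

    class HelloFS(Operations):
        # serves a single read-only file, /hello, entirely from user space
        DATA = b"hello from user space\n"

        def getattr(self, path, fh=None):
            if path == "/":
                return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
            if path == "/hello":
                return {"st_mode": stat.S_IFREG | 0o444,
                        "st_nlink": 1, "st_size": len(self.DATA)}
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            return [".", "..", "hello"]

        def read(self, path, size, offset, fh):
            return self.DATA[offset:offset + size]

    if __name__ == "__main__":
        FUSE(HelloFS(), "/mnt/hellofs", foreground=True)  # mountpoint assumed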
19174292
https://en.wikipedia.org/wiki/Durham%20University%20Department%20of%20Engineering
Durham University Department of Engineering
The Department of Engineering at Durham University is the department engaged in the teaching and research of engineering across a broad range of disciplines. It was formed following the split of the School of Engineering & Computer Sciences in October 2017. Its civil, mechanical, electrical and electronic engineering research groups cover topics within three distinct Research Challenge areas: Future Energy Systems, Next Generation Materials & Microsystems, and Sustainable Infrastructure. Undergraduate courses The School offers four-year Master of Engineering programmes in both General Engineering and Computer Science. The School also offers a three-year BSc in Computer Science and a BEng in General Engineering. In all these programmes, the first two years are common to all students, with the opportunity for specialisation in later years into civil, mechanical, aeronautics, electronic, and electrical engineering. Relevant undergraduate programmes are accredited by the Institution of Mechanical Engineers, the Institution of Engineering and Technology and, through the Joint Board of Moderators, the Institution of Civil Engineers and the Institution of Structural Engineers. This means that the degrees offered by the School are recognised as meeting the academic requirements for progression to Chartered Engineer status. Computer Science BSc programmes are accredited by the British Computer Society. In 2016, The Times Good University Guide ranked General Engineering 2nd of 49 and Computer Science 7th of 105. Notable people Sir Derman Christopherson Sir Gordon Higginson Michael Sterling See also Durham University Solar Car References External links Department website Department of Engineering University departments in England Engineering universities and colleges in the United Kingdom
20556201
https://en.wikipedia.org/wiki/Koobface
Koobface
Koobface is a network worm that attacks Microsoft Windows, Mac OS X, and Linux platforms. This worm originally targeted users of networking websites like Facebook, Skype, Yahoo Messenger, and email websites such as Gmail, Yahoo Mail, and AOL Mail. It also targets other networking websites, such as MySpace and Twitter, and it can infect other devices on the same local network. Technical support scammers also fraudulently claim to their intended victims that they have a Koobface infection on their computer, using fake popups and built-in Windows programs. Infection Koobface ultimately attempts, upon successful infection, to gather login information for FTP sites, Facebook, Skype, and other social media platforms, and any sensitive financial data as well. It then uses compromised computers to build a peer-to-peer botnet. A compromised computer contacts other compromised computers to receive commands in a peer-to-peer fashion. The botnet is used to install additional pay-per-install malware on the compromised computer and hijack search queries to display advertisements. Its peer-to-peer topology is also used to show fake messages to other users for the purpose of expanding the botnet. It was first detected in December 2008 and a more potent version appeared in March 2009. A study by the Information Warfare Monitor, a collaboration between the SecDev Group and the Citizen Lab in the Munk School of Global Affairs at the University of Toronto, has revealed that the operators of this scheme generated over $2 million in revenue from June 2009 to June 2010. Koobface originally spread by delivering Facebook messages to people who are "friends" of a Facebook user whose computer had already been infected. Upon receipt, the message directs the recipients to a third-party website (or another Koobface-infected PC), where they are prompted to download what is purported to be an update of the Adobe Flash player. If they download and execute the file, Koobface can infect their system. It can then commandeer the computer's search engine use and direct it to contaminated websites. Links to the third-party website can also appear on the Facebook wall of the friend the message came from, sometimes with comments like LOL or YOUTUBE. If the link is opened, the malware infects the computer and the PC becomes a zombie or host computer. Among the components downloaded by Koobface are a DNS filter program that blocks access to well-known security websites and a proxy tool that enables the attackers to abuse the infected PC. At one time the Koobface gang also used Limbo, a password-stealing program. Several variants of the worm have been identified: Worm:Win32/Koobface.gen!F Net-Worm.Win32.Koobface.a, which attacks MySpace Net-Worm.Win32.Koobface.b, which attacks Facebook WORM_KOOBFACE.DC, which attacks Twitter W32/Koobfa-Gen, which attacks Facebook, MySpace, hi5, Bebo, Friendster, myYearbook, Tagged, Netlog, Badoo and fubar W32.Koobface.D OSX/Koobface.A, a Mac version which spreads via social networks such as Facebook, MySpace and Twitter. In January 2012, the New York Times reported that Facebook was planning to share information about the Koobface gang, and name those it believed were responsible. Investigations by German researcher Jan Droemer and the University of Alabama at Birmingham's Center for Information Assurance and Joint Forensics Research were said to have helped uncover the identities of those responsible.
Facebook finally revealed the names of the suspects behind the worm on January 17, 2012. They include Stanislav Avdeyko (leDed), Alexander Koltyshev (Floppy), Anton Korotchenko (KrotReal), Roman P. Koturbach (PoMuc), Svyatoslav E. Polichuck (PsViat and PsycoMan). They are based in St. Petersburg, Russia. The group is sometimes referred to as Ali Baba & 4 with Stanislav Avdeyko as the leader. The investigation also connected Avdeyko with CoolWebSearch spyware. Hoax warnings The Koobface threat is also the subject of many hoax warnings designed to trick social networking users into spreading misinformation across the Internet. Various anti-scam websites such as Snopes.com and ThatsNonsense.com have recorded many instances where alarmist messages designed to fool and panic Facebook users have begun to circulate prolifically using the widely publicized Koobface threat as bait. Other misconceptions have spread regarding the Koobface threat, including the false assertion that accepting "hackers" as Facebook friends will infect a victim's computer with Koobface, or that Facebook applications are themselves Koobface threats. These claims are untrue. Other rumours assert that Koobface is much more dangerous than other examples of malware and has the ability to delete all of your computer files and "burn your hard disk." However, these rumours are inspired by earlier fake virus warning hoaxes and remain false. See also Computing Trojan horse Trojan.Win32.DNSChanger Facebook malware Malware analysis References External links The Koobface malware gang - exposed!, research by Jan Droemer and Dirk Kollberg. The Real Face of KOOBFACE, analysis by Trend Micro. Researchers Take Down Koobface Servers, Slashdot article. Computer worms Facebook Myspace Trojan horses Hacking in the 2000s
35028834
https://en.wikipedia.org/wiki/OpenRC
OpenRC
OpenRC is a dependency-based init system for Unix-like computer operating systems. It was created by Roy Marples, a NetBSD developer who was also active in the Gentoo project. It became more broadly adopted as an init system outside of Gentoo following the decision by some Linux distributions not to adopt systemd. OpenRC is the default init system of Gentoo, Alpine Linux, Hyperbola GNU/Linux-libre, Parabola GNU/Linux-libre, Artix Linux, Maemo Leste, TrueOS, Funtoo and other Unix-like systems, while some others such as Devuan offer it as an option. This means that the software packages and daemons of those systems and distributions support it, shipping with or using the available service scripts. Design OpenRC is made up of several modular components, the main ones being an init (optional), the core dependency management system and a daemon supervisor (optional). It is written in C and POSIX-compliant shell, making it usable on BSD and Linux systems. The core part of OpenRC handles dependency management and init script parsing. OpenRC works by scanning the runlevels, building a dependency graph, then starting the needed service scripts. It exits once the scripts have been started. By default, OpenRC uses a modified version of start-stop-daemon for daemon management. Init scripts share similarities with scripts used in sysvinit, but offer several features to simplify their creation. Scripts are assumed to have start, stop and status functions, and the system uses variables already declared to create the default functions. The depend function is used to declare dependencies on other services, which in sysvinit would be done with LSB headers. Configuration and mechanism are separated, with configuration files in the conf.d directory and init files in the init.d directory. Openrc-init first appeared in version 0.25 as an optional replacement for the init process (PID 1). Several other inits are supported, including sysvinit and Busybox. Supervise-daemon first appeared in version 0.21, giving OpenRC supervision capabilities. A service can be enabled in its init script to have supervise-daemon start and monitor its daemon. Several other daemon supervisors are supported, including runit and s6. Features Portable between Linux, FreeBSD, and NetBSD Parallel service startup (off by default) Dependency-based boot-up Process segregation through cgroups Per-service resource limits (ulimit) Separation of code and configuration (init.d / conf.d) Extensible startup scripts Stateful init scripts (is it started already?) Complex init scripts to start multiple components (Samba [smbd and nmbd], NFS [nfsd, portmap, etc.]) Automatic dependency calculation and service ordering Modular architecture and separation of optional components (cron, syslog) Expressive and flexible network handling (including VPN, bridges, etc.) Verbose debug mode References External links Software using the BSD license Unix process- and task-management-related software
18519350
https://en.wikipedia.org/wiki/Government%20College%20of%20Engineering%20%26%20Textile%20Technology%2C%20Berhampore
Government College of Engineering & Textile Technology, Berhampore
The Government College of Engineering & Textile Technology Berhampore (formerly known as the College of Textile Technology Berhampore) is a college of Maulana Abul Kalam Azad University of Technology in Berhampore, West Bengal, India. It is a residential and co-educational institute. Admission for undergraduate students is through the West Bengal Joint Entrance Examination. The college was selected for TEQIP (Technical Education Quality Improvement Programme), Phase II. History The college was established in the central jail premises at Berhampore on 19 July 1927 as the Government Silk Weaving and Dyeing Institute to train local artisans engaged in silk production and silk weaving in Murshidabad district, under the administrative control of the Directorate of Industries, Government of West Bengal. The institute was moved to its present location at 4 Barrack Square (East) during 1932–33. The college was upgraded in 1940 and renamed Bengal Technological Institute with the introduction of a 3-year diploma course (licentiate) in Textile Technology and a 2-year artisan course. Given the general nature of the course, the college was renamed the Berhampore Textile Institute in 1950. In 1958, the college was upgraded to the level of a degree college with the modernisation of the syllabus, workshop and laboratories. A three-year B.Sc. (Tech.) degree course in Textile Technology affiliated to the University of Calcutta was introduced and the name changed again to College of Textile Technology. In 1963, a four-year B.Sc. (Tech.) degree course was started. The college was subsequently brought under the administrative control of the Directorate of Technical Education in 1972. In 1990, the college was transferred to the University of Kalyani. The B.Sc. (Tech.) degree was changed to a B.Tech. in 1998 in line with AICTE's wish to bring parity with other engineering degrees in India. The college was brought under the academic control of the West Bengal University of Technology in 2001. In the same year, a new B. Tech. course in Computer Science and Engineering was introduced. In 2002, the college was once again renamed the Government College of Engineering & Textile Technology, Berhampore. In 2004, the Textile Technology department of the institute was accredited by the National Board of Accreditation (NBA). Later, in 2010, two new departments, Mechanical Engineering and Electrical Engineering, were introduced. Campus Government College of Engineering & Textile Technology, Berhampore is situated in Berhampore, in Murshidabad district of West Bengal. Organisation and administration Governance Recognised by the Department of Higher Education, it is under the direct administrative control of the Government of West Bengal. Departments Computer Science and Engineering This department was started in 2001 to fill the demand for well-trained human resources for the flourishing computer hardware and software industry. Apart from preparing the students with hard skills, special effort is made to develop soft skills such as paper presentation, giving technical seminars, mock interviews and other skills normally considered essential in the industry. Mechanical Engineering Since 2010, this department has offered a 4-year course of study leading to a B. Tech. degree in Mechanical Engineering. Professors from engineering universities such as Jadavpur University and BESU are attached to the college as guest lecturers.
Currently there are a CAD lab, a Mechatronics lab, a Measurement lab, a Fluid Mechanics lab, a Heat Transfer lab, an IC engine workshop, and an Applied Mechanics lab. The department has a petrol engine from Maruti, turbines and a mini power plant; a 3D printing machine has also been installed. Electrical Engineering Since 2010, this department has offered a 4-year course of study leading to a B. Tech. degree in Electrical Engineering. There are circuit theory labs, state-of-the-art seminar halls, thermal power engineering labs, an electrical machines lab, an electronics lab, a control system lab, etc. Textile Technology This department has been offering an undergraduate course (B.Tech in Textile Technology) since 1958. The department is accredited by the National Board of Accreditation (NBA). In the last eight years the department has received 40 lakh rupees for four projects from the All India Council for Technical Education under the MODROBS scheme. At present two MODROBS projects are in progress in the department. The National Conference on Emerging Trends in Textile, Fibre & Apparel Engineering was held at the college on 18–19 March 2006. Around 150 participants attended. Academics Affiliation and accreditations National Board of Accreditation All India Council for Technical Education [AICTE] University Grants Commission (India) [UGC], Government of India Ministry of Human Resource Department, Government of India Ministry of Higher Education Dept., Govt. of West Bengal Programs offered B Tech: Engineering – Computer Science Engineering – AICTE approved (Intake 47 [1st year including 2 TFW Seats + 1 Defence Quota] + 09 [lateral]) Engineering – Electrical Engineering – AICTE approved (Intake 32 [1st year including 2 TFW seats] + 06 [lateral]) Engineering – Mechanical Engineering – AICTE approved (Intake 32 [1st year including 2 TFW seats] + 06 [lateral]) Engineering – Textile Technology – AICTE approved (Intake 35 [1st year including 2 TFW seats] + 06 [lateral]) M Tech: Mechanical Processing of Textiles – AICTE approved Admission Admission to the 1st semester of all UG courses is done by ranking in the West Bengal Joint Entrance Examination, through e-Counselling conducted by WBJEE, held every year. Twenty per cent of the total intake is filled in from Lateral Entry candidates in the 3rd semester through JELET. M.Tech students are selected through the GATE exam. Training and placement Students of Textile Technology are trained and placed at several companies including Vardhman Textiles Ltd, Mahabir Fabrics, Welspun Ind. Ltd, Amrit Exports Pvt. Ltd, Raymonds Ltd, Reid and Taylor, Nahar Fabrics, Arvind, Jayashree Textiles, Winsome, Trident (Abhishek group of Industries), Bombay Rayon Fashions Ltd, Intertek Technical Services, BVCOS, SGS, TUV, Century, Sarla Fabrics, Alok Industries, Ashima Textiles, Soma Textiles, Mudra, Madura Life Style, DC Decore, and BSL Limited. Students of the CSE department are trained and placed at several companies including Globsyn, Tech Mahindra, CMC Ltd, IBM helpdesk, Oracle, Accenture, Infosys, and Capgemini. Students of the ME department are trained at several companies including Durgapur Projects Limited, Durgapur Steel Plant, Indian Railways, and Bokaro Steel Plant. Students of the EE department are trained at several companies including Indian Railways, Durgapur Projects Limited, Durgapur Steel Plant, NTPC units, WBSEDCL plants, CESC, and WBSETCL. Library The college library offers science and technological books and journals.
The total number of books is 12,600 (Textile Technology books: 3,711; Computer Science and Engineering books: 1,800; other books: 7,079). A large number of books for Mechanical and Electrical Engineering students were introduced recently. Student life Hostel The college offers a four-storey hostel with an intake of 100 students, providing food and lodging for boys only. The hostel has a playground, gymnasium, library and indoor-games facilities. The hostel common room is provided with a TV and current newspapers and magazines. Facilities for indoor and outdoor games are also provided in the hostel. The administration of the hostel is supervised by the hostel superintendent and one hostel monitor drawn from the hostel boarders. The hostel and the mess are run by a hostel committee consisting of the hostel superintendent, staff members and students. Due to the limited number of seats, hostel accommodation is provided to students from their 2nd year onward, strictly on a first-come-first-served basis and considering the distance between home and the institute. However, this is subject to change from time to time. Students' Association The Students' Association, which is apolitical in nature, provides students an opportunity to train in organising various extra-curricular activities, such as sports, fests, and the teacher-student match. All students of the college, irrespective of department, are members. It has its own constitution and functions through an executive council and sub-committees. Members are elected by indirect election. Students are united by the KGEC Students Union. Cultural activities include ALFRESCO (Annual Cultural Cum Techno-Management Fiesta), INIZIO (Freshers' Welcome), the farewell of final-year students, REMINISCENCE (Alumni Meet/Reunion), a blood donation camp, cloth distribution and so on. The Students' Union published its first annual magazine, 'Konika', in 2003. There is a library operated by the Students' Union. The Students' Union conducts the annual Inter-Department Tournament, which includes the annual cricket, football and badminton tournaments. There are 11 portfolios in the Union - President, Vice President, General Secretary, Assistant General Secretary, Finance General Secretary, Cultural Secretary, Games & Common Room Secretary, Education Secretary, Orientation Secretary, Career Secretary, Alumni Liaison. Achievements Technical Education Quality Improvement Programme (TEQIP) was envisaged as a long-term programme of about 10–12 years' duration, to be implemented in 2-3 phases for the transformation of the technical education system with World Bank assistance. The broad objectives of the Programme are to create an environment in which engineering institutions selected under the Programme can achieve their own targets for excellence and sustain the same with autonomy and accountability, and to support development. TEQIP Phase II aims to upscale and support ongoing efforts of the Government of India (GOI) to improve the quality of technical education and enhance the existing capabilities of the institutions to become dynamic, demand-driven, quality-conscious, efficient, forward-looking and responsive. Festivals Saraswati Puja is a keenly organised event at GCETTB every year, and its significance is well acknowledged by the people of Berhampore. The event mainly comprises two days of Saraswati Puja celebration, concluding with idol immersion on the third day.
The two-day celebration includes puja, cultural programmes by various institutions, college performances and the college band, with a lot of enthusiasm and fun. Freshers Alfresco (Techno-Cultural Fest) Reunion References External links Media gallery Textile schools in India Universities and colleges in Murshidabad district Colleges affiliated to West Bengal University of Technology Educational institutions established in 1927 1927 establishments in India Berhampore
11258796
https://en.wikipedia.org/wiki/Molekel
Molekel
Molekel is a free-software, multiplatform molecular visualization program. It was originally developed at the University of Geneva by Peter F. Flükiger in the 1990s for Silicon Graphics computers. In 1998, Stefan Portmann took over responsibility and released version 3.0. Version 4.0 was a nearly platform-independent version. Further developments led to version 4.3, before Stefan Portmann moved on and ceased to develop the code. In 2006, the Swiss National Supercomputing Centre (CSCS) restarted the project and version 5.0 was released on 21 December 2006. Molekel uses VTK and Qwt, and therefore also Qt. Major features Visualization of residues (ribbon or schematic) Complete control over the generation of molecular surfaces (bounding box and resolution) Visualization of the following surfaces: orbitals Isosurface from electron density data Isosurface from Gaussian cube grid data Solvent-accessible surface (SAS) Solvent-excluded surface (SES) Van der Waals radii Animation of molecular surfaces Export to PostScript or TIFF See also Gabedit List of molecular graphics systems Molden Molecular graphics Software for molecular mechanics modeling SAMSON List of free and open-source software packages References External links Molekel home page Chemistry software for Linux Free chemistry software Molecular modelling software Science software that uses Qt Software that uses VTK
23147102
https://en.wikipedia.org/wiki/HEAT%20LANrev
HEAT LANrev
HEAT LANrev (formerly Absolute Manage) is systems lifecycle management software used by system administrators to automate IT administration tasks. The product includes server and client ("agent") software that runs on Windows and macOS. History Vancouver-based Absolute acquired LANrev from Pole Position Software in December 2009, for US$12.1 million in cash and 500,000 shares of Absolute's common stock. LANrev was rebranded as Absolute Manage in February 2010. In July 2015, Absolute announced its intention to divest Absolute Manage. The sale of Absolute Manage to HEAT Software was completed on October 5, 2015, for US$11.0 million, and the product was renamed HEAT LANrev. School webcam controversy In the 2010 Robbins v. Lower Merion School District case, plaintiffs charged that two suburban Philadelphia high schools had secretly spied on students by surreptitiously and remotely activating webcams embedded in school-issued laptops the students were using at home, thereby infringing on their privacy rights. The schools admitted to secretly snapping over 66,000 webshots and screenshots, including webcam shots of students in their bedrooms. LANrev software was used in the Lower Merion school district's student laptop program, overseen by network technician Michael Perbix. In February 2010, Perbix and other administrators in the district were accused of using the software to take undisclosed and unauthorized photographs of students through the webcams on their Macintosh laptops. The lawsuit was brought by the parents of 15-year-old sophomore Blake Robbins, who had been accused by the school of illicit behavior based on webcam images allegedly taken of him in his bedroom. The photographs, taken from a laptop that was not reported stolen, were then allegedly used as evidence in a disciplinary action. The FBI investigated the incident, and a Philadelphia federal judge intervened to sort out issues relating to the lawsuit. Perbix had previously praised Theft Track, the name of the feature that lets administrators remotely photograph potential thieves if a computer is reported stolen, noting in a YouTube video he produced that: It’s an excellent feature. Yes, we have used it, and yes, it has gleaned some results for us. But it, in and of itself, is just a fantastic feature for trying to—especially when you’re in a school environment and you have a lot of laptops and you’re worried about, you know, laptops getting up and missing. I’ve actually had some laptops we thought were stolen which actually were still in a classroom, because they were misplaced, and by the time we found out they were back, I had to turn the tracking off. And I had, you know, a good twenty snapshots of the teacher and students using the machines in the classroom. LANrev's new owner, Absolute Software, denounced the use of its software for any illegal purpose, emphasizing that theft recovery should be left to law enforcement professionals. It further denied any knowledge of or complicity in either Perbix's or the school district's actions. Absolute stated that the next update of LANrev would permanently disable Theft Track. References External links Software distribution System administration Remote administration software Computer security
52434606
https://en.wikipedia.org/wiki/Mootaz%20Elnozahy
Mootaz Elnozahy
Mootaz Elnozahy is a computer scientist. He is currently a professor of computer science in the Computer, Electrical and Mathematical Science and Engineering (CEMSE) Division at King Abdullah University of Science and Technology. He previously served as Special Advisor to the President and Dean of CEMSE. Elnozahy's research area is in systems, including high-performance computing, power-aware computing, fault tolerance, operating systems, system architecture, and distributed systems. His work on rollback-recovery is now a standard component of graduate courses in fault-tolerant computing, and he has made seminal contributions in checkpoint/restart and, more generally, on the complex hardware-software interactions in resilience. Early life and education Born Elmootazbellah Nabil Elnozahy (المعتزبالله نبيل النزهي) on March 21, 1962, in Cairo, Egypt, he attended the Lycée Français du Caire from 1966 to 1979. He obtained his B.S. in electrical engineering (1984) and M.S. in computer engineering (1987), both from Cairo University. He moved to Houston, Texas, to attend Rice University, where he earned his M.S. and Ph.D. in computer science (1990 and 1993). In 1993, he received the Ralph Budd Award for the best Ph.D. thesis in engineering from Rice University. For three years he held an IBM graduate fellowship while a graduate student at Rice. He won a Research Division Award at the Thomas J. Watson Research Center (1992) for his contributions to the Highly Available Network File Server (HANFS) project. Career In 1993, he joined Carnegie Mellon as an assistant professor of computer science, remaining until he accepted a research staff member position in IBM's Research division in 1997. From 1994 to 1997, he was a visiting research scientist at Bell Communications Research (Bellcore) as well as a consultant at Bell Laboratories Research (Lucent Technologies). In 2005, he joined the Systems and Technology division at IBM as a senior technical staff member, and later assumed a senior management position at IBM Research from 2007 until 2012. In 2006, IBM awarded him the Master Inventor for life title, in recognition of his 56 U.S. patents. Elnozahy received various other IBM awards, including the Outstanding Invention Award for innovative solutions in the Bureau of Census project in 2002, the President's Award in 2003, and the Outstanding Technical Achievement Award in 2008. In 1998, he became an adjunct assistant professor in the Department of Computer Science at the University of Texas at Austin, and in 2012 an adjunct professor in the same department, before accepting his current position at King Abdullah University of Science and Technology. Selected publications Melhem R; Mosse D; Elnozahy E. "The Interplay of Power Management and Fault Recovery in Real-Time Systems", IEEE Transactions on Computers, vol. 53, no. 2, pp. 217–231, February 2004. Elnozahy, E.N., Speight, E., Li, J., Rajamony, R., Zhang, L., Arimilli, L.B. "PERCS System Architecture", Encyclopedia of Parallel Computing, Springer Verlag, pp. 1506–1515, 2011. Elnozahy EN; Plank JS. "Checkpointing for Peta-Scale Systems: A Look into the Future of Practical Rollback-Recovery", IEEE Transactions on Dependable and Secure Computing, vol. 1, no. 2, pp. 97–108, February 2004. Elnozahy EN; Alvisi L; Wang YM; et al. "A Survey of Rollback-Recovery Protocols in Message Passing Systems", ACM Computing Surveys, vol. 34, no. 3, September 2002. Elnozahy EN; Zwaenepoel W.
"Manetho: Transparent Rollback-Recovery with Low Overhead, Limited Rollback and Fast Output Commit", IEEE Transactions on Computers, Special Issue on Fault-Tolerant Computing, 41(5): 526–531, May 1992. Awards 1995 – Career award from the National Science Foundation 2003 – Trailblazer Award from the University of Texas at Austin 2005 – Best Paper Award, 8th IEEE International Conference on Network Protocols 2010 – elected Fellow Member of the IEEE "For contributions to rollback-recovery, low-power computing, highly-available file systems, and reliable computing systems" 2011 – Best Paper Award, 2nd International Green Computing Conference 2013 – elected member, IFIP 10.4 Working Group References External links Mootaz Elnozahy faculty profile at kaust.edu.sa 1962 births Living people American computer scientists Egyptian computer scientists Egyptian electrical engineers Egyptian emigrants to the United States Fellow Members of the IEEE Cairo University alumni Rice University alumni Carnegie Mellon University faculty King Abdullah University of Science and Technology faculty University of Texas at Austin faculty People from Cairo
40503481
https://en.wikipedia.org/wiki/Frans%20Kaashoek
Frans Kaashoek
Marinus Frans (Frans) Kaashoek (born 1965, Leiden) is a Dutch computer scientist, entrepreneur, and Charles Piper Professor at the Massachusetts Institute of Technology. He was elected a member of the National Academy of Engineering (2006) for contributions to computer systems, distributed systems, and content-distribution networks. Biography Kaashoek received his MA in 1988 and his Ph.D. degree in computer science in 1992 from the Vrije Universiteit under the supervision of Andy Tanenbaum, for the thesis "Group communication in distributed computer systems." In 1993 Kaashoek was appointed Charles Piper Professor in the Department of Electrical Engineering and Computer Science of the Massachusetts Institute of Technology. He is a member of the MIT Computer Science and Artificial Intelligence Laboratory. Kaashoek was awarded, among other honors, the NSF National Young Investigator Award in 1994 and the ACM-Infosys Foundation Award in 2010. In 2004 he was elected an ACM Fellow, and in 2006 he became a member of the NAE. Work Kaashoek's research interests are in the fields of "computer systems: operating systems, networking, programming languages, compilers, and computer architecture for distributed, mobile, and parallel systems." In 1998 Kaashoek co-founded SightPath, a software company that developed software products for digital distribution. It was acquired by Cisco Systems in 2000. In the early 2000s Kaashoek assisted in the founding of Mazu Networks Inc. and served as a board member until it was acquired by Riverbed Technology in 2009. Publications 1992. Group communication in distributed computer systems 2009. Principles of Computer System Design: An Introduction. With Jerome H. Saltzer. Articles M. Frans Kaashoek, Robbert van Renesse, Hans van Staveren and Andrew S. Tanenbaum (1993). FLIP: an internetwork protocol for supporting distributed systems ACM Transactions on Computer Systems 11:73–106. References External links M Frans Kaashoek at MIT Frans Kaashoek at CSAIL 1965 births Living people Dutch computer scientists Vrije Universiteit Amsterdam alumni MIT School of Engineering faculty People from Leiden Recipients of the ACM Prize in Computing
8254144
https://en.wikipedia.org/wiki/DeWayne%20Walker
DeWayne Walker
DeWayne Morris Walker (born December 3, 1960) is an American gridiron football coach and former player. He is currently the cornerbacks and nickels coach for the Arizona Wildcats football team. He previously served as the defensive backs coach of the Cleveland Browns of the National Football League (NFL). Walker played college football at the University of Minnesota and professionally in the Canadian Football League (CFL), with the Edmonton Eskimos in 1982, and in the United States Football League (USFL), with the Oakland Invaders in 1984 and the Arizona Outlaws in 1985. Walker served as the head football coach at New Mexico State University from 2009 to 2012, compiling a record of 10–41 in four seasons. Early life and college Born in Los Angeles, Walker graduated from John Muir High School in Pasadena, California, in 1978. Walker attended and played for Pasadena City College for two years before transferring to the University of Minnesota, where he was a two-year starter. In 1992, Walker completed his B.A. in liberal arts at Regents College (now Excelsior College). Professional playing career Walker played for the CFL's Edmonton Eskimos in 1982 and the USFL's Oakland Invaders in 1984 and Arizona Outlaws in 1985. Coaching career Walker was Pete Carroll's first hire after Carroll became head coach of the USC Trojans in 2001; Walker oversaw the secondary, having spent the previous three seasons in a similar role with the New England Patriots. When Carroll hired Walker, he noted, "He was with me [. . .] in the NFL, and he was able to learn my style and understand my system". Between USC and coaching the UCLA Bruins, Walker was a secondary coach for the National Football League's Washington Redskins and New York Giants. He has also coached for the New England Patriots and at Mount San Antonio College, Utah State, BYU, Oklahoma State and California. UCLA Bruins In December 2006, his defense held the rival USC Trojans under 10 points for the first time since 2001, helping the Bruins end a seven-game losing streak in the UCLA–USC rivalry. Following the dismissal of UCLA head coach Karl Dorrell in December 2007, Walker was asked to serve as head coach of the Bruins in the 2007 Las Vegas Bowl. Dorrell was offered the opportunity to coach a final game, but declined. The Bruins faced Mountain West Conference champion Brigham Young University, a team they had defeated earlier in the season. BYU beat UCLA 17–16 when BYU Cougars defensive lineman Eathyn Manumaleuna blocked a 28-yard field goal as time expired, leaving Walker with a 0–1 record as head coach. After the Las Vegas Bowl, speculation arose as to whether Walker would return to the program. Among other offers, he received one to replace Kent Baer as defensive coordinator at the University of Washington. On January 1, 2008, Sporting News columnist Tom Dienhart reported that Walker had accepted the offer. The story, however, turned out to be untrue, as Los Angeles Times reporter Chris Foster reported that Walker would stay at UCLA and remain defensive coordinator for the Bruins. The Sporting News later retracted the story. Walker interviewed for the UCLA Bruins head coaching position and was one of the leading candidates for the position, along with Rick Neuheisel, Al Golden of Temple, and Norm Chow. The job eventually went to Neuheisel, and Walker remained as defensive coordinator. New Mexico State On December 31, 2008, Walker was named head coach at New Mexico State University.
Walker became the second African-American coach in school history and the seventh African-American FBS coach at that time. He resigned as head coach after the 2012 season. Jacksonville Jaguars On January 22, 2013, it was reported that Walker would leave New Mexico State in favor of the defensive backs job with the Jacksonville Jaguars. Head coaching record References 1960 births Living people African-American coaches of American football African-American players of American football African-American players of Canadian football American football cornerbacks Arizona Outlaws players BYU Cougars football coaches California Golden Bears football coaches Canadian football defensive backs Edmonton Elks players Jacksonville Jaguars coaches New England Patriots coaches New Mexico State Aggies football coaches New York Giants coaches Minnesota Golden Gophers football players Mt. SAC Mounties football coaches Oakland Invaders players Oklahoma State Cowboys football coaches Pasadena City Lancers football players UCLA Bruins football coaches USC Trojans football coaches Utah State Aggies football coaches Excelsior College alumni John Muir High School alumni Players of American football from Los Angeles Players of Canadian football from Los Angeles Sports coaches from Los Angeles
1078205
https://en.wikipedia.org/wiki/Central%20Authentication%20Service
Central Authentication Service
The Central Authentication Service (CAS) is a single sign-on protocol for the web. Its purpose is to permit a user to access multiple applications while providing their credentials (such as user ID and password) only once. It also allows web applications to authenticate users without gaining access to a user's security credentials, such as a password. The name CAS also refers to a software package that implements this protocol. Description The CAS protocol involves at least three parties: a client web browser, the web application requesting authentication, and the CAS server. It may also involve a back-end service, such as a database server, that does not have its own HTTP interface but communicates with a web application. When the client visits an application requiring authentication, the application redirects it to CAS. CAS validates the client's authenticity, usually by checking a username and password against a back-end authentication source (such as Kerberos, LDAP or Active Directory). If the authentication succeeds, CAS returns the client to the application, passing along a service ticket. The application then validates the ticket by contacting CAS over a secure connection and providing its own service identifier and the ticket. CAS then gives the application trusted information about whether a particular user has successfully authenticated; a minimal sketch of this validation exchange is given below. CAS allows multi-tier authentication via proxy address. A cooperating back-end service, like a database or mail server, can participate in CAS, validating the authenticity of users via information it receives from web applications. Thus, a webmail client and a webmail server can both implement CAS. History CAS was conceived and developed by Shawn Bayern of Yale University Technology and Planning. It was later maintained by Drew Mazurek at Yale. CAS 1.0 implemented single sign-on. CAS 2.0 introduced multi-tier proxy authentication. Several other CAS distributions have been developed with new features. In December 2004, CAS became a project of the Java in Administration Special Interest Group (JASIG), which is as of 2008 responsible for its maintenance and development. Formerly called "Yale CAS", CAS is now also known as "Jasig CAS". In 2010, Jasig entered into talks with the Sakai Foundation to merge the two organizations. The two organizations were consolidated as the Apereo Foundation in December 2012. In December 2006, the Andrew W. Mellon Foundation awarded Yale its First Annual Mellon Award for Technology Collaboration, in the amount of $50,000, for Yale's development of CAS. At the time of that award, CAS was in use at "hundreds of university campuses (among other beneficiaries)". In April 2013, CAS Protocol specification 3.0 was released. Implementation Apereo CAS Implementation The Apereo CAS server, the reference implementation of the CAS protocol, today supports the following features: CAS v1, v2 and v3 Protocol SAML v1 and v2 Protocol OAuth Protocol OpenID & OpenID Connect Protocol WS-Federation Passive Requestor Protocol Authentication via JAAS, LDAP, RDBMS, X.509, Radius, SPNEGO, JWT, Remote, Trusted, BASIC, Apache Shiro, MongoDB, Pac4J and more. Delegated authentication to WS-FED, Facebook, Twitter, SAML IdP, OpenID, OpenID Connect, CAS and more. Authorization via ABAC, Time/Date, REST, Internet2's Grouper and more. HA clustered deployments via Hazelcast, Ehcache, JPA, Memcached, Apache Ignite, MongoDB, Redis, Couchbase and more. Application registration backed by JSON, LDAP, YAML, JPA, Couchbase, MongoDB and more. Multifactor authentication via Duo Security, SAASPASS, YubiKey, RSA, Google Authenticator (TOTP) and more. Administrative UIs to manage logging, monitoring, statistics, configuration, client registration and more. Global and per-application user interface theme and branding. Password management and password policy enforcement.
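The validation exchange described in the Description section can be sketched concretely. The following Python fragment is illustrative only and is not part of any CAS distribution: the server URL is a placeholder, and it targets the CAS 2.0 /serviceValidate endpoint, whose XML response uses the http://www.yale.edu/tp/cas namespace.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

CAS_SERVER = "https://cas.example.org/cas"  # placeholder CAS base URL
CAS_NS = {"cas": "http://www.yale.edu/tp/cas"}

def validate_service_ticket(service_url, ticket):
    """Validate a CAS 2.0 service ticket; return the username, or None."""
    query = urllib.parse.urlencode({"service": service_url, "ticket": ticket})
    with urllib.request.urlopen(f"{CAS_SERVER}/serviceValidate?{query}") as resp:
        root = ET.fromstring(resp.read())
    success = root.find("cas:authenticationSuccess", CAS_NS)
    if success is not None:
        return success.findtext("cas:user", namespaces=CAS_NS)
    return None  # the response carried <cas:authenticationFailure> instead

# Typical flow: the application redirects an unauthenticated browser to
#   {CAS_SERVER}/login?service=<service_url>
# and CAS redirects back with ?ticket=ST-..., which is validated above.
```

A production integration would also handle network errors and inspect the individual <cas:authenticationFailure> error codes rather than collapsing them to None.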
Django Implementation Django CAS Server django-mama-cas: a Django Central Authentication Service (CAS) single sign-on server Django CAS Client django-cas-ng: a Django CAS 1.0/2.0/3.0 client authentication library, supporting Django 2.0, 2.1, 2.2, 3.0 and Python 3.5+ See also CoSign single sign on JOSSO List of single sign-on implementations OpenAM OpenID Pubcookie SAML SAML-based products and services Shibboleth (Shibboleth Consortium) References External links Stanford WebAuth University of Minnesota CookieAuth Apereo CAS Project django-mama-cas django-cas-ng Java platform software Free security software Computer access control protocols Access control software
500991
https://en.wikipedia.org/wiki/Mac%20OS%20X%2010.0
Mac OS X 10.0
Mac OS X 10.0 (code named Cheetah) is the first major release and version of macOS, Apple's desktop and server operating system. Mac OS X 10.0 was released on March 24, 2001 for a price of US$129. It was the successor of the Mac OS X Public Beta and the predecessor of Mac OS X 10.1 (code named Puma). Mac OS X 10.0 was a radical departure from the classic Mac OS and was Apple's long-awaited answer for a next-generation Macintosh operating system. It introduced a brand new code base, completely separate from that of Mac OS 9 and all previous Apple operating systems, and had a new Unix-like core, Darwin, which featured a new memory management system. Unlike subsequent releases starting with Mac OS X 10.2, Mac OS X 10.0 was not externally marketed with its codename. System requirements Supported computers: Power Macintosh G3 Beige, G3 B&W, G4, G4 Cube, iMac, PowerBook G3, PowerBook G4, iBook RAM: 128 MB (unofficially 64 MB minimum) Hard drive space: 1,500 MB (800 MB for the minimal install) Features Dock — the Dock was a new way of organizing one's Mac OS X applications on a user interface, and a change from the classic method of application launching in previous Mac OS systems. OSFMK 7.3 — the Open Software Foundation's Mach kernel, which was part of the XNU kernel for Mac OS X, and was one of the largest changes from a technical standpoint in Mac OS X. Terminal — the Terminal was a feature that allowed access to Mac OS X's underpinnings, namely the Unix core. Mac OS had previously had the distinction of being one of the few operating systems with no command line interface at all. Mail — email client. Address Book TextEdit — new on-board word processor, replacement for SimpleText. Full preemptive multitasking support, a long-awaited feature on the Mac. PDF support (create PDFs from any application) Aqua UI — new user interface Built on Darwin, a Unix-like operating system. OpenGL AppleScript Support for Carbon and Cocoa APIs Sherlock — desktop and web search engine. Protected memory — memory protection so that if an application corrupts its memory, the memory of other applications will not be corrupted. Limitations File-sharing client — The system can only use TCP/IP, not AppleTalk, to connect to servers sharing the Apple Filing Protocol. It cannot use SMB to connect to Windows or Samba servers. File-sharing server — As a server, the system can share files using only the Apple Filing Protocol (over TCP/IP), HTTP, SSH, and FTP. Optical media — DVD playback is not supported, and CDs cannot be burned. Multilingual snags Mac OS X 10.0 began a short era (which ended with Mac OS X 10.2 Jaguar's release) in which Apple offered two types of installation CDs: 1Z and 2Z CDs. The difference between the two lay in the extent of multilingual support. Input method editors for Simplified Chinese, Traditional Chinese, and Korean were only included with the 2Z CDs. They also came with more languages (the full set of 15 languages), whereas the 1Z CDs came only with about eight languages and could not actually display Simplified Chinese, Traditional Chinese or Korean (except for the Chinese characters present in Japanese kanji). A variant of the 2Z CDs was introduced when Mac OS X v10.0.3 was released to the Asian market (this variant could not be upgraded to version 10.0.4). The brief period of multilingual confusion ended with the release of v10.2. Since then, all Mac OS X installer CDs and preinstallations have included the full set of 15 languages and full multilingual compatibility.
Release history References External links Mac OS X v10.0 review at Ars Technica from apple.com 2001 software PowerPC operating systems
937602
https://en.wikipedia.org/wiki/STUN
STUN
STUN (Session Traversal Utilities for NAT; originally Simple Traversal of User Datagram Protocol (UDP) through Network Address Translators) is a standardized set of methods, including a network protocol, for traversal of network address translator (NAT) gateways in applications of real-time voice, video, messaging, and other interactive communications. STUN is a tool used by other protocols, such as Interactive Connectivity Establishment (ICE), the Session Initiation Protocol (SIP), and WebRTC. It provides a tool for hosts to discover the presence of a network address translator, and to discover the mapped, usually public, Internet Protocol (IP) address and port number that the NAT has allocated for the application's User Datagram Protocol (UDP) flows to remote hosts. The protocol requires assistance from a third-party network server (STUN server) located on the opposing (public) side of the NAT, usually the public Internet. STUN was first announced in RFC 3489; the title was changed in a specification of an updated set of methods published as RFC 5389, retaining the same acronym. History STUN was first announced in RFC 3489. The original specification defined an algorithm to characterize NAT behavior according to its address and port mapping behavior. This algorithm is not reliably successful, and is only applicable to a subset of the NAT devices deployed. The algorithm consists of a series of tests to be performed by an application; depending on the path taken through the tests, the client determines whether, and in what form, UDP communication through the NAT is possible. The methods of RFC 3489 proved too unreliable to cope with the plethora of different NAT implementations and application scenarios encountered in production networks. The STUN protocol and method were therefore updated in RFC 5389, retaining many of the original specifications as a subset of methods, but removing others. Design STUN is a tool for communications protocols to detect and traverse network address translators that are located in the path between two endpoints of communication. It is implemented as a light-weight client–server protocol, requiring only simple query and response components with a third-party server located on the common, easily accessible network, typically the Internet. The client side is implemented in the user's communications application, such as a Voice over Internet Protocol (VoIP) phone or an instant messaging client. The basic protocol operates essentially as follows: the client, typically operating inside a private network, sends a binding request to a STUN server on the public Internet. The STUN server responds with a success response that contains the IP address and port number of the client, as observed from the server's perspective. The result is obfuscated through exclusive or (XOR) mapping to avoid translation of the packet content by application layer gateways (ALGs) that perform deep packet inspection in an attempt to perform alternate NAT traversal methods. STUN messages are sent in User Datagram Protocol (UDP) packets. Since UDP does not provide reliable transport, reliability is achieved by application-controlled retransmissions of the STUN requests. STUN servers do not implement any reliability mechanism for their responses. When reliability is mandatory, the Transmission Control Protocol (TCP) may be used, but this induces extra networking overhead.
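The binding exchange described above is compact enough to sketch directly. The following Python fragment is an illustrative client rather than a normative implementation; the server name in the usage comment is a placeholder, and error handling is reduced to assertions.

```python
import os
import socket
import struct

MAGIC_COOKIE = 0x2112A442  # fixed value from RFC 5389

def stun_binding_request(server, port=3478, timeout=2.0):
    """Send a Binding request over UDP; return the reflexive (ip, port)."""
    txn_id = os.urandom(12)
    # 20-byte header: type 0x0001 (Binding request), length 0, cookie, txn id
    request = struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, port))
        data, _ = sock.recvfrom(2048)
    msg_type, msg_len, cookie = struct.unpack("!HHI", data[:8])
    assert msg_type == 0x0101 and cookie == MAGIC_COOKIE and data[8:20] == txn_id
    pos = 20
    while pos < 20 + msg_len:                  # walk the attribute list
        attr_type, attr_len = struct.unpack("!HH", data[pos:pos + 4])
        value = data[pos + 4:pos + 4 + attr_len]
        if attr_type == 0x0020:                # XOR-MAPPED-ADDRESS, IPv4 case
            xport = struct.unpack("!H", value[2:4])[0] ^ (MAGIC_COOKIE >> 16)
            xaddr = struct.unpack("!I", value[4:8])[0] ^ MAGIC_COOKIE
            return socket.inet_ntoa(struct.pack("!I", xaddr)), xport
        pos += 4 + attr_len + (-attr_len % 4)  # attributes pad to 4 bytes
    raise ValueError("no XOR-MAPPED-ADDRESS attribute in response")

# Example (the host name is a placeholder):
# print(stun_binding_request("stun.example.org"))
```

The XOR step at the end undoes the obfuscation the server applies, recovering the reflexive transport address that the NAT allocated for the flow.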
In security-sensitive applications, STUN may be transported and encrypted by Transport Layer Security (TLS). An application may automatically determine a suitable STUN server for communications with a particular peer by querying the Domain Name System (DNS) for the stun (for UDP) or stuns (for TCP/TLS) server (SRV) resource record, e.g., _stun._udp.example.com; a brief lookup sketch is given below. The standard listening port number for a STUN server is 3478 for UDP and TCP, and 5349 for TLS. Alternatively, TLS may also be run on the TCP port if the server implementation can de-multiplex TLS and STUN packets. If no STUN server is found using DNS lookups, the standard recommends that the destination domain name should be queried for address records (A or AAAA), which would be used with the default port numbers. In addition to using protocol encryption with TLS, STUN also has built-in authentication and message-integrity mechanisms via specialized STUN packet types. When a client has discovered its external address, it can use this as a candidate for communicating with peers by sharing the external NAT address rather than the private address, which is not reachable from peers on the public network. If both communicating peers are located in different private networks, each behind a NAT, the peers must coordinate to determine the best communication path between them. Some NAT behavior may restrict peer connectivity even when the public binding is known. The Interactive Connectivity Establishment (ICE) protocol provides a structured mechanism to determine the optimal communication path between two peers. Session Initiation Protocol (SIP) extensions are defined to enable the use of ICE when setting up a call between two hosts. Limitations Network address translation is implemented via a number of different address and port mapping schemes, none of which is standardized. STUN is not a self-contained NAT traversal solution applicable in all NAT deployment scenarios, and it does not work correctly with all of them. It is one tool among other methods, and a building block for other protocols dealing with NAT traversal, most notably Traversal Using Relay NAT (TURN) and Interactive Connectivity Establishment (ICE). STUN works with three types of NAT: full cone NAT, restricted cone NAT, and port restricted cone NAT. In the cases of restricted cone or port restricted cone NATs, the client must send out a packet to the endpoint before the NAT will allow packets from the endpoint through to the client. STUN does not work with symmetric NAT (also known as bi-directional NAT), which is often found in the networks of large companies. Since the IP address of the STUN server differs from that of the peer endpoint, the NAT mapping in the symmetric case will be different for the STUN server than for the peer, so the address learned via STUN does not help reach that peer. TURN offers better results with symmetric NAT. See also Port Control Protocol UDP hole punching Internet Gateway Device Protocol References External links STUNTMAN - Open source STUN server software What are Session Traversal Utilities for NAT (STUN) and Traversal Using Relays around NAT (TURN)? at callstats.io Session Traversal Utilities for NAT (STUN) at HCL Software Application layer protocols Network address translation Voice over IP
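The SRV-based server discovery described above can be sketched as well. This fragment assumes the third-party dnspython package and a domain that actually publishes the record; the priority-only selection is a simplification of the full SRV selection rules.

```python
import dns.resolver  # third-party package "dnspython"

def discover_stun_server(domain):
    """Resolve _stun._udp.<domain> (SRV) to a (host, port) pair."""
    answers = dns.resolver.resolve(f"_stun._udp.{domain}", "SRV")
    # A complete client orders records by priority and then chooses
    # proportionally by weight; the lowest priority suffices for a sketch.
    best = min(answers, key=lambda record: record.priority)
    return str(best.target).rstrip("."), best.port

# Per the standard, a client falls back to A/AAAA records and the default
# port 3478 when no SRV record is found; that fallback is omitted here.
```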
61892860
https://en.wikipedia.org/wiki/Volker%20Diehl
Volker Diehl
Volker Diehl (born 28 May 1957 in Neheim-Hüsten) is a German gallery owner. He mainly exhibits contemporary art in the gallery "DIEHL" (Berlin). Biography and career After graduating from high school in Warstein in 1977, Volker Diehl first studied at the Kunstakademie Münster under Hans-Jürgen Breuste, and from 1978 art history at the Free University of Berlin. In West Berlin, he supported various artists as part of the DAAD Artists-in-Berlin Program (including Markus Raetz, André Thomkins, Wolf Vostell, Dieter Hacker) and got to know René Block in this context. At the exhibition "Für Augen und Ohren" curated by Block, which was first shown at the Academy of Arts, Berlin, and then at the Musée d'Art Moderne de la Ville de Paris, he was also responsible for the support of artists and thus got to know Joseph Beuys, Nam June Paik, Joe Jones, and many other artists. A little later he became assistant to Shigeko Kubota and ran her studio. From 1981 to 1983 he was assistant to Christos M. Joachimides and Norman Rosenthal. In this context, he supported the artists and worked as a personal assistant for the exhibition Zeitgeist (1982), which was "arguably one of the most historically significant global painting surveys of the 20th century". Together with Roland Hagenberg, he subsequently published the two books Maler in Berlin (1982) and the sequel ... Und (1983) in their own publishing house "HAPPY-HAPPY", which contained numerous interviews with and portraits of artists and collectors, among them Anselm Kiefer, Georg Baselitz and Erich Marx, as well as representatives of the art groups Neue Wilde and Arte Cifra. With Roland Hagenberg he traveled to New York City, where they conducted interviews with Andy Warhol, Robert Mapplethorpe, Keith Haring, Francesco Clemente, Julian Schnabel, Robert Morris, Jean-Michel Basquiat, Leo Castelli, Ileana Sonnabend, Mary Boone, Tony Shafrazi, and many more. The tapes used later turned out to be defective, so the interviews were never published. Activities as gallery owner In 1983 he began to curate exhibitions at the "Galerie Folker Skulima" in Berlin as a junior partner and showed young, contemporary artists including Jaume Plensa, Rosemarie Trockel, Leiko Ikemura, Sergey Volkov, Ray Smith and Martin Assig. In September 1990 he took over the rooms at Niebuhrstr. 2 with the founding of "Galerie Volker Diehl". In 2000 he moved to new rooms at Zimmerstr. in Berlin-Mitte, and in 2007 to Lindenstraße in the Kreuzberg district. In autumn 2011 the gallery moved back to the former space at Niebuhrstraße in Berlin-Charlottenburg. In September 2013, a project space was added under the name "Diehl Cube" in Emser Straße in Berlin-Wilmersdorf, in which exhibitions were shown until 2018. In addition, Diehl was the first western gallery owner to open his own exhibition space in Moscow, under the name "Diehl + Gallery One", in April 2008. In the former premises of the state Soviet art trade at Smolenskaja No. 5/13, Diehl exhibited as the first project the works of the American artist Jenny Holzer under the title Like truth, from April 17 to June 15, 2008. After further exhibitions by Wim Delvoye, Zhang Huan, Jaume Plensa and Olga Chernysheva, the Moscow branch closed again at the end of 2009. Under the name "Diehl Projects" Diehl was responsible for further projects, first around 2000 and 2007/2008 in Berlin, later for the Russian artist Olga Chernysheva's exhibition Adventure Istiklal N.
9 in the "Yapi Kredi Kazim Taskent Art Gallery" in Istanbul (2009) and two group exhibitions in Rostov-on-Don with the titles Berlin tut gut! and Pubblico – Privato (spring 2012). Other projects In 1996 he and 13 other gallery owners were founding members of the art fair "art forum berlin" and, together with Rudolf Kicken, he also managed the business of the company until 2001. It was internationally the first art fair devoted exclusively to contemporary art, and the first fair in the world to be conceived and conducted by gallery owners. With Margarita Pushkina and Vlad Ovcharenko, he established the Russian art fair "Cosmoscow" in 2010 with an "all-inclusive concept". In 2016, together with Elena Sereda and Natalia Chagoubatova, he also founded the London pop-up company "Art Circle" (curation by Bettina Ruhrberg, among others). Exhibitions (selection) 1983–1990 in Galerie Folker Skulima Georg Baselitz, 1983 Karl Horst Hödicke, 1983 Helmut Middendorf, 1984 Emil Schumacher, 1984 Christian Hasucha, 1984 Klaus Karl Mehrkens, 1985 Erwin Bohatsch, 1985 Gianni Dessi, 1985 Leiko Ikemura, 1985 Pizzi Cannella, 1985 Rosemarie Trockel, 1985/1986 Erwin Bohatsch, 1986 William Turnbull, 1987 Gotthard Graubner, 1988 Christian Ludwig Attersee, Errötende Tiere, 1989 Martin Assig, 1990 Gerhard Merz, 1990 Gerhard Richter, Grafik und Multiples 1966–1989, 1990 Galerie Volker Diehl in Berlin Markus Lüpertz, 1990 Martin Assig, 1990 Donald Judd, 1991 David Deutsch, 1991 Claudia Hart, 1992 Walter Dahn, A. R. Penck, Rosemarie Trockel, Collectiv No. 1, 1992/1993 Angela Dwyer, Neue Bilder, 1995 Christoph M. Gais, 1995 Leiko Ikemura, 1995 Group show: Magnus von Plessen, Fergus Bremner, 1996 Martin Assig, 1997 Jaume Plensa, Projekte auf Papier, 1997 Jaume Plensa, Wie ein Hauch, 1997 Magnus von Plessen, S.A.L.I.G.I.A., 1997 John Noel Smith, 1998 Matthias Müller, Film und Photographie, 2001 Alice Stepanek, Steven Maslin, Bodenlos, 2002 Jaume Plensa, Crown Fountain, 2002 Birgit Dieker, Kardio, 2002/2003 Zhang Huan, Photographie und Video, 2003 Frauke Eigen, Illusion Allusion, 2004/2005 The Blue Noses Group, (Slava Mizin and Sasha Shaburov from Novosibirsk) The Blue Noses, 2005/2006 Susan Hiller, Outlaws and Curiosities, 2006 Martin Borowski, Homestory, 2007 Hye Rim Lee, Crystal City, 2008 Alexei Wiktorowitsch Kallima, Closed Party, 2008/2009 Olga Chernysheva, Caesuras, 2009 Susan Hefuna, Hefuna / Hefuna, 2009 Thomas Florschuetz, Durchsicht, 2012 Sergey Bratkov, Chapiteau Moscow, 2013 Grazia Varisco, Se… 1959–2014, 2014 Turi Simeti. Alcamo, 2015 Tomás Maldonado, Werke/Opere 2000–2015, 2015 Alliance 22, Monochromia, 2016 Amélie Grözinger, Solid Matter 2.0, 2017 Simon English, I am not Justin Beiber, 2017/2018 KP Brehmer, Zweimal täglich Zähneputzen, 2018/2019 Diehl + Gallery One in Moscow Jenny Holzer, Like truth, 2008 Wim Delvoye, New Works, 2008 Group show: Laughterlife – New Art from Russia and Central Asia, 2008 Zhang Huan, Paintings and Sculptures, 2008/2009 Jaume Plensa, Silent Music, 2009 David Ter-Oganyan, Aleksandra Galkina, Scale, 2009 Group show: Glasnost. Soviet Non-Conformist Art from the 1980s, 2010 in cooperation with Baibakov Art Projects: Olga Chernysheva, Participation in Modernikon – Contemporary Art from Russia, 2011 Diehl Cube Constantin Flondor, Über einige Flächen mit gemeinsamen Ebenen, 2013 Gonn Mosny, Above the Line – Atmen und Malen, 2013/2014 Homage to Lucio Amelio, The Early Hacker 1960–70, 2014 Carla Guagliardi, Fuga, 2014 Mark Lammert, Floaters, 2015 James Lee Byars, The Secret Archive.
Dieter Hacker Collection. Curated by Mark Gisbourne, 2014 Nanda Vigo, Zero in the Mirror, 2015 Ivan Gorshokov, The Way of King’s Pie, 2015 Anastasia Khoroshilova, Die Übrigen, 2015 Alexander Rodchenko, Jump, 2017 Hartmut Böhm, Objects in Dialogue, 2017 Publications (selection) Christos M. Joachimides (Ed.): Zeitgeist. Edited by Ursula Prinz and Volker Diehl. Berlin: Frölich & Kaufmann, 1982 (in German). Volker Diehl, Roland Hagenberg (Eds.): Maler in Berlin. Berlin: HAPPY-HAPPY, [1982] (in German). Volker Diehl, Roland Hagenberg (Eds.): ... Und. Berlin: HAPPY-HAPPY, [1983] (in German). Volker Diehl (Ed.): Martin Assig. Berlin: Galerie Volker Diehl, 1990 (in German). Andy Warhol: Flowers. New York, Berlin: Stellan Holm Gallery, Galerie Volker Diehl, 1994. Galerie Großinsky & Brümmer, Galerie Volker Diehl (Eds.): Martin Assig. Karlsruhe and Berlin: Galerie Großinsky & Brümmer and Galerie Volker Diehl, 1997 (in German). Jaume Plensa: Wie ein Hauch. Berlin: Galerie Volker Diehl, 1998 (in German). Susan Hiller: The curiosities of Sigmund Freud. Berlin: Galerie Volker Diehl, 2006. Zhang Huan: Drawings – On the Occasion of the Exhibition Zhang Huan – Drawings, at Galerie Volker Diehl, Berlin, March 10 to April 10, 2007. München: Schirmer/Mosel, 2007. Martin Borowski: Homestory Visitation. Berlin: Galerie Volker Diehl, 2007. Ling Jian: The Last Idealism. Berlin: Galerie Volker Diehl, 2007. Martin Assig: Westwerk Havelhaus. Berlin and München: Galerie Volker Diehl and Schirmer/Mosel, 2008 (in German). Olga Chernysheva, Boris Groys: Caesuras. Berlin: Galerie Volker Diehl, 2009. Joseph Backstein, Ekaterina Degot, Boris Groys: Glasnost – Soviet Non-Conformist Art from the 1980s. Haunch of Venison; London: Galerie Volker Diehl, 2010. Christian Megert: Licht und Bewegung. Berlin: Galerie Volker Diehl, 2013 (in German). Peter Sedgley: Singing Light. Berlin: Diehl, 2014. Rolf-Gunter Dienst: Primavera. Berlin: Volker Diehl, Allegra Ravizza, 2015. Ralf Hanselle, Volker Diehl, Stefan Heyne: Prime Time – archetypes of abstraction in photography. Berlin: Galerie Volker Diehl, 2016. Julia Nefedova, Lena Vazhenina: Internet doesn’t allow me to forget you. Berlin: Galerie Volker Diehl, 2016. Simon English: My Big Self Decoy Justin Beiber. London: Black Dog Publishing, 2017. Tiberiy Szilvashi: Rembrandt-Zoom – Melancholie als Alchemie der Malerei. Published by Volker Diehl. Berlin: ciconia ciconia, 2019 (in German). Further reading Julika Kehb, Natascha Kirchner, Esther Knuth: Galerieprofil Volker Diehl – Härte und Brutalität. In: KUNST Magazin 1307/08, pp. 12–17 (in German). Odrija Fišere: At the end, the artwork survives: An interview with German gallerist Volker Diehl. In: Arterritory.com, 12 August 2016. References External links Galerie Volker Diehl website Galerie Volker Diehl on ArtFacts.Net 1957 births German art dealers Living people Contemporary art galleries in Germany Art galleries established in 1990
39770
https://en.wikipedia.org/wiki/IBM%20System%20p
IBM System p
The IBM System p is a high-end line of RISC (Power)/UNIX-based servers. It was the successor of the RS/6000 line, and the predecessor of the IBM Power Systems server series. History The previous RS/6000 line was originally a line of workstations and servers. The first System p server line was named the eServer pSeries in 2000 as part of the e-Server branding initiative. In 2004, with the advent of the POWER5 processor, the server family was rebranded the eServer p5. In 2005, following IBM's move to streamline its server and storage brands worldwide, and incorporating the "System" brand with the Systems Agenda, the family was again renamed to System p5. The System p5 now encompassed the IBM OpenPower product line. In 2007, after the introduction of the POWER6 processor models, the last rename under the System p brand dropped the p (numbered) designation. In April 2008, IBM announced a rebranding of the System p and its unification with the mid-range System i platform. The resulting product line was called IBM Power Systems. Hardware and software Processors Whereas the previous RS/6000 line used a mix of early POWER and PowerPC processors, when pSeries came along, this had evolved into RS64-III and POWER3 across the board—POWER3 for its excellent floating-point performance and RS64 for its scalability, throughput, and integer performance. IBM developed the POWER4 processor to replace both POWER3 and the RS64 line in 2001. After that, the differences between throughput- and number-crunching-optimized systems no longer existed. Since then, System p machines evolved to use the POWER5, but also the PowerPC 970 for the low-end and blade systems. The last System p systems used the POWER6 processor, such as the POWER6-based System p 570 and the JS22 blade. In addition, during the SuperComputing 2007 (SC07) conference in Reno, IBM introduced a new POWER6-based System p 575 with 32 POWER6 cores at 4.7 GHz and up to 256 GB of RAM, with water cooling. Features All IBM System p5 and IBM eServer p5 machines support DLPAR (Dynamic Logical Partitioning) with Virtual I/O and Micro-partitioning. System p generally uses the AIX operating system and, more recently, 64-bit versions of the Linux operating system. Models BladeCenter IBM BladeCenter JS12 (POWER6) IBM BladeCenter JS22 (POWER6) IBM BladeCenter JS23 (POWER6) IBM BladeCenter JS43 (POWER6) Main line eServer pSeries IBM eServer pSeries 610 (7028-6C1 & 6E1) IBM eServer pSeries 615 (7029-6C3, 7029-6E3) (1~2-core POWER4+ CPU) IBM eServer pSeries 620 (7025-F80, 6F0 & 6F1) (1~3 2-core RS64-IV CPUs) IBM eServer pSeries 630 (7028-6C4, 7028-6E4) (1 1-core POWER4 CPU or 1~2 2-core POWER4 CPUs) IBM eServer pSeries 640 (7026-B80) 1-4 POWER3-II CPUs IBM eServer pSeries 650 (7038-6M2) 2-8 POWER4 CPUs IBM eServer pSeries 655 (7039-651) 4-8 POWER4 CPUs IBM eServer pSeries 660 (7026-H80, 6H0, 6H1, M80 & 6M1) IBM eServer pSeries 670 (7040-671) 4-16 POWER4 CPUs IBM eServer pSeries 680 (7017 range) IBM eServer pSeries 690 (7040-681) 8-32 POWER4 CPUs The IBM p690 was, at the time of its release in late 2001, the flagship of IBM's high-end Unix servers during the POWER4 era of processors. It was built to run IBM AIX Unix, although it is possible to run a version of Linux minus some POWER4-specific features. It could support up to 32 (1.5, 1.7 or 1.9 GHz) POWER4+ processors and 1 TB of RAM; a full system weighs well over 1000 kg. It was used in a supercomputer at Forschungszentrum Jülich in 2004, and was discontinued in late 2005. eServer p5 Released in 2004.
IBM eServer p5 510 Express (9111-510) (1~2-core 1.5GHz POWER5 CPU) IBM eServer p5 510 (9111-510) (1~2-core 1.65GHz POWER5 CPU) IBM eServer p5 520 Express (9111-520) (1~2-core 1.5GHz POWER5 CPU) IBM eServer p5 520 (9111-520) (2-core 1.65GHz POWER5 CPU) IBM eServer p5 550 Express (9113-550) (1~2 1~2-core 1.5GHz POWER5 CPUs) IBM eServer p5 550 (9113-550) (1~2 2-core 1.65GHz POWER5 CPUs) IBM eServer p5 570 Express (9117-570) (1~8 2-core 1.5GHz POWER5 CPUs) IBM eServer p5 570 (9117-570) (1~8 2-core 1.65GHz or 1.9GHz POWER5 CPUs) IBM eServer p5 590 (9119-590) (1~4 8-core 1.65GHz POWER5 MCMs) IBM eServer p5 595 (9119-595) (2, 4, 6 or 8 8-core 1.65GHz or 1.9GHz POWER5 MCMs) System p5 IBM System p5 185 (7037-A50) (1~2-core PowerPC 970 CPU) IBM System p5 505 (9115-505) (1~2-core POWER5 or POWER5+ CPU) IBM System p5 505Q (9115-505) (4-core POWER5+ CPU) IBM System p5 510 (9110-51A) (1~2 1~2-core POWER5 or POWER5+ CPUs) IBM System p5 510Q (9110-51A) (1~2 4-core POWER5+ CPUs) IBM System p5 520 (9131-52A) (1~2-core POWER5+ CPU) IBM System p5 520Q (9131-52A) (4-core POWER5+ CPU) IBM System p5 550 (9133-55A) (1~2 2-core POWER5+ CPUs) IBM System p5 550Q (9133-55A) (1~2 4-core POWER5+ CPUs) IBM System p5 560Q (9116-561) (1, 2 or 4 4-core POWER5+ CPUs) IBM System p5 570 (9117-570) (1~8 2-core POWER5+ CPUs) IBM System p5 575 (9118-575) (8 1~2-core POWER5+ CPUs) IBM System p5 590 (9119-590) (1~2 16-core POWER5 or POWER5+ processor books) IBM System p5 595 (9119-595) (1~4 16-core POWER5 or POWER5+ processor books) System p IBM System p 520 Express (1, 2 or 4-core POWER6 CPU) IBM System p 550 Express (1~4 2-core POWER6 CPUs) IBM System p 560 Express (POWER6) IBM System p 570 (POWER6) IBM System p 575 (POWER6) IBM System p 595 (9119-FHA) (1~8 8-core POWER6 processor books) System p was rebranded to Power Systems in 2008. OpenPower OpenPower was the name of a range of servers in the System p line from IBM. They featured IBM's POWER5 CPUs and ran only 64-bit versions of Linux. IBM's own UNIX variant, AIX, was not supported, since the OpenPower servers were not licensed for this operating system. There were two models available, with a variety of configurations. Before 2005, OpenPower belonged to the eServer product line but was eventually rolled into IBM's Power Systems product portfolio. IBM eServer OpenPower 710 (9123-710) (1~2-core POWER5 CPU) IBM eServer OpenPower 720 (9124-720) 1-4 POWER5 CPUs IntelliStation POWER IBM IntelliStation POWER 265 IBM IntelliStation POWER 275 IBM IntelliStation POWER 185 (PowerPC 970) IBM IntelliStation POWER 285 BladeCenter IBM BladeCenter JS20 (PowerPC 970) IBM BladeCenter JS21 (PowerPC 970) See also Web-based System Manager, AIX management software IBM Hardware Management Console, a management appliance Dynamic Logical Partitioning Linux on Power IBM IntelliStation POWER PureSystems List of IBM products References External links IBM Power Systems product page IBM's System Agenda Virtualizing an Infrastructure with System p and Linux System p
4543689
https://en.wikipedia.org/wiki/Apple%20Developer%20Tools
Apple Developer Tools
The Apple Developer Tools are a suite of software tools from Apple to aid in developing software for the macOS and iOS platforms. The developer tools were formerly included on macOS install media, but are now exclusively distributed over the Internet. As of macOS 10.12, Xcode is available as a free download from the Mac App Store. Applications Applet Launcher A graphical interface for Apple's Java plug-in, which aids developers by demonstrating how Java applets perform on macOS. Provides tools to adjust the performance, behavior and user experience of applets in development. Audio Unit Lab A graphic presentation of audio units, helping software developers to examine their results in decibels. AU Lab can be used to test audio units, conduct live mixing, and play back audio content. Audio units are controlled visually with the audio unit’s graphical interface. Computer Hardware Understanding Development Tools A set of software tools, collectively called the Computer Hardware Understanding Development Tools (CHUD Tools), that measure software performance on macOS to aid in optimizing. Also provides hardware system benchmarks. Core Image Fun House Used in testing Core Image units, which function similarly to Adobe Photoshop filters. Each has a specific action, with parameters to customize the action. Showcases Core Image, a technology introduced in Mac OS X 10.4 and supported by newer graphics hardware. CrashReporterPrefs A developer utility for setting report parameters for Apple's Crash Reporter application. Basic: Shows a dialog asking the user what to do. Developer: Provides additional debug info and automatically shows the mail to Apple window. Server: Runs silent, but keeps all the logs. None: Disables the dialog prompt. Crash reports are neither displayed nor logged. FileMerge A staple of macOS's developer tools since the days of NeXTSTEP, FileMerge graphically compares two or more versions of a file. True to its name, FileMerge allows the user to easily merge the two or more versions into one file. The utility is often used to track changes to source code. macOS's opendiff command provides the ability to launch FileMerge from the command line. The -ancestor parameter can be used for three-way merging. Help Indexer Creates an index file for the macOS built-in Help Viewer. icns Browser Views the resources of an .icns file, displaying the Mini, Small, Large, Huge, Thumbnail & Tile sizes in all color depths and masks. Icon Composer Icon Composer was an icon editor that did not have any editing features other than composing Apple Icon Image files and Windows ICO files. External editors must do all the image manipulation; the results may then be imported into the converter to create the finished icon. As of Xcode 8.2, Icon Composer is no longer available in Additional Tools, as it cannot create high-resolution icons. Apple recommends using the command-line utility iconutil, which ships with macOS. Instruments Instruments is a GUI built on the DTrace tracing framework from Sun's OpenSolaris. It is used to profile time usage, memory allocations, system activity, call trace analysis, GPU performance analysis, energy logging (on iOS devices), etc. Jar Bundler A Java tool that aids in packaging an application’s component files into a single double-clickable application. Properties can be modified to optimize the code. MallocDebug Assistance for assessing memory usage and detecting memory leaks in programs.
Assesses an application's memory usage by monitoring a user as they interact with an application, which allows MallocDebug to build a memory profile, albeit one that is limited in size. OpenGL Driver Monitor Real-time access to the inner workings of the graphics processing unit. Runs locally or over a network using Bonjour, which is less likely to interfere with the statistics it is gathering. OpenGL Profiler This tool assists developers in debugging and optimizing OpenGL usage under macOS. Supported features: Launch or attach to an application Breakpoints and execution control Error detection including thread safety checks Scripts Buffer views Resource viewing/editing Statistics gathering OpenGL call traces with stack traces and timings OpenGL Shader Builder An integrated environment to develop and debug OpenGL GPU programs (shaders) under macOS. Features supported by OpenGL Shader Builder: Realtime entry Preview window with shaders applied to a textured plane, sphere or teapot Example shaders Syntax checking Debugging and analysis of vertex / fragment programs Export to Xcode One notable feature is 'Export to Xcode'. A sample Xcode project is created with C source code to initialize OpenGL (using the GLUT library) and run the shader program. Note that this program is no longer recommended for editing GLSL shaders, as "GLSLEditorSample," available as an example program, is generally regarded as superior. PackageMaker Creates .pkg installer bundles for installing applications using the Installer application. Pixie A magnifying glass application for magnifying small sections of the computer's screen, centered around the mouse cursor, giving the user a detailed view of the screen, as well as the pixel coordinates of the mouse. Provides several levels of zoom, "locking" the image under the mouse for closer examination, and saves the magnified image in one of several formats. Helps ensure visual elements are aligned precisely. Property List Editor Edits application preference plist files (a short programmatic sketch is given below). As of Xcode 4, Property List Editor is no longer included as a separate application and all editing of plist files is done within Xcode. The last stand-alone version was version 5.3 in Xcode 3.2.6. Quartz Composer A visual programming language for processing and rendering data. It uses OpenGL, Core Image, Core Video, and other technologies to build an API, and serves as a simple visual programming paradigm. Quartz Composer is a core technology of macOS. Quartz Composer creations work in any QuickTime-aware application (beginning with Mac OS X 10.4), from the Quartz Composer application, or embedded into Cocoa or Carbon applications. Quartz Composer has many similarities to Max/MSP, although its primary usage is for graphical rather than audio processing. It offers the ability to construct interactive video compositions that react to audio or MIDI signals and can be played from any QuickTime-aware application. Pierre-Olivier Latour originally developed the predecessor to Quartz Composer under the name PixelShox Studio. A resurgence in interest in Quartz Composer has come about, as the Facebook design team has been showcasing its use of the program to prototype interactions that could not otherwise be depicted with flat mockups in Photoshop. Repeat After Me Optimizes the performance of the built-in text-to-speech software for macOS.
Tests the operating system's phonemic translation engine, creates graphs of the generated tone to allow visual adjustment of the intonation, and records samples for reference. Shark Shark is a profiler, used by software developers to optimize software programs on macOS. It samples software at set time intervals (or driven by hardware performance monitor events), taking snapshots of the stack and showing the functions which consume the most of the application’s resources. It includes tools to analyze the data produced by a sampling run. Since Mac OS X 10.7, it is no longer available on the Apple site, having been replaced by Instruments. Spin Control Spin Control is a performance tool used for monitoring hang activity in software programs. The program gets its name from the spinning pinwheel on macOS. Discontinued as of Xcode 4.2. Thread Viewer Thread Viewer is a performance tool which graphically displays activity across a range of threads. It provides color-coded time-line views of thread activity and can display backtraces of activity at specific points in time. It was merged into the Instruments app, and can be accessed via the "System Trace" instrument. Xcode Xcode is an integrated development environment (IDE) for macOS containing a suite of software development tools developed by Apple for developing software for macOS, iOS, iPadOS, watchOS, and tvOS. Xcode supports developing source code for the programming languages C, C++, Objective-C, Objective-C++, Java, AppleScript, Python, Ruby, ResEdit (Rez), and Swift, with a variety of programming models, including but not limited to Cocoa, Carbon, and Java. References External links – official site at Apple Inc. MacOS programming tools
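Property list files of the kind Property List Editor (and later Xcode) edits can also be read and written programmatically, which illustrates what the format contains. The following Python sketch uses the standard-library plistlib module; the file name and preference key are hypothetical, chosen only for illustration.

```python
import plistlib

PLIST_PATH = "com.example.MyApp.plist"  # hypothetical preference file

# Read an existing property list into a Python dictionary.
with open(PLIST_PATH, "rb") as f:
    prefs = plistlib.load(f)

# Change one value and write the plist back out (XML form by default).
prefs["WindowWidth"] = 1024             # hypothetical key
with open(PLIST_PATH, "wb") as f:
    plistlib.dump(prefs, f)
```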
51096462
https://en.wikipedia.org/wiki/GPD%20Win
GPD Win
GPD Win is a Windows-based handheld computer equipped with a keyboard and gaming controls. It is an x86-based device which runs a full version of Windows 10 Home. The device was envisioned primarily with video game console emulation and PC gaming in mind, but it is capable of running any x86 Windows-based application that can run within the confines of the computer's technical specifications. First announced in October 2015, it was crowdfunded via Indiegogo and two other crowdfunding sites in Japan and China, and was released in October 2016. History GamePad Digital (GPD) is a technology company based in Shenzhen, China. Among other products, it has created several handheld video game consoles which run Android on the ARM architecture, such as the GPD XD. GPD Win was meant to be a way to play PC games, and to run PC-based video game console emulators and hypervisors (such as VMware and VirtualBox clients), on a handheld device. The intended appeal of the Win was that an x86 Windows handheld PC console offers far more PC and emulator gaming support than other architectures and operating systems that are widely used on mobile devices (such as Linux or Android on ARM hardware, or proprietary systems). GPD widely touts this ability on the device's Indiegogo page, with video demonstrations. GamePad Digital first pitched the idea of GPD Win to the community in October 2015 as a concept for market research, with further planning in November. By December, the physical design and hardware specifications were determined. By March 2016, initial prototypes were finished, debugged, and shipped to select sources. GPD started accepting pre-orders in June 2016 through several online retailers, including the Indiegogo page. Pre-order backers were offered the device for a discounted price of $330; the final retail price was originally estimated at $499 but settled at $330 after release. The initial stated goal was $100,000. By August 2016, a small batch was shipped to industry personnel, and by September, the pre-order promotional pricing ceased. GPD started shipping the final product by October 2016, with pre-order backers receiving their devices first. Software GPD Win runs Windows 10 Home. GPD stated that, per an April 2014 Microsoft decision, Windows is free on all devices with screens smaller than 9 inches. However, devices shipped to backers have a Windows 10 product key to input on initial boot and setup of the device. Unlike most Windows smartphones, GPD Win is able to run any x86 Windows application that can also be run on PC laptops and desktops. As of April 2017, several patches are available for the Linux kernel that allow mostly complete functionality of the Win with a full desktop Linux distribution like Ubuntu. There are also ways to get Android to work on GPD Win. Technical and physical specifications GPD Win has a full QWERTY keyboard, which includes 67 standard keys and 10 expanded function keys. For gaming, the controls are styled similarly to the OpenPandora and DragonBox Pyra keyboard and controller layout: one D-pad, two analog sticks, four face buttons, and four shoulder buttons (two on each shoulder). GPD Win was initially intended to use the Intel Atom x5-Z8500 Cherry Trail CPU. The graphics processor is an Intel HD Graphics integrated GPU with a base clock speed of 200 MHz and a turbo boost of up to 600 MHz. GPD Win uses 4GB of LPDDR3-1600 RAM, with 64GB of eMMC 4.51 storage. It has a single microSD slot that can officially support a maximum of 128GB of storage.
However, it can unofficially support a microSD card of 256GB. GPD Win is 15.5 × 9.7 cm in size. It has a 5.5-inch 1280×720 (720p) H-IPS multi-directional touch screen in 16:9 ratio, reinforced by Gorilla Glass 3. The audio system consists of a built-in speaker using the Realtek ALC5645 driver, and a microphone jack. It supports most popular audio, video, and image formats, including MP3, MP4, 3GP, WMV/WMA, FLAC, AVI, MOV, JPG, PNG, and BMP. GPD Win has a 6700mAh polymer lithium-ion battery with a USB-C charging interface (5 V/2.5 A). It has a stated ability to play 80 continuous hours of music or 6–8 hours of online video or online gaming. It is Bluetooth 4.0 and 802.11 b/g/n/ac (5 GHz and 2.4 GHz) Wi-Fi capable. GamePad Digital has continued to support GPD Win past release, taking user feedback into account, with driver and hardware updates. As of January 10, 2017, GPD revised the Win's hardware, providing a fix for Intel graphics driver stability issues, fixing the AC charging/boot-up bug (described in the reviews section), improving cooling, and improving the tactile feedback of the D-pad, buttons, and keyboard. This includes a software update that improves the buttons' responsiveness and changes the functionality of the built-in pointer.
Release and reception
GamePad Digital began shipping the GPD Win to backers in October 2016. JC Torres of Slashgear gave the Win a 7/10, stating that it has solid technical specs for its intended use, that it is ambitious for being a Windows 10-based handheld console in an industry dominated by Linux-based handhelds, and that it is well rounded with features. However, he also noted inconsistent build quality among models and mediocre sound quality ("loud, but low"). Ultimately, he called it an "exceptional device". Linus Sebastian made a video review of the GPD Win on his YouTube channel LinusTechTips. He complimented its gaming and multitasking capabilities, and was impressed with the hardware specs, design, and features overall (including more I/O ports and features than, for instance, a common MacBook). He did lament that the system had some flaws. Among them: the shoulder buttons felt cheaply assembled; the 5.5-inch 720p screen was not friendly for scaling; and the device had a bug where it could not boot while plugged into the AC adapter (otherwise it simply loaded the charging screen, so it had to be plugged back in only after system boot-up started; this issue has since been fixed in subsequent releases). His official verdict was that deciding whether it was worth the price was up to the user, and that the Win made him excited about the prospect of what UMPCs will be capable of in the near future as the hardware progresses further. He compared it to Apple's first iPhone (while stating that it was not as revolutionary), in that it is a great concept that has some flaws in its execution, but is ambitious, practical, and set to be much better in the future.
GPD Win 2
GamePad Digital announced the GPD Win 2 in early 2017. The Win 2 is a significant upgrade which is able to run AAA-spec games, as well as offering better video game console emulation. It has an Intel Core m3 CPU, Intel HD 615 graphics, 8GB of LPDDR3 RAM, a 128GB M.2 solid-state drive, and the same I/O ports as the GPD Win. There are a few external hardware changes, including the analog sticks moved outward, D-input dropped, and an additional shoulder button on each shoulder, for six in total.
The price for crowdfunding backers was $649, with a tentative retail price of $899. The Indiegogo campaign launched on January 15, 2018, with a final release date of May 2018. The campaign saw rapid success, far surpassing its stated goal within days.
See also
Comparison of handheld game consoles
Dragonbox Pyra
GPD XD
GPD Win Max
GPD Win 3
Pandora (console)
PC gaming
Handheld gaming
External links
GPD Win homepage
GPD Win Indiegogo page
References
Handheld personal computers
Indiegogo projects
Windows 10
Subnotebooks
10638
https://en.wikipedia.org/wiki/Free%20software%20movement
Free software movement
The free software movement is a social movement with the goal of obtaining and guaranteeing certain freedoms for software users, namely the freedoms to run the software, to study the software, to modify the software, and to share copies of the software (whether modified or not). Software which meets these requirements ("The Four Essential Freedoms of Free Software") is termed free software. Although drawing on traditions and philosophies among members of the 1970s hacker culture and academia, Richard Stallman formally founded the movement in 1983 by launching the GNU Project. Stallman later established the Free Software Foundation in 1985 to support the movement.
Philosophy
The philosophy of the movement is that the use of computers should not lead to people being prevented from cooperating with each other. In practice, this means rejecting proprietary software, which imposes such restrictions, and promoting free software, with the ultimate goal of liberating everyone in cyberspace – that is, every computer user. Stallman notes that this action will promote rather than hinder the progression of technology, since "it means that much wasteful duplication of system programming effort will be avoided. This effort can go instead into advancing the state of the art." Members of the free software movement believe that all users of software should have the freedoms listed in The Free Software Definition. Many of them hold that it is immoral to prohibit or prevent people from exercising these freedoms, that these freedoms are required to create a decent society where software users can help each other, and that they are necessary to have control over one's own computer. Some free software users and programmers do not believe that proprietary software is strictly immoral, citing an increased profitability in the business models available for proprietary software, or technical features and convenience, as their reasons. The Free Software Foundation also believes all software needs free documentation, in particular because conscientious programmers should be able to update manuals to reflect modifications that they made to the software, but it deems the freedom to modify less important for other types of written works. Within the free software movement, the FLOSS Manuals foundation specialises in the goal of providing such documentation. Members of the free software movement advocate that works which serve a practical purpose should also be free.
Actions
Writing and spreading free software
The core work of the free software movement is focused on software development. The free software movement also rejects proprietary software, refusing to install software that does not give its users the freedoms of free software. According to Stallman, "The only thing in the software field that is worse than an unauthorised copy of a proprietary program, is an authorised copy of the proprietary program because this does the same harm to its whole community of users, and in addition, usually the developer, the perpetrator of this evil, profits from it."
Building awareness
Some supporters of the free software movement take up public speaking, or host a stall at software-related conferences, to raise awareness of software freedom. This is seen as important since people who receive free software, but who are not aware that it is free software, will later accept a non-free replacement or will add software that is not free software.
Organisations
Asia
Free Software Movement of India
International Centre for Free and Open Source Software (ICFOSS)
Africa
Free Software and Open Source Foundation for Africa
North America
Free Software Foundation
Software Freedom Law Center
South America
Free Software Foundation Latin America
Software Livre Brasil
Europe
Free Software Foundation Europe
Framasoft
Irish Free Software Organisation
Australia
Free Software Australia
Legislation and government
A lot of lobbying work has been done against software patents and expansions of copyright law. Other lobbying focuses directly on the use of free software by government agencies and government-funded projects.
Asia
India
The Government of India issued its Policy on Adoption of Open Source Software for Government of India in 2015 to drive uptake within the government. With the vision of transforming India into a software product nation, the National Policy on Software Products-2019 was approved by the government.
North America
United States
In the United States, there have been efforts to pass legislation at the state level encouraging the use of free software by state government agencies.
South America
Peru
Congressmen Edgar David Villanueva and Jacques Rodrich Ackerman were instrumental in introducing free software in Peru, with bill 1609 on "Free Software in Public Administration". The incident drew the attention of Microsoft Peru, whose general manager wrote a letter to Villanueva. Villanueva's response received worldwide attention and is seen as a classic piece of argumentation favouring the use of free software in governments.
Uruguay
Uruguay has a sanctioned law requiring that the state give priority to free software. It also requires that information be exchanged in open formats.
Venezuela
The Government of Venezuela implemented a free software law in January 2006. Decree No. 3,390 mandated all government agencies to migrate to free software over a two-year period.
Europe
Publiccode.eu is a campaign demanding legislation that would require publicly financed software developed for the public sector to be made publicly available under a free and open-source software licence: if it is public money, it should be public code as well.
France
The French Gendarmerie and the French National Assembly use the open source operating system Linux.
United Kingdom
Gov.uk keeps a list of "key components, tools and services that have gone into the construction of GOV.UK".
Events
Free software events happening all around the world connect people, increase the visibility of free software projects, and foster collaboration.
Economics
The free software movement has been extensively analyzed using economic methodologies, including perspectives from heterodox economics. Of particular interest to economists is the willingness of programmers in the free software movement to work, often without direct monetary compensation. In his 1998 article "The High-Tech Gift Economy", Richard Barbrook suggested that the then-nascent free software movement represented a return to the gift economy, building on hobbyism and the absence of economic scarcity on the internet. Gabriella Coleman has emphasized the importance of accreditation, respect, and honour within the free software community as a form of compensation for contributions to projects, over and against financial motivations. The Swedish Marxian economist Johan Söderberg has argued that the free software movement represents a complete alternative to capitalism that may be expanded to create a post-work society.
He argues that the combination of a manipulation of intellectual property law and private property to make goods available to the public, and a thorough blend between labor and fun, make the free software movement a communist economy.
Subgroups and schisms
Like many social movements, the free software movement has ongoing internal conflict between the many FLOSS organizations (FSF, OSI, Debian, Mozilla Foundation, Apache Foundation, etc.) and their personalities. For instance, there is disagreement about the amount of compromise and pragmatism needed versus the need for strict adherence to values.
Open source
Although commercial free software was not uncommon at the time (see Cygnus Solutions for example), in 1998, after an announcement that Netscape would liberate their popular Web browser, a strategy session was held to develop a stronger business case for free software which would focus on technology rather than politics. After this, Eric Raymond and Bruce Perens founded the Open Source Initiative (OSI) to promote the term "open-source software" as an alternative term for free software. The OSI wanted to address the perceived shortcomings in the ambiguous term "free software". In addition, some members of the OSI did not follow the free software movement's focus on non-free software as a social and ethical problem, instead focusing on the advantages of open source as a superior model for software development. The latter became the view of people like Eric Raymond and Linus Torvalds, while Bruce Perens argues that open source was simply meant to popularize free software under a new brand, and has even called for a return to the basic ethical principles. Some free software advocates use the terms "Free and Open-Source Software" (FOSS) or "Free/Libre and Open-Source Software" (FLOSS) as a form of inclusive compromise, drawing on both philosophies to bring free software advocates and open-source software advocates together to work on projects with more cohesion. Some users believe that a compromise term encompassing both aspects is an ideal solution in order to promote both the user's freedom with the software and the pragmatic efficiency of an open-source development model. This eclectic view is reinforced by the fact that the overwhelming majority of OSI-approved licenses and self-avowed open-source programs are also compatible with the free software formalisms, and vice versa. While some people prefer to link the two ideas of "open-source software" and "free software" together, they offer two separate ideas and sets of values. This ambiguity began in 1998 when people started to use the term "open-source software" rather than "free software". People in the free software community used these separate terms as a way to differentiate what they did. Richard Stallman has called open source "a non-movement", because it "does not campaign for anything". Open source addresses software being open as a practical question as opposed to an ethical dilemma: in other words, it focuses more on the development of software than on the ethical, moral, political, and societal issues surrounding software in general. The open-source movement ultimately determines that non-free software is not the solution of best interest, but nonetheless a solution.
On the other hand, the free software movement views free software as a moral imperative: proprietary software should be rejected for selfish and social reasons, and only free software should be developed and taught to cope with the task of making computing technology beneficial to humanity. It is argued that whatever economical or technical merits free software may have, those are byproducts stemming from the rights that free software developers and users must enjoy. An example of this would be the unlikelihood of free software being designed to mistreat or spy on users. At the same time, the benefits purveyed by the open-source movement have been challenged both from inside and outside the free software movement. It is unclear whether free and open-source software actually leads to more performant and less vulnerable code, with researchers Robert Glass and Benjamin Mako Hill providing statistical insight that this is usually not the case. Regarding the meaning and misunderstandings of the word free, those who work within the free software camp have searched for less ambiguous terms and analogies, like "free beer vs free speech", in efforts to convey the intended semantics, so that there is no confusion concerning the profitability of free software. The loan adjective libre has gained some traction in the English-speaking free software movement as unequivocally conveying the state of being in freedom that free software refers to. This is not considered schismatic; libre is seen as an alternative explanatory device. In fact, free software has always been unambiguously referred to as "libre software" (in translation) in languages where the word libre or a cognate is native. In India, where free software has gained a lot of ground, the unambiguous term swatantra and its variants are widely used instead of "free". The free software movement rebuts that while "free" may be prone to confuse novices because of the duplicity of meanings, at least one of the meanings is completely accurate, and that it is hard to get it wrong once the difference has been learned. It is also ironically noted that "open source" is not exempt from poor semantics either, as a misunderstanding arises whereby people think source code disclosure is enough to meet the open-source criteria, when in fact it is not. The switch from the free software movement to the open-source movement has had negative effects on the progression of community, according to Christopher Kelty, who dedicates a scholarly chapter to the free software movement in "Theorising Media and Practice". The open-source movement denies that selectivity and the privatization of software is unethical. Although the open-source movement is working towards the same social benefits as the free software movement, Kelty claims that by disregarding this fundamental belief of the free software advocates, one is destroying the overall argument. If it can be claimed that it is ethical to limit the internet and other technology to only users who have the means to use this software, then there is no argument against the way things are at the moment; there is no need to complain if all morality is in effect. Although the movements have separate values and goals, people in both the open-source community and the free software community collaborate when it comes to practical projects.
By 2005, Richard Glass considered the differences to be a "serious fracture", but one "vitally important to those on both sides of the fracture" and "of little importance to anyone else studying the movement from a software engineering perspective", since they have had "little effect on the field".
Stallman and Torvalds
The two most prominent people associated with the movement, Richard Stallman and Linus Torvalds, may be seen as representatives of the value-based versus apolitical philosophies, as well as of the GNU versus Linux coding styles. In the GNU/Linux naming controversy, the FSF argues for the term GNU/Linux because GNU is a longstanding project to develop a free operating system, of which they assert the kernel was the last missing piece.
Criticism and controversy
Principle compromises
Eric Raymond criticises the speed at which the free software movement is progressing, suggesting that temporary compromises should be made for long-term gains. Raymond argues that this could raise awareness of the software and thus increase the free software movement's influence on relevant standards and legislation. Richard Stallman, on the other hand, sees the current level of compromise as a greater cause for worry.
Programmer income
Stallman said that this is where people get the misconception of "free": there is nothing wrong in programmers requesting payment for a proposed project, or in charging for copies of free software. Restricting and controlling the user's decisions on use is the actual violation of freedom. Stallman defends that in some cases, monetary incentive is not necessary for motivation, since the pleasure in expressing creativity is a reward in itself. Conversely, Stallman admits that it is not easy to raise money for free software projects.
"Viral" copyleft licensing
The free software movement champions copyleft licensing schemes (often pejoratively called "viral licenses"). In its strongest form, copyleft mandates that any works derived from copyleft-licensed software must also carry a copyleft license, so the license spreads from work to work the way a computer virus might spread from machine to machine. Stallman has previously stated his opposition to describing the GNU GPL as "viral". These licensing terms can only be enforced through asserting copyrights. Critics of copyleft licensing challenge the idea that restricting modifications is in line with the free software movement's emphasis on various "freedoms", especially when alternatives like the MIT, BSD, and Apache licenses are more permissive. Proponents enjoy the assurance that copylefted work cannot usually be incorporated into non-free software projects. They emphasize that copyleft licenses may not attach for all uses, and that in any case, developers can simply choose not to use copyleft-licensed software.
License proliferation and compatibility
FLOSS license proliferation is a serious concern in the FLOSS domain due to the increased complexity of license compatibility considerations, which limits and complicates source code reuse between FLOSS projects. The OSI and the FSF maintain their own lists of dozens of existing and acceptable FLOSS licenses. Most agree that the creation of new licenses should be minimized and that those created should be made compatible with the major existing FLOSS licenses. Therefore, there was a strong controversy around the update of the GNU GPLv2 to the GNU GPLv3 in 2007, as the updated license is not compatible with the previous version.
Several projects (mostly of the open-source faction, like the Linux kernel) decided not to adopt the GPLv3, while almost all of the GNU project's packages adopted it.
See also
GNU Manifesto
History of free software
Linux adoption
Open-source movement
Free-culture movement
Free Software Foundation
Open Source Initiative
Software Freedom Conservancy
Free Software Movement of India
Free Software Foundation of India
Free Software Foundation Europe
Free Software Movement Karnataka
Free Software Foundation Tamil Nadu
Swecha
Gift economy
References
Further reading
David M. Berry, Copy, Rip, Burn: The Politics of Copyleft and Open Source, Pluto Press, 2008
Johan Söderberg, Hacking Capitalism: The Free and Open Source Software Movement, Routledge, 2007
External links
What is Free Software? - Essay by Karl Fogel.
The Free Software Movement and the Future of Freedom, a 2006 lecture by Richard Stallman
Free Software Movement intro by FSF
The GNU Project Philosophy Directory, containing many defining documents of the free software movement
An interview with Stallman, "Free Software as a social movement"
Christian Imhorst, Anarchy and Source Code - What does the Free Software Movement have to do with Anarchism? (licence: GFDL), 2005
An anti-DRM campaign - by Bill Xu and Richard Stallman
Stallman's Free Software Song
Copyleft media
Free software culture and documents
40330933
https://en.wikipedia.org/wiki/Daniel%20Thalmann
Daniel Thalmann
Daniel Thalmann is a Swiss and Canadian computer scientist and a pioneer in virtual humans. He is currently Honorary Professor at EPFL, Switzerland, and Director of Research Development at MIRALab Sarl in Geneva, Switzerland.
Biography
After a master's degree in Nuclear Physics (1970) and a combined Certificate in Statistics and Computer Science (1972), both from the University of Geneva, he earned a PhD in Computer Science (1977), also from the University of Geneva. In his PhD, he worked very early on the concept of abstract machines for portable compilers and operating systems. From 1977 to 1989, he was a professor at the University of Montreal in Canada, where he started to work on computer graphics and animation. He then returned to Switzerland and founded the Virtual Reality Lab (VRlab) at EPFL. He has been a visiting professor or researcher at CERN, the University of Nebraska-Lincoln, the University of Tokyo, and the National University of Singapore. From 2009 to 2017, he was a visiting professor at the Institute for Media Innovation, Nanyang Technological University, Singapore. He is co-editor-in-chief of the Journal of Computer Animation and Virtual Worlds, and a member of the editorial boards of six other journals. Thalmann has published more than 650 papers on graphics, animation, and virtual reality. He is co-editor of 30 books and co-author of several books, including Crowd Simulation (second edition, 2012) and Stepping Into Virtual Reality (2007), published by Springer.
Research
In the 1980s, Thalmann, together with Nadia Magnenat Thalmann, became interested in the realistic computer modelling and rendering of the human form in motion. In 1988, they directed the short film Rendez-vous in Montreal, which is widely regarded as the first computer film to employ synthetic actors, in this case Humphrey Bogart and Marilyn Monroe. In the 1990s, Thalmann focused his research on behavioural animation of virtual humans, introducing the concept of synthetic vision for autonomous virtual humans and developing methods for realistic gait modelling. In the late 1990s, he launched the first project on crowd simulation of virtual humans, initiating a new field of animation that now attracts many researchers. Rendering tens of thousands of agents, collision detection, and the generation of varied individual people became important issues. He also introduced, with Marcelo Kallmann, the concept of smart objects as objects that describe their own possible interactions. He recently extended his research from virtual humans to social robots, working in the team behind the Nadine social robot. Thalmann is also recognized in the area of virtual rehabilitation, a term he coined with Professor Grigore Burdea of Rutgers University (US). With Burdea he also created the International Conference on Virtual Rehabilitation, and he is a founder of the International Society of Virtual Rehabilitation.
Awards and honors
Thalmann received an Honorary Doctorate (Honoris Causa) from Paul Sabatier University in Toulouse, France, in 2003. He also received the Eurographics Distinguished Career Award in 2010 and the 2012 Canadian Human Computer Communications Society Achievement Award. In 2015, he received the CGI Career Achievement Award from the Computer Graphics Society (CGS).
References
External links
List of publications at EPFL
List of publications at Google Scholar
Homepage and CV at EPFL
Films/Demos
Nadia Magnenat Thalmann, Daniel Thalmann, Rendez-vous a Montreal, 1987
Jonathan Maim, Barbara Maim, Daniel Thalmann, Crowd Simulation, 2006
Helena Grillon, Daniel Thalmann, Attention Crowds, 2007
EPFL VRLab demos, YouTube Channel
Computer graphics researchers
Canadian computer scientists
Computer graphics professionals
Swiss computer scientists
Living people
Place of birth missing (living people)
University of Geneva alumni
People associated with CERN
1946 births
11954666
https://en.wikipedia.org/wiki/Noakhali%20Science%20and%20Technology%20University
Noakhali Science and Technology University
Noakhali Science and Technology University (known as NSTU) is a public university in the coastal terrain of Noakhali, Bangladesh. It is the 27th public university (out of 32) and the fifth science and technology university in Bangladesh. Its foundation stone was laid on 11 October 2003 and academic activities started on 22 June 2006.
History
Former Prime Minister Begum Khaleda Zia laid the foundation stone of Noakhali Science and Technology University on 11 October 2003. Earlier, Prime Minister Sheikh Hasina had taken the necessary steps to establish the university: on 15 July 2001 her government passed a law in parliament. Its construction work was formally inaugurated on 24 March 2005, and it finally started its academic activities on 22 June 2006. It is the fifth of 12 such universities the government decided in 1997 to establish in the 12 erstwhile greater districts where there was no university.
Campus
Noakhali Science and Technology University stands at Sonapur, 8 kilometers southwest of Maijdee, on land covering 93 Salla and 95 Noakhali Mouza. The goal of the university is to become one of the top-tier universities in Bangladesh. Presently the campus consists of one 5-storied administrative building; three academic buildings (academic building 1 is 5-storied, and academic buildings 2 and 3 are 10-storied); three male student halls; two female student halls; a 5-storied auditorium and TSC building; and one 4-storied library building. The well-furnished central library, equipped with online library facilities, has 10,000 printed books and 1,500 printed journals apart from numerous e-books and e-journals. The university also has four dormitories for teachers and officers, one staff dormitory, the vice chancellor's building, a mosque, and a guest house. The construction of the NSTU medical centre is also complete. There are two sports fields and two big ponds in the campus area. The campus has a canteen for students called the NSTU cafeteria. It also has a martyrs' monument shaped like a fountain pen and a sculpture commemorating the liberation war. A park called Varsity Park, with different kinds of trees, benches, and an octagonal cafeteria, is laid out for students' leisure time.
Landscape
Residence halls
Vasha Sahid Abdus Salam Hall (ASH)
Bangabandhu Sheikh Mujibur Rahman Hall (BSMRH)
Abdul Malek Ukil Hall (MUH)
Bibi Khadija Hall (BKH)
Bangamata Sheikh Fazilatunesa Mujib Hall (BSFMH)
List of vice chancellors
Academic facilities
The university has 25 departments under six faculties and two institutes. The faculties are:
Faculty of Engineering & Technology
Faculty of Science
Faculty of Social Sciences and Humanities
Faculty of Business Administration
Faculty of Education Sciences
Faculty of Law
All of the above offer undergraduate and postgraduate degrees. The institutes are:
Institute of Information Technology
Institute of Information Sciences
The institutes offer undergraduate and diploma level degrees.
The departments under the six faculties are:
Faculty of Engineering & Technology
Department of Computer Science and Telecommunication Engineering
Department of Applied Chemistry and Chemical Engineering
Department of Information and Communication Engineering
Department of Electrical and Electronics Engineering
Faculty of Science
Department of Fisheries and Marine Science
Department of Pharmacy
Department of Microbiology
Department of Applied Mathematics
Department of Environmental Science and Disaster Management
Department of Food Technology and Nutrition Science
Department of Biotechnology and Genetic Engineering
Department of Agriculture
Department of Statistics
Department of Oceanography
Department of Biochemistry and Molecular Biology
Faculty of Social Sciences and Humanities
Department of English
Department of Economics
Department of Bangladesh and Liberation War Studies
Department of Bangla
Department of Sociology
Department of Social Work
Faculty of Business Administration
Department of Business Administration
Department of Tourism and Hospitality Management
Department of Management Information Systems
Faculty of Education Sciences
Department of Education
Department of Educational Administration
Faculty of Law
Department of Law
Institute of Information Technology (IIT)
Software Engineering (SE)
Post Graduate Diploma in Information Technology (PGDIT)
Institute of Information Sciences (IIS)
Information Sciences and Library Management
Ranking
Bangladesh ranking
In a local ranking of 136 universities (public and private) in 2016, Noakhali Science and Technology University was placed 14th in the overall and multi-disciplinary category, and second among the nine science and technology universities.
Web ranking
In 2018, NSTU was placed 6271st in the list of world universities and 621st in the list of South Asian universities in a web-based ranking.
International ranking
In 2011–2012, Noakhali Science and Technology University made it into the list of "Top World Universities" in the ranking carried out by the ranking agencies Times Higher Education and Quacquarelli Symonds, UK. Out of over 30,000 universities around the world, NSTU was placed 11,203rd.
Co-curricular activities
Coastal Environment Network
Coastal Environment Network (CoEN), also known as "কোয়েন" in Bengali, is a departmental club of the Department of Environmental Science and Disaster Management at Noakhali Science and Technology University (NSTU) concerned with various environmental issues. It was established in 2014. One year after its establishment, it was officially inaugurated by M Wahiduzzaman, Vice Chancellor of the university. The club has celebrated many environmental observances, in coordination with the Department of Environment, including World Environment Day. It also organizes various environmental workshops, Olympiads, and programs on current environmental issues. CoEN facilitates networking among environmental organizations and others who share its mandate to protect the Earth and promote ecologically sound ways of life. It works directly with concerned citizens and organizations striving to protect, preserve, and restore the environment. Its motto is "Move Towards Sustainable Development".
NSTU Model United Nations Association
NSTUMUNA regularly hosts MUN conferences focusing on diplomacy, leadership and negotiation.
Other
Some other organisations, such as NSTUDS, the Royal Economics Club, SILSWA-NSTU, Dhropad, the NSTU Adventure Club, and Cholo Paltai, are active on the campus and beyond.
References
External links
Department of Microbiology
University Grants Commission of Bangladesh
Public universities of Bangladesh
Educational institutions established in 2005
Universities of science and technology in Bangladesh
2005 establishments in Bangladesh
23824
https://en.wikipedia.org/wiki/PostgreSQL
PostgreSQL
PostgreSQL, also known as Postgres, is a free and open-source relational database management system (RDBMS) emphasizing extensibility and SQL compliance. It was originally named POSTGRES, referring to its origins as a successor to the Ingres database developed at the University of California, Berkeley. In 1996, the project was renamed to PostgreSQL to reflect its support for SQL. After a review in 2007, the development team decided to keep the name PostgreSQL and the alias Postgres. PostgreSQL features transactions with Atomicity, Consistency, Isolation, Durability (ACID) properties, automatically updatable views, materialized views, triggers, foreign keys, and stored procedures. It is designed to handle a range of workloads, from single machines to data warehouses or Web services with many concurrent users. It is the default database for macOS Server and is also available for Windows, Linux, FreeBSD, and OpenBSD.
History
PostgreSQL evolved from the Ingres project at the University of California, Berkeley. In 1982, the leader of the Ingres team, Michael Stonebraker, left Berkeley to make a proprietary version of Ingres. He returned to Berkeley in 1985, and began a post-Ingres project to address the problems with contemporary database systems that had become increasingly clear during the early 1980s. He won the Turing Award in 2014 for these and other projects, and the techniques pioneered in them. The new project, POSTGRES, aimed to add the fewest features needed to completely support data types. These features included the ability to define types and to fully describe relationships, something used widely but until then maintained entirely by the user. In POSTGRES, the database understood relationships, and could retrieve information in related tables in a natural way using rules. POSTGRES used many of the ideas of Ingres, but not its code. Starting in 1986, published papers described the basis of the system, and a prototype version was shown at the 1988 ACM SIGMOD Conference. The team released version 1 to a small number of users in June 1989, followed by version 2 with a re-written rules system in June 1990. Version 3, released in 1991, again re-wrote the rules system, and added support for multiple storage managers and an improved query engine. By 1993, the number of users began to overwhelm the project with requests for support and features. After releasing version 4.2 on June 30, 1994 (primarily a cleanup), the project ended. Berkeley released POSTGRES under an MIT License variant, which enabled other developers to use the code for any use. At the time, POSTGRES used an Ingres-influenced POSTQUEL query language interpreter, which could be used interactively with a console application named monitor. In 1994, Berkeley graduate students Andrew Yu and Jolly Chen replaced the POSTQUEL query language interpreter with one for the SQL query language, creating Postgres95. The monitor console was also replaced by psql. Yu and Chen announced the first version (0.01) to beta testers on May 5, 1995. Version 1.0 of Postgres95 was announced on September 5, 1995, with a more liberal license that enabled the software to be freely modifiable. On July 8, 1996, Marc Fournier at Hub.org Networking Services provided the first non-university development server for the open-source development effort. With the participation of Bruce Momjian and Vadim B. Mikheev, work began to stabilize the code inherited from Berkeley. In 1996, the project was renamed to PostgreSQL to reflect its support for SQL.
The online presence at the website PostgreSQL.org began on October 22, 1996. The first PostgreSQL release formed version 6.0 on January 29, 1997. Since then, developers and volunteers around the world have maintained the software as The PostgreSQL Global Development Group. The project continues to make releases available under its free and open-source software PostgreSQL License. Code comes from contributions from proprietary vendors, support companies, and open-source programmers.
Multiversion concurrency control (MVCC)
PostgreSQL manages concurrency through multiversion concurrency control (MVCC), which gives each transaction a "snapshot" of the database, allowing changes to be made without affecting other transactions. This largely eliminates the need for read locks, and ensures the database maintains ACID principles. PostgreSQL offers three levels of transaction isolation: Read Committed, Repeatable Read and Serializable. Because PostgreSQL is immune to dirty reads, requesting a Read Uncommitted transaction isolation level provides read committed instead. PostgreSQL supports full serializability via the serializable snapshot isolation (SSI) method.
Storage and replication
Replication
PostgreSQL includes built-in binary replication based on shipping the changes (write-ahead logs (WAL)) to replica nodes asynchronously, with the ability to run read-only queries against these replicated nodes. This allows splitting read traffic among multiple nodes efficiently. Earlier replication software that allowed similar read scaling normally relied on adding replication triggers to the master, increasing load. PostgreSQL also includes built-in synchronous replication that ensures that, for each write transaction, the master waits until at least one replica node has written the data to its transaction log. Unlike other database systems, the durability of a transaction (whether it is asynchronous or synchronous) can be specified per-database, per-user, per-session or even per-transaction. This can be useful for workloads that do not require such guarantees, and may not be wanted for all data, as it slows down performance due to the requirement that the confirmation of the transaction reach the synchronous standby. Standby servers can be synchronous or asynchronous. Synchronous standby servers can be specified in the configuration, which determines which servers are candidates for synchronous replication. The first in the list that is actively streaming will be used as the current synchronous server; when this fails, the system fails over to the next in line. Synchronous multi-master replication is not included in the PostgreSQL core. Postgres-XC, which is based on PostgreSQL, provides scalable synchronous multi-master replication. It is licensed under the same license as PostgreSQL. A related project is called Postgres-XL, and Postgres-R is yet another fork. Bidirectional replication (BDR) is an asynchronous multi-master replication system for PostgreSQL. Tools such as repmgr make managing replication clusters easier. Several asynchronous trigger-based replication packages are available.
These remain useful even after the introduction of the expanded core abilities, for situations where binary replication of a full database cluster is inappropriate:
Slony-I
Londiste, part of SkyTools (developed by Skype)
Bucardo multi-master replication (developed by Backcountry.com)
SymmetricDS multi-master, multi-tier replication
Indexes
PostgreSQL includes built-in support for regular B-tree and hash table indexes, and four index access methods: generalized search trees (GiST), generalized inverted indexes (GIN), Space-Partitioned GiST (SP-GiST) and Block Range Indexes (BRIN). In addition, user-defined index methods can be created, although this is quite an involved process. Indexes in PostgreSQL also support the following features:
Expression indexes can be created with an index of the result of an expression or function, instead of simply the value of a column.
Partial indexes, which only index part of a table, can be created by adding a WHERE clause to the end of the CREATE INDEX statement. This allows a smaller index to be created.
The planner is able to use multiple indexes together to satisfy complex queries, using temporary in-memory bitmap index operations (useful for data warehouse applications for joining a large fact table to smaller dimension tables such as those arranged in a star schema).
k-nearest neighbors (k-NN) indexing (also referred to as KNN-GiST) provides efficient searching of "closest values" to the one specified, useful for finding similar words, or close objects or locations with geospatial data. This is achieved without exhaustive matching of values.
Index-only scans often allow the system to fetch data from indexes without ever having to access the main table.
Block Range Indexes (BRIN).
Schemas
In PostgreSQL, a schema holds all objects, except for roles and tablespaces. Schemas effectively act like namespaces, allowing objects of the same name to co-exist in the same database. By default, newly created databases have a schema called public, but any further schemas can be added, and the public schema isn't mandatory. The search_path setting determines the order in which PostgreSQL checks schemas for unqualified objects (those without a prefixed schema). By default, it is set to "$user", public ($user refers to the currently connected database user). This default can be set on a database or role level, but as it is a session parameter, it can be freely changed (even multiple times) during a client session, affecting that session only. Non-existent schemas listed in search_path are silently skipped during object lookup. New objects are created in whichever valid schema (one that presently exists) appears first in the search_path.
Data types
A wide variety of native data types are supported, including:
Boolean
Arbitrary-precision numerics
Character (text, varchar, char)
Binary
Date/time (timestamp/time with/without time zone, date, interval)
Money
Enum
Bit strings
Text search type
Composite
HStore, an extension-enabled key-value store within PostgreSQL
Arrays (variable length, and can be of any data type, including text and composite types) up to 1 GB in total storage size
Geometric primitives
IPv4 and IPv6 addresses
Classless Inter-Domain Routing (CIDR) blocks and MAC addresses
XML supporting XPath queries
Universally unique identifier (UUID)
JavaScript Object Notation (JSON), and a faster binary JSONB (not the same as BSON)
In addition, users can create their own data types which can usually be made fully indexable via PostgreSQL's indexing infrastructures GiST, GIN, SP-GiST.
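As a hedged sketch of this extensibility (the type, table, and column names here are hypothetical examples, not taken from the PostgreSQL documentation), a composite type can be defined and then used like any built-in type:
  -- Define a user-defined composite type and use it as a column type.
  CREATE TYPE complex AS (r double precision, i double precision);
  CREATE TABLE signal (id serial PRIMARY KEY, val complex);
  -- Construct a value with ROW(...) and read fields with (column).field syntax.
  INSERT INTO signal (val) VALUES (ROW(1.5, -2.0));
  SELECT (val).r, (val).i FROM signal;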
Examples of these include the geographic information system (GIS) data types from the PostGIS project for PostgreSQL. There is also a data type called a domain, which is the same as any other data type but with optional constraints defined by the creator of that domain. This means any data entered into a column using the domain will have to conform to whichever constraints were defined as part of the domain. Data types that represent a range of data, called range types, can also be used. These can be discrete ranges (e.g. all integer values 1 to 10) or continuous ranges (e.g., any time between two given points in time). The built-in range types available include ranges of integers, big integers, decimal numbers, time stamps (with and without time zone) and dates. Custom range types can be created to make new types of ranges available, such as IP address ranges using the inet type as a base, or float ranges using the float data type as a base. Range types support inclusive and exclusive range boundaries using the [] and () characters respectively (e.g., [4,9) represents all integers starting from and including 4 up to but not including 9). Range types are also compatible with existing operators used to check for overlap, containment, right of, etc.
User-defined objects
New types of almost all objects inside the database can be created, including:
Casts
Conversions
Data types
Data domains
Functions, including aggregate functions and window functions
Indexes, including custom indexes for custom types
Operators (existing ones can be overloaded)
Procedural languages
Inheritance
Tables can be set to inherit their characteristics from a parent table. Data in child tables will appear to exist in the parent tables, unless data is selected from the parent table using the ONLY keyword, i.e. SELECT * FROM ONLY parent_table. Adding a column in the parent table will cause that column to appear in the child table. Inheritance can be used to implement table partitioning, using either triggers or rules to direct inserts to the parent table into the proper child tables. This feature is not fully supported yet; in particular, table constraints are not currently inheritable. All check constraints and not-null constraints on a parent table are automatically inherited by its children, but other types of constraints (unique, primary key, and foreign key constraints) are not inherited. Inheritance provides a way to map the features of generalization hierarchies depicted in entity relationship diagrams (ERDs) directly into the PostgreSQL database.
Other storage features
Referential integrity constraints including foreign key constraints, column constraints, and row checks
Binary and textual large-object storage
Tablespaces
Per-column collation
Online backup
Point-in-time recovery, implemented using write-ahead logging
In-place upgrades with pg_upgrade for less downtime
Control and connectivity
Foreign data wrappers
PostgreSQL can link to other systems to retrieve data via foreign data wrappers (FDWs). These can take the form of any data source, such as a file system, another relational database management system (RDBMS), or a web service. This means that regular database queries can use these data sources like regular tables, and even join multiple data sources together.
Interfaces
For connecting to applications, PostgreSQL includes the built-in interfaces libpq (the official C application interface) and ECPG (an embedded C system).
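The foreign data wrapper mechanism described above can be illustrated with a hedged sketch using the postgres_fdw extension (a standard contrib module); the server address, credentials, and table names below are assumptions for the example only:
  -- Make a table on a remote PostgreSQL server queryable as if it were local.
  CREATE EXTENSION postgres_fdw;
  CREATE SERVER remote_db FOREIGN DATA WRAPPER postgres_fdw
      OPTIONS (host 'remote.example.com', dbname 'sales');
  CREATE USER MAPPING FOR CURRENT_USER SERVER remote_db
      OPTIONS (user 'reporting', password 'secret');
  CREATE FOREIGN TABLE remote_orders (id integer, total numeric)
      SERVER remote_db OPTIONS (table_name 'orders');
  -- The foreign table can now be queried, and even joined, like a local table.
  SELECT * FROM remote_orders WHERE total > 100;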
Third-party libraries for connecting to PostgreSQL are available for many programming languages, including C++, Java, Julia, Python, Node.js, Go, and Rust.
Procedural languages
Procedural languages allow developers to extend the database with custom subroutines (functions), often called stored procedures. These functions can be used to build database triggers (functions invoked on modification of certain data) and custom data types and aggregate functions. Procedural languages can also be invoked without defining a function, using a DO command at SQL level. Languages are divided into two groups:
Procedures written in safe languages are sandboxed and can be safely created and used by any user.
Procedures written in unsafe languages can only be created by superusers, because they allow bypassing a database's security restrictions, but can also access sources external to the database.
Some languages like Perl provide both safe and unsafe versions. PostgreSQL has built-in support for three procedural languages:
Plain SQL (safe). Simpler SQL functions can get expanded inline into the calling (SQL) query, which saves function call overhead and allows the query optimizer to "see inside" the function.
Procedural Language/PostgreSQL (PL/pgSQL) (safe), which resembles Oracle's Procedural Language for SQL (PL/SQL) procedural language and SQL/Persistent Stored Modules (SQL/PSM).
C (unsafe), which allows loading one or more custom shared libraries into the database. Functions written in C offer the best performance, but bugs in code can crash and potentially corrupt the database. Most built-in functions are written in C.
In addition, PostgreSQL allows procedural languages to be loaded into the database through extensions. Three language extensions are included with PostgreSQL to support Perl, Tcl, and Python. For Python, the discontinued Python 2 is used by default (as the language plpythonu or plpython2u), even in PostgreSQL 14; Python 3 is also supported, by choosing the language plpython3u. External projects provide support for many other languages, including PL/Java, JavaScript (PL/V8), PL/Julia, PL/R, PL/Ruby, and others.
Triggers
Triggers are events fired by the action of SQL data manipulation language (DML) statements. For example, an INSERT statement might activate a trigger that checks if the values of the statement are valid. Most triggers are only activated by either INSERT or UPDATE statements. Triggers are fully supported and can be attached to tables. Triggers can be per-column and conditional, in that UPDATE triggers can target specific columns of a table, and triggers can be told to execute under a set of conditions as specified in the trigger's WHERE clause. Triggers can be attached to views by using the INSTEAD OF condition. Multiple triggers are fired in alphabetical order. In addition to calling functions written in the native PL/pgSQL, triggers can also invoke functions written in other languages like PL/Python or PL/Perl.
Asynchronous notifications
PostgreSQL provides an asynchronous messaging system that is accessed through the NOTIFY, LISTEN and UNLISTEN commands. A session can issue a NOTIFY command, along with a user-specified channel and an optional payload, to mark a particular event occurring. Other sessions are able to detect these events by issuing a LISTEN command, which can listen to a particular channel. This functionality can be used for a wide variety of purposes, such as letting other sessions know when a table has been updated, or for separate applications to detect when a particular action has been performed.
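A minimal sketch of this notification mechanism follows; the channel name and payload are arbitrary examples rather than fixed identifiers:
  -- Session A: subscribe to a channel.
  LISTEN order_events;
  -- Session B: raise an event on that channel, with an optional text payload.
  NOTIFY order_events, 'order 42 shipped';
  -- The equivalent function form is convenient inside triggers or PL/pgSQL code.
  SELECT pg_notify('order_events', 'order 42 shipped');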
Such a system prevents the need for continuous polling by applications to see if anything has changed, reducing unnecessary overhead. Notifications are fully transactional, in that messages are not sent until the transaction they were sent from is committed. This eliminates the problem of messages being sent for an action which is then rolled back. Many connectors for PostgreSQL provide support for this notification system (including libpq, JDBC, Npgsql, psycopg and node.js) so it can be used by external applications. PostgreSQL can act as an effective, persistent "pub/sub" server or job server by combining LISTEN with FOR UPDATE SKIP LOCKED.
Rules
Rules allow the "query tree" of an incoming query to be rewritten. "Query Re-Write Rules" are attached to a table/class and "re-write" the incoming DML (select, insert, update, and/or delete) into one or more queries that either replace the original DML statement or execute in addition to it. Query re-writing occurs after DML statement parsing, but before query planning.
Other querying features
Transactions
Full-text search
Views
Materialized views
Updateable views
Recursive views
Inner, outer (full, left and right), and cross joins
Sub-selects
Correlated sub-queries
Regular expressions
Common table expressions and writable common table expressions
Encrypted connections via Transport Layer Security (TLS); current versions do not use vulnerable SSL, even with that configuration option
Domains
Savepoints
Two-phase commit
The Oversized-Attribute Storage Technique (TOAST) is used to transparently store large table attributes (such as big MIME attachments or XML messages) in a separate area, with automatic compression.
Embedded SQL is implemented using a preprocessor. SQL code is first written embedded into C code. The code is then run through the ECPG preprocessor, which replaces the SQL with calls to a code library, and the result can be compiled using a C compiler. Embedding also works with C++, but the preprocessor does not recognize all C++ constructs.
Concurrency model
PostgreSQL server is process-based (not threaded), and uses one operating system process per database session. Multiple sessions are automatically spread across all available CPUs by the operating system. Many types of queries can also be parallelized across multiple background worker processes, taking advantage of multiple CPUs or cores. Client applications can use threads and create multiple database connections from each thread.
Security
PostgreSQL manages its internal security on a per-role basis. A role is generally regarded to be a user (a role that can log in), or a group (a role of which other roles are members). Permissions can be granted or revoked on any object down to the column level, and can also allow/prevent the creation of new objects at the database, schema or table levels. PostgreSQL's SECURITY LABEL feature (an extension to the SQL standards) allows for additional security, with a bundled loadable module that supports label-based mandatory access control (MAC) based on Security-Enhanced Linux (SELinux) security policy.
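As a hedged illustration of this role model (the role, user, and table names are hypothetical), privileges can be granted down to individual columns:
  CREATE ROLE analysts;                            -- a group role
  CREATE ROLE alice LOGIN PASSWORD 'changeme';     -- a login role, i.e. a user
  GRANT analysts TO alice;                         -- alice becomes a member of the group
  REVOKE ALL ON orders FROM PUBLIC;                -- withdraw default access to the table
  GRANT SELECT (id, total) ON orders TO analysts;  -- column-level privilege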
PostgreSQL natively supports a broad number of external authentication mechanisms, including:
Password: either SCRAM-SHA-256 (since PostgreSQL 10), MD5 or plain-text
Generic Security Services Application Program Interface (GSSAPI)
Security Support Provider Interface (SSPI)
Kerberos
ident (maps O/S user-name as provided by an ident server to database user-name)
Peer (maps local user name to database user name)
Lightweight Directory Access Protocol (LDAP)
Active Directory (AD)
RADIUS
Certificate
Pluggable authentication module (PAM)
The GSSAPI, SSPI, Kerberos, peer, ident and certificate methods can also use a specified "map" file that lists which users matched by that authentication system are allowed to connect as a specific database user. These methods are specified in the cluster's host-based authentication configuration file (pg_hba.conf), which determines what connections are allowed. This allows control over which user can connect to which database, where they can connect from (IP address, IP address range, domain socket), which authentication system will be enforced, and whether the connection must use Transport Layer Security (TLS).
Standards compliance
PostgreSQL claims high, but not complete, conformance with the latest SQL standard (as of version 13, "in September 2020, PostgreSQL conforms to at least 170 of the 179 mandatory features for SQL:2016 Core conformance", while no other database fully conformed to it). One exception is the handling of unquoted identifiers like table or column names. In PostgreSQL they are folded, internally, to lower case characters, whereas the standard says that unquoted identifiers should be folded to upper case. Thus, an unquoted Foo should be equivalent to FOO, not foo, according to the standard. Other shortcomings concern the absence of the MERGE command, and of temporal tables allowing automatic logging of row versions during transactions with the possibility of browsing in time (FOR SYSTEM TIME predicate).
Benchmarks and performance
Many informal performance studies of PostgreSQL have been done. Performance improvements aimed at improving scalability began heavily with version 8.1. Simple benchmarks between version 8.0 and version 8.4 showed that the latter was more than 10 times faster on read-only workloads and at least 7.5 times faster on both read and write workloads. The first industry-standard and peer-validated benchmark was completed in June 2007, using the Sun Java System Application Server (proprietary version of GlassFish) 9.0 Platform Edition, an UltraSPARC T1-based Sun Fire server, and PostgreSQL 8.2. This result of 778.14 SPECjAppServer2004 JOPS@Standard compares favourably with the 874 JOPS@Standard with Oracle 10 on an Itanium-based HP-UX system. In August 2007, Sun submitted an improved benchmark score of 813.73 SPECjAppServer2004 JOPS@Standard. With the system under test at a reduced price, the price/performance improved from $84.98/JOPS to $70.57/JOPS. The default configuration of PostgreSQL uses only a small amount of dedicated memory for performance-critical purposes such as caching database blocks and sorting. This limitation is primarily because older operating systems required kernel changes to allow allocating large blocks of shared memory. PostgreSQL.org provides advice on basic recommended performance practice in a wiki. In April 2012, Robert Haas of EnterpriseDB demonstrated PostgreSQL 9.2's linear CPU scalability using a server with 64 cores.
Matloob Khushi performed benchmarking between PostgreSQL 9.0 and MySQL 5.6.15 for their ability to process genomic data. In his performance analysis he found that PostgreSQL extracts overlapping genomic regions eight times faster than MySQL, using two datasets of 80,000 entries each forming random human DNA regions. Insertion and data uploads in PostgreSQL were also better, although the general searching ability of both databases was almost equivalent.

Platforms
PostgreSQL is available for the following operating systems:
Linux (all recent distributions)
macOS (OS X) version 10.6 and newer, with 64-bit x86 installers available and tested
Windows (with installers available and tested for 64-bit Windows Server 2019 and 2016; some older PostgreSQL versions are tested back to Windows 2008 R2, while for PostgreSQL version 10 and older a 32-bit installer is available and tested down to 32-bit Windows 2008 R1; compilable by e.g. Visual Studio, version 2013 up to the most recent 2019 version)
FreeBSD, OpenBSD, NetBSD, AIX, HP-UX, Solaris, and UnixWare
Not officially tested: DragonFly BSD, BSD/OS, IRIX, OpenIndiana, OpenSolaris, OpenServer, and Tru64 UNIX. Most other Unix-like systems could also work; most modern ones are supported.

PostgreSQL works on any of the following instruction set architectures: x86 and x86-64 on Windows XP (or later) and other operating systems. On operating systems other than Windows, the following are also supported: IA-64 Itanium (external support for HP-UX); PowerPC and PowerPC 64; S/390 and S/390x; SPARC and SPARC 64; ARMv8-A (64-bit) and older 32-bit ARM (including older variants such as ARMv6 in the Raspberry Pi); MIPS and MIPSel; and PA-RISC. It was also known to work on some other platforms, though it has not been tested on them for years (i.e., not with the latest versions).

Database administration
Open source front-ends and tools for administering PostgreSQL include:

psql
The primary front-end for PostgreSQL is the psql command-line program, which can be used to enter SQL queries directly, or execute them from a file. In addition, psql provides a number of meta-commands and various shell-like features to facilitate writing scripts and automating a wide variety of tasks; for example, tab completion of object names and SQL syntax.

pgAdmin
The pgAdmin package is a free and open-source graphical user interface (GUI) administration tool for PostgreSQL, which is supported on many computer platforms. The program is available in more than a dozen languages. The first prototype, named pgManager, was written for PostgreSQL 6.3.2 in 1998, and rewritten and released as pgAdmin under the GNU General Public License (GPL) in later months. The second incarnation (named pgAdmin II) was a complete rewrite, first released on January 16, 2002. The third version, pgAdmin III, was originally released under the Artistic License and then released under the same license as PostgreSQL. Unlike prior versions that were written in Visual Basic, pgAdmin III is written in C++, using the wxWidgets framework, allowing it to run on most common operating systems. The query tool includes a scripting language called pgScript for supporting admin and development tasks. In December 2014, Dave Page, the pgAdmin project founder and primary developer, announced that with the shift towards web-based models, work had begun on pgAdmin 4 with the aim of facilitating cloud deployments. In 2016, pgAdmin 4 was released. The pgAdmin 4 backend was written in Python, using the Flask and Qt frameworks.
phpPgAdmin
phpPgAdmin is a web-based administration tool for PostgreSQL written in PHP and based on the popular phpMyAdmin interface originally written for MySQL administration.

PostgreSQL Studio
PostgreSQL Studio allows users to perform essential PostgreSQL database development tasks from a web-based console. PostgreSQL Studio allows users to work with cloud databases without the need to open firewalls.

TeamPostgreSQL
An AJAX/JavaScript-driven web interface for PostgreSQL. It allows browsing, maintaining and creating data and database objects via a web browser. The interface offers a tabbed SQL editor with autocompletion, row-editing widgets, click-through foreign key navigation between rows and tables, and favorites management for commonly used scripts, among other features. It supports SSH for both the web interface and the database connections. Installers are available for Windows, Macintosh, and Linux, as well as a simple cross-platform archive that runs from a script.

LibreOffice, OpenOffice.org
LibreOffice and OpenOffice.org Base can be used as a front-end for PostgreSQL.

pgBadger
The pgBadger PostgreSQL log analyzer generates detailed reports from a PostgreSQL log file.

pgDevOps
pgDevOps is a suite of web tools to install and manage multiple PostgreSQL versions, extensions, and community components, develop SQL queries, monitor running databases and find performance problems.

Adminer
Adminer is a simple web-based administration tool for PostgreSQL and others, written in PHP.

pgBackRest
pgBackRest is a backup and restore tool for PostgreSQL that provides support for full, differential, and incremental backups.

pgaudit
pgaudit is a PostgreSQL extension that provides detailed session and/or object audit logging via the standard logging facility provided by PostgreSQL.

wal-e
Wal-e is a backup and restore tool for PostgreSQL that provides support for physical (WAL-based) backups, written in Python.

A number of companies offer proprietary tools for PostgreSQL. They often consist of a universal core that is adapted for various specific database products. These tools mostly share the administration features with the open source tools but offer improvements in data modeling, importing, exporting or reporting.

Notable users
Notable organizations and products that use PostgreSQL as the primary database include:
Microsoft, used for a petabyte-scale "Release Quality View" (RQV) analytics dashboard, which tracks the quality of Windows updates by analyzing 20K types of metrics from over 800M Windows devices.
In 2009, the social-networking website Myspace used Aster Data Systems's nCluster database for data warehousing, which was built on unmodified PostgreSQL.
Geni.com uses PostgreSQL for their main genealogy database.
OpenStreetMap, a collaborative project to create a free editable map of the world.
Afilias, domain registries for .org, .info and others.
Sony Online multiplayer online games.
BASF, shopping platform for their agribusiness portal.
Reddit social news website.
Skype VoIP application, central business databases.
Sun xVM, Sun's virtualization and datacenter automation suite.
MusicBrainz, open online music encyclopedia.
The International Space Station – to collect telemetry data in orbit and replicate it to the ground.
MyYearbook social-networking site.
Instagram, a mobile photo-sharing service.
Disqus, an online discussion and commenting service.
TripAdvisor, travel-information website of mostly user-generated content.
Yandex, a Russian internet company, switched its Yandex.Mail service from Oracle to Postgres.
Amazon Redshift, part of AWS, a columnar online analytical processing (OLAP) system based on ParAccel's Postgres modifications.
The National Oceanic and Atmospheric Administration's (NOAA) National Weather Service (NWS), Interactive Forecast Preparation System (IFPS), a system that integrates data from the NEXRAD weather radars, surface, and hydrology systems to build detailed localized forecast models.
The United Kingdom's national weather service, the Met Office, has begun swapping Oracle for PostgreSQL in a strategy to deploy more open source technology.
WhitePages.com had been using Oracle and MySQL, but when it came to moving its core directories in-house, it turned to PostgreSQL. Because WhitePages.com needs to combine large sets of data from multiple sources, PostgreSQL's ability to load and index data at high rates was a key to its decision to use PostgreSQL.
FlightAware, a flight tracking website.
Grofers, an online grocery delivery service.
The Guardian migrated from MongoDB to PostgreSQL in 2018.
YugabyteDB implements the PostgreSQL query layer as its default SQL mode.

Service implementations
Some notable vendors offer PostgreSQL as software as a service:
Heroku, a platform as a service provider, has supported PostgreSQL since its start in 2007. It offers value-add features like full database roll-back (the ability to restore a database from any specified time), which is based on WAL-E, open-source software developed by Heroku.
In January 2012, EnterpriseDB released a cloud version of both PostgreSQL and their own proprietary Postgres Plus Advanced Server, with automated provisioning for failover, replication, load-balancing, and scaling. It runs on Amazon Web Services. Since 2015, Postgres Advanced Server has been offered as ApsaraDB for PPAS, a relational database as a service on Alibaba Cloud.
VMware offered vFabric Postgres (also termed vPostgres) for private clouds on VMware vSphere from May 2012. The company announced End of Availability (EOA) of the product in 2014.
In November 2013, Amazon Web Services announced the addition of PostgreSQL to their Relational Database Service offering.
In November 2016, Amazon Web Services announced the addition of PostgreSQL compatibility to their cloud-native Amazon Aurora managed database offering.
In May 2017, Microsoft Azure announced Azure Database for PostgreSQL.
In May 2019, Alibaba Cloud announced PolarDB for PostgreSQL.
The Jelastic Multicloud Platform as a Service has provided container-based PostgreSQL support since 2011. It offers automated asynchronous master-slave replication of PostgreSQL, available from its marketplace.
In June 2019, IBM Cloud announced IBM Cloud Hyper Protect DBaaS for PostgreSQL.
In September 2020, Crunchy Data announced Crunchy Bridge.

Release history

See also
Comparison of relational database management systems
Database scalability
List of databases using MVCC
LLVM (llvmjit is the JIT engine used by PostgreSQL)
SQL compliance

References

Further reading

External links

Client-server database management systems
Cross-platform software
Free database management systems
Free software programmed in C
ORDBMS software for Linux
RDBMS software for Linux
2966080
https://en.wikipedia.org/wiki/CDC%203000%20series
CDC 3000 series
The CDC 3000 series ("thirty-six hundred" or "thirty-one hundred") computers from Control Data Corporation were mid-1960s follow-ons to the CDC 1604 and CDC 924 systems. Over time, a range of machines was produced, divided into the 48-bit upper 3000 series and the 24-bit lower 3000 series. Early in the 1970s CDC phased out production of the 3000 series, which had been the cash cows of Control Data during the 1960s; sales of these machines funded the company while the 6000 series was designed.

Specifications

Upper 3000 series
The upper 3000 series used a 48-bit word size. The first 3000 machine to be produced was the CDC 3600, first delivered in June 1963. First deliveries of the CDC 3400 and CDC 3800 were in December 1965. These machines were designed for scientific computing applications; they were the upgrade path for users of the CDC 1604 machines. However, these machines were overshadowed by the upcoming 60-bit CDC 6000 series machines when the CDC 6600 was introduced in December 1964 and delivered in 1965. Some high-end computer labs purchased these machines as stopgaps while waiting for delivery of their 6600 machine.

Lower 3000 series
The lower 3000 series used a 24-bit word size. They were based on the earlier CDC 924, a 24-bit version of the (48-bit) CDC 1604. The first lower 3000 to be released was the CDC 3200 (May 1964), followed by the smaller CDC 3100 (February 1965) and the CDC 3300 (December 1965). The final machine in the series, the CDC 3500, was released in March 1967 and used integrated circuits instead of discrete components. The 3300 and 3500 had optional relocation capabilities, floating point arithmetic, and BDP (Business Data Processing) instructions. These machines were targeted towards business and commercial computing.

The 3150
Control Data Corporation's CDC 3150 was described as a "batch computer," and it included a FORTRAN and a COBOL compiler.

Instruction sets
The instruction set of the upper 3000 series was composed mostly of 24-bit instructions (packed two per word), but also contained some 48-bit instructions. The lower 3000 instruction set was based on a 24-bit subset of those available on the upper 3000 systems. It was therefore possible to write programs which would run on all 3000 systems, and as these systems were based on the prior 1604 and 924 instruction sets, some backward compatibility also existed. However, the systems did diverge from each other in areas such as relocation and the BDP instructions.

Memory
All 3000 series computers used magnetic-core memory. The CDC 3500 machine used the same core memory modules as the CDC 6000 / Cyber 70 series computers.

Architecture
The lower 3000 CPU was a 24-bit architecture: instructions were 24 bits in length, as were the two operand registers A and Q. There were four index registers of 15 bits, B0 through B3, though B0 was always zero (zero when read; writes do not affect the value). There was no status (flags or condition code) register. Up to 32,768 words of core memory, 24 bits per word, could be directly addressed, and multiple banks could be switched in. Two- or three-memory-bank configurations were the most common. Each instruction contained six bits of opcode, one bit specifying whether indirect addressing was used, two bits of index register address and fifteen bits of address. Arithmetic was ones' complement, so there were two forms of zero: positive zero and negative zero. The A and Q registers could function as a combined 48-bit register for certain arithmetic instructions.
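The two forms of zero follow from the fact that ones' complement negation simply inverts every bit of the word. The Python sketch below is purely illustrative (it is not CDC code); it assumes only the 24-bit word size of the lower 3000 series described above.

WORD_BITS = 24
MASK = (1 << WORD_BITS) - 1  # a 24-bit word of all ones

def negate(x):
    """Ones' complement negation: invert every bit of the 24-bit word."""
    return (~x) & MASK

pos_zero = 0
neg_zero = negate(pos_zero)  # all ones: the "negative zero" bit pattern
assert neg_zero == MASK
# Both patterns represent the value zero even though their bits differ,
# so a ones' complement machine must treat them as equal in comparisons.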
The E register had 48 bits. The 3600 CPU could execute around one million instructions per second (1 MIPS), giving it supercomputer status in 1965. Much of the basic architecture design of the 3000 series was done by Seymour Cray, then passed on to others to complete as he moved on to designing the CDC 6000 series. Several of the innovative features that made the 6600 'the first supercomputer' can be seen in prototype form in the 3000 series.

Software
The operating systems for the upper 3000 were called SCOPE (Supervisory Control Of Program Execution). Tape SCOPE was a serial batch OS with no buffering for card reading or print spooling. Drum SCOPE upgraded performance and featured print spooling. CDC developed an OS for the 3800 called SUMMIT (Simultaneous Usage of Multiprogramming, Multiprocessing with Interactive Timesharing) to take advantage of the hardware's advanced features. Because CDC focused all its resources on advancing the 6600 system, SUMMIT was never released.

The earliest operating system for the lower 3000 series was called RTS OS. However, it was quickly replaced with MSOS (Mass Storage OS). The premier operating system for the CDC 3300 and CDC 3500 was called MASTER (Multi Access, Shared Time Executive Routine). MASTER was a multi-tasking, disk-based OS, but still batch-job oriented. Card jobs were cached to disk and printer output was spooled. MASTER optimized memory usage with page-mapping hardware.

An operating system called REAL-TIME SCOPE existed for both lower- and upper-3000 systems. A disk-based version of SCOPE was eventually made available for the upper-3000 systems. FORTRAN, COBOL, and ALGOL were available; the assembly language was called COMPASS. These were available from CDC. An APL system for the upper 3000 series computers was developed at the University of Massachusetts Amherst. This system was made available to other users of the upper 3000 series.

Character set
The operating systems for the CDC 3xxx typically used six-bit characters, so the 64-character set did not contain lowercase letters. The character set contained, in display code order: the colon (:), A-Z, 0-9, + - * / ( ) $ =, the space, and , . # [ ] % " _ ! & ' ? < > @ \ ^ ;

This six-bit extension of the four-bit BCD encoding was referred to as BCDIC (BCD interchange code). IBM later extended this code further to create the eight-bit "extended BCDIC", or EBCDIC, code. Depending on the device, some characters were rendered differently, especially on the lineprinter and some terminals. Note the absence of control characters, especially carriage return and line feed; these were encoded by the record structure.

Partial list of users

United States
Oregon State University offered a CDC 3300 for use from the mid-to-late 1960s up until about 1980. It used a home-grown operating system known as OS3 (for Oregon State Open Shop Operating System).
The U.S. Internal Revenue Service used CDC 3000 series computers for many years.
Northeastern University in Boston used a CDC 3300 in the early 1970s.
International Timesharing Corporation (ITS) of Chaska, Minnesota sold timesharing services using CDC 3300s. (ITS later acquired another timesharing company that used CDC 3600s, before it, in turn, was acquired by United Computing Systems of Kansas City, Missouri.)
California State University at Northridge had a dual 3170 that provided timesharing service to the California State University and College system. Many of the other campuses also had CDC 3150 machines for local batch operation.
In 1970 CDC 3150s were installed at most campuses of the California State College system. San Jose State and LA State got CDC 3300s and served as regional data centers, with unreliable data links to the other campuses. California State Polytechnic College (San Luis Obispo) and San Diego State University had IBM System/360s; the rest got 3150s. A typical configuration was 24K words of 24-bit core memory, four 7-track tape drives, a drum printer, a card reader, a card punch, and two 8 MB disc drives (removable packs). At Humboldt State College the machine was used for both administrative and instructional purposes, replacing an IBM 1620.
The US Air Force used numerous CDC 3800 series systems in the Air Force Satellite Control Facility located at what is now Onizuka Air Force Station in Sunnyvale, California. As part of the Air Force Satellite Control Network, they were used to do orbital planning and maintenance calculations for defense satellites until they were phased out and replaced by IBM mainframes in the mid-1990s. These systems used the JOVIAL programming language to provide the accuracy necessary for these calculations.
The Center for Naval Analyses had a CDC 3800 from about 1968 until about 1975. It was used for scientific computing in support of operations research for the U.S. Navy.
Michigan State University's Department of Computer Science and Engineering had a CDC 3600.
CDC's South West Region had the following accounts in the late 1960s and early 1970s:
Long Beach Memorial Hospital, Long Beach, California (patient location tracking and billing)
Many of the California state universities (Fullerton, Los Angeles, etc.) in their engineering departments
California State College Los Angeles, which was also the California state southern region data center (common admissions, etc.)
California State College San Jose, the Northern California regional data center, had a CDC 3300, as did the LA State southern regional data center
Jet Propulsion Lab (Pasadena, California) – test of various outer space modules
University of Texas, El Paso
Vandenberg AFB (missile launch control; also two CDC 3600s – main base – use unknown)
Thompson-Ramo-Wooldridge – Redondo Beach, California
Cal Poly, Pomona, California (engineering school)
Cal Poly Northridge, California (engineering school)
Point Mugu Naval Air Station (weather balloon tracking/data analysis)
El Centro Naval Air Station
Camp Pendleton, California (US Marine Corps) – development/test bed for the Marine Tactical Aircraft Command and Control System (MTACCS); initially a 3300, upgraded to a 3500 about 1973–74
Beverly Hills Data Center – part of the CDC Cybernet network and, a rarity for CDC, also did complete bank computing for about 12 small local banks. A midpoint of the CyberNet system, it interfaced to the Los Angeles Data Center (CDC 6600). The banking accounts were acquired from the Corporation for Economic, Industrial and Operations Research (C.E.I.R.), whose IBM system (1401?) was replaced with the 3300. This system had 8 or 10 CDC 854 disk drives and connected to a Burroughs model B-3000 magnetic ink character reader (MICR).

Europe
CERN used a CDC 3800 from 1966 until January 1969, when it was replaced by a CDC 6600. The 3800 was then acquired by the State of Geneva and installed at the local University of Geneva.
A number of CDC 3000s were installed in France, either in academic computing centers, in software companies or in the aeronautics, nuclear and oil industries.
The University of Oslo had a CDC 3300 (1967–1976).
One CDC 3300 was installed in the late 1960s in England, in Computation Research & Development Ltd, a London-based subsidiary of the civil engineering designers Freeman Fox & Partners. It was used for engineering calculations and commercial computing. It was de-commissioned in 1974.
The German Meteorological Service, Deutscher Wetterdienst, used a CDC 3800 and CDC 3400 for numerical weather prediction in the late 1960s.

Oceania
There were many CDC 3000 series machines in Australia. CSIRO had a 3600 installed in Canberra in 1964, with satellite 3200 machines in Melbourne, Sydney, and probably Brisbane and Adelaide. The Bureau of Census and Statistics had a 3600, 3300, and a 3500 in Canberra, with a 3200 in each of the six state capitals, several of which were upgraded to 3300s about 1970. Monash University had a 3200 delivered in 1964. The Defence Signals Directorate had at least one. BHP had two 3300 machines at each of its Newcastle and Port Kembla data centres. They were replaced by 3500s in 1977, and remained in operation until the mid-1980s.

Communist bloc
Communist-ruled Hungary obtained a CDC 3300 machine between 1969 and 1971. It was administered by the National Academy of Sciences throughout the 1970s, mainly for running scientific computations. Other CDC 3300 systems installed in former Eastern Bloc countries (list not complete):
Computer research centre, Bratislava, with the contribution of the OSN (United Nations) Development Programme, CRC/UNDP
Central Statistical Office, Prague
Romanian Aircraft, Bucharest (3500 system)

Standard peripherals
405 – Card reader, 80-column 'high-speed' punched card reader
415 – Card punch, 80-column card punch
501 – Line printer, rotating drum, 136-character-wide printer (note that there were no lower-case letters)
505 – Line printer
512 – Line printer, chain type
601 – Magnetic tape drive
604 – Magnetic tape drive
607 – Magnetic tape drive
609 – Magnetic tape drive (9-track)
700 series – Optical page reader
800 – Rotating magnetic memory series
808 – Rotating disks, approx. 32 inches per disk, approx. 32–36 disks per spindle
852 – Disk drive with removable media pack, 2 to 3 million 7-bit characters, six disks (10 recording surfaces) per pack; data-compatible with the IBM 1311
853 – Similar to the 852, 4 million characters; media mechanically interchangeable with the IBM 1311, but not the recorded data
854 – Similar to the 853, 8 million characters
915 – Optical page reader
The CDC 6000 series were also initially marketed with many of the same peripherals.

See also
Punched card input/output

References

External links
"Bizarre Architecture" lecture series at Univ. of Mass. (pdf)
3000 series documentation at bitsavers.org
OS3 documentation at bitsavers.org
3000 series brochure at computerhistory.org
CDC printers/peripherals for sale (Ad, 1975: CDC 501, 505, 512, 3000)

Control Data mainframe computers
Transistorized computers
24-bit computers
48-bit computers
16413465
https://en.wikipedia.org/wiki/Corey%20Schou
Corey Schou
Corey Schou is University Professor of Informatics and Associate Dean at Idaho State University, director of the National Information Assurance Training and Education Center (NIATEC) and the Simplot Decision Support Center (SDSC), and for ten years the chair of the Colloquium for Information Systems Security Education (CISSE).

In the early 1980s, organizations began to recognize that connected PCs in various locations were much more vulnerable than a mainframe locked away in a single building. These organizations began seeking qualified individuals responsible for selecting, recommending and implementing security policy and procedures. However, few schools were offering information security curricula, much less academic degrees, and organizations would have to take an IT professional at his or her word that they knew how to manage information security for the entire enterprise.

By 1989 Schou and others had established a unified common body of knowledge for computer security. Schou, with Idaho State University, hosted the finalization meetings in Salt Lake City. His work was later recognized by the organization with various awards in San Francisco, including the Founder's award.

The need for a professional certification to maintain and validate common knowledge, values, and ethics for individuals in the industry became a growing concern. Several IT professional societies recognized that a certification program attesting to the qualifications of information security personnel was needed. Schou's work is recognized by several organizations, such as (ISC)2, as foundational to the information assurance discipline in academia. His work over three decades has resulted in standards used internationally by government, industry and academia.

Teaching
Schou is a teacher and mentor whose style is described by his students and colleagues as Socratic. At all levels he encourages students to excel. Although he has had a full service and research agenda, university records show that he has taught at least one class every semester for the past 30 years. He currently heads one of the Scholarship for Service Cyber Corps programs that prepares individuals to be information assurance professionals. In this program all students take a full MBA program. In addition, they are exposed to both courses and practicum experiences. Upon completion of the program the graduates have completed all the requirements for certification by the Committee on National Security Systems. The program is one of only three in the nation certified at all levels for all CAE certifications. In addition, graduates are expected to sit for the Systems Security Certified Practitioner (SSCP) and CISSP examinations from (ISC)2. Currently the program has a 100% pass rate on the first try, as documented in the university's annual report to the National Science Foundation (NSF). In 1993 he was the first non-government employee to be recognized as Educator of the Year by the Federal Information Systems Security Educators Association (FISSEA).

Writing
He is the author of several books on information assurance, including Information Assurance for the Enterprise: A Roadmap to Information Security (McGraw Hill), and over 300 refereed papers and monographs.

Recent Research

Books
Schou, C., Lohse, E. (2009). The Crabtree Files, The assembled works and papers of Crabtree and Swanson. Idaho: Idaho Museum of Natural History, IRI.
Schou, Corey D.; Shoemaker, Daniel. Information Assurance for the Enterprise: A Roadmap to Information Security, McGraw Hill, January 2007.
Schou, Corey D.; Kuhel, D. "Information Operations Education: Lessons Learned from Information Assurance", in Information Warfare: Separating Hype from Reality, edited by Edwin L. Armistead, Potomac Books, Washington DC, 2007. Book chapter.
Lohse, S.; Schou, C. (2007). "The Columbia Plateau-Snake River Region Cultural Sequence", in Projectile Point Sequences in Northwestern North America, edited by Roy Carlson and Marty Magne. Canadian Archaeological Association. Book chapter.

Refereed Journal Articles
Schou, C.; Armistead, E. L.; Ryan, J. (2009/10). "International Academic Standards: A New Approach to Information Operations for Interoperability". Journal of Information Warfare.
Schou, C., et al. (2009/2010). "Developing information assurance standards". ACM SIGCSE Bulletin, 41(4) (December 2009; published 2010).
Trimmer, K.; Parker, K. R.; Schou, C. (2009). "Functional Requirements for Secure Code: The Reference Monitor and Use Case". Academy of Information and Management Sciences Journal, 12(2), 113–119.
Trimmer, K.; Parker, K. R.; Schou, C. (2007). "Forcing Early Implementation of Information Assurance Precepts Throughout the Design Phase". Journal of Informatics Education Research, 9(1), 95–120.
Frost, James; Schou, Corey. "The Missing Components of the Security Audit – A Case Study", in The Challenge of Managing System Integrity, 2006.

References

External links
University profile
Cyber Corps
(ISC)2
FISSEA
Catch a clue from an EDU: Universities that get security right

Idaho State University faculty
Living people
Information science
Computer security academics
Year of birth missing (living people)
1256577
https://en.wikipedia.org/wiki/Tellabs
Tellabs
Tellabs, Inc. is a global network technology provider serving both private and governmental agencies.

History
Tellabs traces its roots to a meeting in 1974 over a kitchen table in suburban Chicago. According to company founder Michael Birck, a group of six men with backgrounds in electrical engineering and sales drank coffee and brainstormed ideas for a new telecom company. They aimed to build a company that offered customers products and services that met their specific needs. After raising $110,000 in capital, they incorporated as Tellabs in the spring of 1975; the name combined the idea of telephones and laboratories. The start-up had only a one-man research department, a second-hand soldering iron picked up for $25, and an outdated oscilloscope. In a matter of months, Tellabs began making echo suppressors, which suppress annoying echoes on phone calls. During this time, the founding partners drew no salaries.

The company went public in July 1980, ending the year with sales of $43.7 million. In September 1981, Tellabs introduced the industry's first echo canceller, an advance over the original echo suppressors that synthesized an echo and electronically subtracted it. By 1990, Tellabs had grown to 2,000 employees at 25 locations globally and sales of $211 million. Tellabs made several acquisitions and expanded globally in the 1980s and into the 1990s, including Coherent Communications Systems Corp. and Martis Oy in Finland. In 1991, the company took a new direction, releasing its SONET-based TITAN 5500 digital cross-connect system. These systems switch traffic from one circuit to another, connecting traffic inside and between networks.

Richard Notebaert, who had led Ameritech, the Midwestern AT&T spin-off, until it was acquired by SBC in 1999, took over Tellabs as CEO in September 2000. Pundits labeled Notebaert the "$6 billion man." However, the telecom industry then collapsed; as the Chicago Sun-Times reported: "Telecom went from boom to bust as venture capital dried up and customers cancelled orders for the sort of equipment made by Tellabs and its competitors, including Nortel Networks and Lucent Technologies."

In 2003, following industry trends and after 28 years as a manufacturer, Tellabs sold its last plant in Illinois and outsourced its manufacturing. The company continued downsizing. Krish Prabhu, former chief operating officer of Alcatel, took over as CEO in February 2004. Prabhu saw opportunities as Internet use grew and demanded faster connections, as well as video and better VoIP (Voice over Internet Protocol) calling.

Tellabs acquired two companies in 2004. It purchased Advanced Fibre Communications (AFC), a provider of broadband access solutions with a customer base of more than 800 service providers worldwide at the time of the merger, in a deal worth $1.9 billion, and Marconi Communications North American Access, which sold fiber-access services to regional Bell operating companies and local exchange carriers. At the time of the merger, Marconi had a customer base that included many of the world's largest telecommunications operators, managing more than 4 million lines of capacity and about 1 million deployed lines. By 2007, nearly half of Tellabs' revenue came from products added since 2003.

Prabhu also presided over more cutbacks as the telecom industry continued to struggle. In January 2008, Tellabs announced that it was cutting 225 jobs during the year.
This would leave Tellabs with about 3,500 jobs, down from a peak of 9,000 during the boom in 2001. Prabhu stepped down in March 2008 for personal reasons; Birck praised him at his departure. Tellabs internally promoted Robert W. Pullen, who had 23 years of varied experience at Tellabs, to succeed Prabhu as chief executive and president effective March 1, 2008. He was chairman of the executive board of the Telecommunications Industry Association.

In 2009 Tellabs acquired WiChorus, a San Jose-based Silicon Valley start-up with a mobile packet core platform, a decision that led to the decline of their data business. After Pullen was hospitalized in June 2012 due to cancer, Dan Kelly was appointed acting CEO and president; Kelly later assumed the full offices in November 2012, following Pullen's death. Kelly had previously served as executive vice president of global products and had worked with the company for over 25 years.

In 2013, Tellabs was acquired by Marlin Equity Partners, a global investment company. Marlin announced plans to establish an independent business for the Tellabs Access product portfolio, aiming to accelerate Tellabs' development of Optical Local Area Network (OLAN) technology. Marlin announced it would target OLAN to enterprise and government agencies, while continuing to support and expand its telecommunications portfolio. The Optical Transport, Metro Ethernet and Mobility business units of Tellabs were spun off into Coriant, a separate company owned by Marlin Equity. Mike Dagenais, an industry veteran who had previously served as CEO of Radisys Corporation and as president and CEO of Continuous Computing, was tapped to head Tellabs as president and CEO.

The focus of Tellabs is now OLAN technology. OLAN uses fiber, which is faster, more secure, and more stable in comparison to traditional copper infrastructure. For service providers, Tellabs Access Platforms provide broadband access to more than 5 million homes in North America.

On October 11, 2017, it was announced that Jim Norrod had been named president and CEO of Tellabs, following the retirement of Mike Dagenais. In December 2019, Tellabs announced that Rich Schroder had been appointed president and chief executive officer. Schroder has held numerous positions within Tellabs and most recently served as chief operating officer. Schroder is a telecom industry veteran with past experience running global multi-site organizations in Asia, Europe and across the U.S. on behalf of AFC, Marconi, RELTEC, DSC and Siemens.

Corporate
Tellabs' global corporate headquarters is located northwest of Dallas, Texas, in the city of Carrollton.

Hardware

Optical Line Terminals
The Optical Line Terminal (OLT) provides centralized intelligence, management and control of the end-to-end LAN. It also provides aggregation and distribution functions and typically resides in a building's main data center. The goal of the Tellabs 1100 Series OLT equipment is to optimize capital expenditures, energy consumption and space requirements wherever possible. To accomplish that end goal, Tellabs provides a variety of distribution shelves that best match the total number of IP/Ethernet endpoints desired, match power utilization and minimize the physical space required within the main data center.

Optical Network Terminals
Tellabs designs Optical Network Terminals (ONTs), intended for use in high-volume passive optical network (PON) commercial deployments.
Tellabs Optical Network Terminals provide narrowband and broadband subscriber services over a PON platform to the IP/Ethernet end-points; all services are supported natively over a single fiber, including analog voice, VoIP, high-speed data, IP video, RF video, smart-building apps, security, surveillance, environmental monitoring and automation for modern high-performance LANs.

Software

Element Management
Tellabs Panorama PON Manager is the cornerstone of an Optical LAN end-to-end system. It provides centralized intelligence and element management across the entire LAN, from OLT to ONT, and extends to subtended powered devices. Tellabs describes its PON Manager as helping define LAN resources in software and then dynamically allocate them, based on real-time requirements.

Advanced Software Packages
Tellabs Optical LAN solution is designed to improve LAN availability and uptime, and to increase operational efficiencies and network security. Tellabs offers multiple packages:
The Advanced Availability Software Package builds LANs that minimize annual network downtime while lowering costs, reducing human error, and improving security.
The Advanced Operational Software Package improves IT efficiencies by increasing the speed of LAN configuration, monitoring, troubleshooting and MACs, while once again reducing human error and network security risks.
The Advanced Security Software Package enhances physical LAN defensibility, enabling consistent protection policies that are centrally managed while continuing to reduce human error and increase network stability.

Services
Tellabs offers three services: Professional Network Services, which it describes as providing support at all stages of the network life cycle; Technical Support Services, which offers a variety of technical support agreement options to meet the specific needs of an organization's network; and Network Training Services, which offers a variety of network training programs to keep staff up to date in supporting all phases of the network's life cycle.

References

Companies based in Dallas
Telecommunications companies of the United States
Telecommunications companies established in 1974
1974 establishments in Illinois
2013 mergers and acquisitions
Privately held companies based in Texas
Private equity portfolio companies
1980s initial public offerings
2225116
https://en.wikipedia.org/wiki/Packet%20trade
Packet trade
Generally, packet trade is any regularly scheduled cargo, passenger and mail trade conducted by ship. The ships are called "packet boats", as their original function was to carry mail. A "packet ship" was originally a vessel employed to carry post office mail packets to and from British embassies, colonies and outposts.

In sea transport, a packet service is a regular, scheduled service, carrying freight and passengers. The ships used for this service are called packet ships or packet boats. The seamen are called packetmen, and the business is called packet trade. "Packet" can mean a small parcel but originally meant a parcel of important correspondence or valuable items, for urgent delivery.

The French-language term "paquebot" derives from the English term "packet boat", but means a large ocean liner. This sense became extended to mean any regularly scheduled ship carrying passengers, as in packet trade. The word "packet" is frequently modified by the destination, e.g. Sydney packet, or by motive force, e.g. "steam packet".

History
Many states, civilisations and organisations set up mail systems for high-value goods, especially confidential correspondence and bullion. In times of war, regular shipments ran the gauntlet of warships and privateers, and even in peacetime, pirates could be a threat on some routes. In 1829, the pirate Mansel Alcantra captured the packet Topaz and murdered her crew after looting her.

In Britain, the Post Office Packet Service used small, fast, lightly armed ships to carry state papers to overseas destinations on a regular schedule. This service operated from Tudor times until 1823, when the Admiralty assumed the responsibility for running the service. During the French Revolutionary and Napoleonic Wars the packet ships were targets for privateers and warships seeking prize money. Although some were captured, others managed to fight back. In the 1850s the Post Office moved to using contract carriers.

Other European states with overseas colonies also developed packet mail systems. Eventually, however, commercial steam liners began to work regular international schedules and received contracts from governments to carry mail as well as passengers and high-value cargo. Their services retained the name "packet".

Companies
Packet shipping companies included:
Baltimore Steam Packet Company
Black Ball Line (trans-Atlantic packet)
City of Dublin Steam Packet Company
Cunard Line
Isle of Man Steam Packet Company
Liverpool Packet
Swallowtail Line (trans-Atlantic packet)
Woolwich Steam Packet Company
Robert Kermit's Red Star Line

Black Ball line
In 1818, ships of the Black Ball line began regularly scheduled trips between Britain and the United States. These "packet ships" (named for their delivery of mail "packets") were infamous for keeping to their disciplined schedules. This often involved harsh treatment of seamen and earned the ships the nickname "bloodboat".
The original Black Ball Line was founded by a group of New York Quakers, but later a rival service founded by James Baines of Liverpool also styled itself the Black Ball Line, despite the protests of the original company of that name.

By country

United States
In the United States, "packet trade" is often used to refer to the Atlantic (or Western) Ocean packets which traded with Europe and Africa (notably Cape Verde). Packet boats, smaller vessels designed for domestic use, were also extensively used in the 19th century for internal mail and scheduled service using rivers and canals, such as along the Erie Canal, which cut travel time across New York state in half; the Pennsylvania Canal; the James River and Kanawha Canal; and navigable rivers. During the 18th century, ships carrying cargo, passengers and mail between Europe and America would sail only when they were full. Starting in the early 19th century, as trade with America became more common, schedule regularity became a priority.

Australia
The first seagoing ship built in Van Diemen's Land (in 1812) was named the Henrietta Packet by virtue of the fact that she offered a regular passenger service between Hobart, Tasmania and Sydney, New South Wales. From the 1830s the term "steam packet" was commonly applied to early steam ship services that, at least in theory, offered a regular and reliable service, and it is perpetuated today by many waterfront establishments around Australia bearing such names as the "Steam Packet Inn" or "Steam Packet Hotel". Both fast sailing ships and early steam ships holding mail contracts between Great Britain and Australia were also often referred to as packets. These included several ships of James Baines' Black Ball Line and the Orient Line.

Netherlands
In the late 15th to mid 16th century the Dutch system of the beurtvaart developed, a related system for mostly inland navigation. Ships of the beurtvaart carried passengers, livestock and freight along fixed routes at fixed prices with scheduled departures. Organised by the cities, it grew into an extensive and reliable network over the following century. Some of the cities arranged for international (seagoing) connections as well. In the second half of the 19th century the authorities withdrew from intervening, and many steamship companies sprang up offering much the same services. Some carried freight and livestock only; some were mainly for passenger transport, becoming more like public transport. They would call their service beurtvaart or occasionally use the word 'packet', as did Alkmaar Packet.

Portugal
The Azores and Madeira

Cape Verde
Because of the influence of whaling and several local droughts, there was substantial migration from Cape Verde to America, most notably to New Bedford, Massachusetts. This migration built strong ties between the two locations. A strong packet trade between New England and Cape Verde developed during the early-to-mid-19th century.

Modern day
In the 21st century, ePacket delivery became available through the US Postal Service as a deal with China Post, Hongkong Post, and the Korean postal service in order to support internet commerce between East Asia and the United States. Vendors in those countries can sell goods directly to American customers while enjoying delivery rates that are often less even than domestic US vendors would pay to deliver the same parcel.
These low rates, and the current lack of protective tariffs on imported goods from foreign countries, make the service controversial because it apparently gives foreign vendors, especially in mainland China and Hong Kong, a competitive advantage in the still-growing online market. The USPS has complained of an inability to effectively negotiate prices with the Chinese postal services, and was losing millions of dollars on the service each year.

See also
Post Office Packet Service
Isle of Man Steam Packet
Packet boat

Further reading
Dave Hollett, Fast Passage to Australia: the History of the Black Ball, Eagle and White Star Lines, London, Fairplay, 1986.

References

Packet (sea transport)
Packet (river transport)
Maritime history
Economic history of the United States
Economic history of Australia
Economic history of the United Kingdom
Cape Verdean American history
3255539
https://en.wikipedia.org/wiki/Rosetta%40home
Rosetta@home
Rosetta@home is a distributed computing project for protein structure prediction on the Berkeley Open Infrastructure for Network Computing (BOINC) platform, run by the Baker laboratory at the University of Washington. Rosetta@home aims to predict protein–protein docking and design new proteins with the help of about fifty-five thousand active volunteered computers processing at over 487,946 GigaFLOPS on average as of September 19, 2020. Foldit, a Rosetta@home videogame, aims to reach these goals with a crowdsourcing approach. Though much of the project is oriented toward basic research to improve the accuracy and robustness of proteomics methods, Rosetta@home also does applied research on malaria, Alzheimer's disease, and other pathologies. Like all BOINC projects, Rosetta@home uses idle computer processing resources from volunteers' computers to perform calculations on individual workunits. Completed results are sent to a central project server where they are validated and assimilated into project databases. The project is cross-platform, and runs on a wide variety of hardware configurations. Users can view the progress of their individual protein structure prediction on the Rosetta@home screensaver. In addition to disease-related research, the Rosetta@home network serves as a testing framework for new methods in structural bioinformatics. Such methods are then used in other Rosetta-based applications, like RosettaDock or the Human Proteome Folding Project and the Microbiome Immunity Project, after being sufficiently developed and proven stable on Rosetta@home's large and diverse set of volunteer computers. Two especially important tests for the new methods developed in Rosetta@home are the Critical Assessment of Techniques for Protein Structure Prediction (CASP) and Critical Assessment of Prediction of Interactions (CAPRI) experiments, biennial experiments which evaluate the state of the art in protein structure prediction and protein–protein docking prediction, respectively. Rosetta@home consistently ranks among the foremost docking predictors, and is one of the best tertiary structure predictors available. With an influx of new users looking to participate in the fight against the COVID-19 pandemic, caused by SARS-CoV-2, Rosetta@home has increased its computing power up to 1.7 PetaFlops as of March 28, 2020. On September 9, 2020, Rosetta@home researchers published a paper describing 10 potent antiviral candidates against SARS-CoV-2. Rosetta@home contributed to this research and these antiviral candidates are heading towards Phase 1 clinical trials, which may begin in early 2022. According to the Rosetta@home team, Rosetta volunteers contributed to the development of a nanoparticle vaccine. This vaccine has been licensed and is known as the IVX-411 by Icosavax, which began a Phase I/II clinical trial in June 2021, and GBP510 which is being developed by SK Bioscience and is already approved for a Phase III clinical trial in South Korea. NL-201, a cancer drug candidate that was first created at the Institute of Protein Design (IPD) and published in a January 2019 paper, began a Phase 1 Human clinical trial in May 2021 with the support of Neoleukin Therapeutics, itself a spin-off from the IPD. Rosetta@home played a role in the development of NL-201 and contributed with "forward folding" experiments that helped validate protein designs. 
Computing platform
The Rosetta@home application and the BOINC distributed computing platform are available for the operating systems Windows, Linux, and macOS; BOINC also runs on several others, e.g., FreeBSD. Participation in Rosetta@home requires a central processing unit (CPU) with a clock speed of at least 500 MHz, 200 megabytes of free disk space, 512 megabytes of physical memory, and Internet connectivity. As of July 20, 2016, the current version of the Rosetta Mini application is 3.73. The current recommended BOINC program version is 7.6.22. Standard Hypertext Transfer Protocol (HTTP) (port 80) is used for communication between the user's BOINC client and the Rosetta@home servers at the University of Washington; HTTPS (port 443) is used during password exchange. Remote and local control of the BOINC client use port 31416 and port 1043, which might need to be specifically unblocked if they are behind a firewall.

Workunits containing data on individual proteins are distributed from servers located in the Baker lab at the University of Washington to volunteers' computers, which then calculate a structure prediction for the assigned protein. To avoid duplicate structure predictions on a given protein, each workunit is initialized with a random seed number. This gives each prediction a unique trajectory of descent along the protein's energy landscape. Protein structure predictions from Rosetta@home are approximations of a global minimum in a given protein's energy landscape. That global minimum represents the most energetically favorable conformation of the protein, i.e., its native state.

A primary feature of the Rosetta@home graphical user interface (GUI) is a screensaver which shows a current workunit's progress during the simulated protein folding process. In the upper-left of the current screensaver, the target protein is shown adopting different shapes (conformations) in its search for the lowest energy structure. Depicted immediately to the right is the structure of the most recently accepted decoy. On the upper right the lowest energy conformation of the current decoy is shown; below that is the true, or native, structure of the protein, if it has already been determined.

Three graphs are included in the screensaver. Near the middle, a graph of the accepted model's thermodynamic free energy is displayed, which fluctuates as the accepted model changes. A graph of the accepted model's root-mean-square deviation (RMSD), which measures how structurally similar the accepted model is to the native model, is shown far right. On the right of the accepted energy graph and below the RMSD graph, the results from these two functions are used to produce an energy vs. RMSD plot as the model is progressively refined.

Like all BOINC projects, Rosetta@home runs in the background of the user's computer, using idle computer power, either at or before logging into an account on the host operating system. The program frees resources from the CPU as they are needed by other applications, so that normal computer use is unaffected. Many program settings can be specified via user account preferences, including: the maximum percentage of CPU resources the program can use (to control power consumption or heat production from a computer running at sustained capacity), the times of day during which the program can run, and many more.

Rosetta, the software that runs on the Rosetta@home network, was rewritten in C++ to allow easier development than that allowed by its original version, which was written in Fortran.
This new version is object-oriented, and was released on February 8, 2008. Development of the Rosetta code is done by Rosetta Commons. The software is freely licensed to the academic community and available to pharmaceutical companies for a fee.

Project significance
With the proliferation of genome sequencing projects, scientists can infer the amino acid sequence, or primary structure, of many proteins that carry out functions within the cell. To better understand a protein's function and aid in rational drug design, scientists need to know the protein's three-dimensional tertiary structure.

Protein 3D structures are currently determined experimentally via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The process is slow (it can take weeks or even months to figure out how to crystallize a protein for the first time) and costly (around US$100,000 per protein). Unfortunately, the rate at which new sequences are discovered far exceeds the rate of structure determination – out of more than 7,400,000 protein sequences available in the National Center for Biotechnology Information (NCBI) non-redundant (nr) protein database, fewer than 52,000 proteins' 3D structures have been solved and deposited in the Protein Data Bank, the main repository for structural information on proteins. One of the main goals of Rosetta@home is to predict protein structures with the same accuracy as existing methods, but in a way that requires significantly less time and money. Rosetta@home also develops methods to determine the structure and docking of membrane proteins (e.g., G protein–coupled receptors (GPCRs)), which are exceptionally difficult to analyze with traditional techniques like X-ray crystallography and NMR spectroscopy, yet represent the majority of targets for modern drugs.

Progress in protein structure prediction is evaluated in the biennial Critical Assessment of Techniques for Protein Structure Prediction (CASP) experiment, in which researchers from around the world attempt to derive a protein's structure from the protein's amino acid sequence. High-scoring groups in this sometimes competitive experiment are considered the de facto standard-bearers for the state of the art in protein structure prediction. Rosetta, the program on which Rosetta@home is based, has been used since CASP5 in 2002. In the 2004 CASP6 experiment, Rosetta made history by being the first to produce a close-to-atomic-level-resolution ab initio protein structure prediction in its submitted model for CASP target T0281. Ab initio modeling is considered an especially difficult category of protein structure prediction, as it does not use information from structural homology and must rely on information from sequence homology and modeling physical interactions within the protein. Rosetta@home has been used in CASP since 2006, where it was among the top predictors in every category of structure prediction in CASP7. These high-quality predictions were enabled by the computing power made available by Rosetta@home volunteers. Increasing computing power allows Rosetta@home to sample more regions of conformation space (the possible shapes a protein can assume), which, according to Levinthal's paradox, is predicted to increase exponentially with protein length.

Rosetta@home is also used in protein–protein docking prediction, which determines the structure of multiple complexed proteins, or quaternary structure.
This type of protein interaction affects many cellular functions, including antigen–antibody and enzyme–inhibitor binding and cellular import and export. Determining these interactions is critical for drug design. Rosetta is used in the Critical Assessment of Prediction of Interactions (CAPRI) experiment, which evaluates the state of the protein docking field similarly to how CASP gauges progress in protein structure prediction. The computing power made available by Rosetta@home's project volunteers has been cited as a major factor in Rosetta's performance in CAPRI, where its docking predictions have been among the most accurate and complete.

In early 2008, Rosetta was used to computationally design a protein with a function never before observed in nature. This was inspired in part by the retraction of a high-profile paper from 2004 which originally described the computational design of a protein with improved enzymatic activity relative to its natural form. The 2008 research paper from David Baker's group describing how the protein was made, which cited Rosetta@home for the computing resources it made available, represented an important proof of concept for this protein design method. This type of protein design could have future applications in drug discovery, green chemistry, and bioremediation.

Disease-related research
In addition to basic research in predicting protein structure, docking and design, Rosetta@home is also used in immediate disease-related research. Numerous minor research projects are described in David Baker's Rosetta@home journal. As of February 2014, information on recent publications and a short description of the work was being updated on the forum. The forum thread has not been used since 2016, and news on the research can be found in the general news section of the project.

Alzheimer's disease
A component of the Rosetta software suite, RosettaDesign, was used to accurately predict which regions of amyloidogenic proteins were most likely to make amyloid-like fibrils. By taking hexapeptides (six amino-acid-long fragments) of a protein of interest and selecting the lowest energy match to a structure similar to that of a known fibril-forming hexapeptide, RosettaDesign was able to identify peptides twice as likely to form fibrils as are random proteins. Rosetta@home was used in the same study to predict structures for amyloid beta, a fibril-forming protein that has been postulated to cause Alzheimer's disease. Preliminary but as yet unpublished results have been produced on Rosetta-designed proteins that may prevent fibrils from forming, although it is unknown whether this can prevent the disease.

Anthrax
Another component of Rosetta, RosettaDock, was used in conjunction with experimental methods to model interactions between three proteins—lethal factor (LF), edema factor (EF) and protective antigen (PA)—that make up anthrax toxin. The computer model accurately predicted docking between LF and PA, helping to establish which domains of the respective proteins are involved in the LF–PA complex. This insight was eventually used in research resulting in improved anthrax vaccines.

Herpes simplex virus 1
RosettaDock was used to model docking between an antibody (immunoglobulin G) and a surface protein expressed by the cold sore virus, herpes simplex virus 1 (HSV-1), which serves to degrade the antiviral antibody.
The protein complex predicted by RosettaDock closely agreed with the especially difficult-to-obtain experimental models, leading researchers to conclude that the docking method has potential to address some of the problems that X-ray crystallography has with modelling protein–protein interfaces. HIV As part of research funded by a $19.4 million grant from the Bill & Melinda Gates Foundation, Rosetta@home has been used in designing multiple possible vaccines for human immunodeficiency virus (HIV). Malaria In research involved with the Grand Challenges in Global Health initiative, Rosetta has been used to computationally design novel homing endonuclease proteins, which could eradicate Anopheles gambiae or otherwise render the mosquito unable to transmit malaria. Being able to model and alter protein–DNA interactions specifically, like those of homing endonucleases, gives computational protein design methods like Rosetta an important role in gene therapy (which includes possible cancer treatments). COVID-19 The Rosetta molecular modelling suite was used to accurately predict the atomic-scale structure of the SARS-CoV-2 spike protein weeks before it could be measured in the lab. On June 26, 2020, the project announced it had succeeded in creating antiviral proteins that neutralize SARS-CoV-2 virions in the lab and that these experimental antiviral drugs were being optimized for animal testing trials. In a follow-up, a paper describing 10 SARS-CoV-2 miniprotein inhibitors was published in Science on September 9. Two of these inhibitors, LCB1 and LCB3, are several times more potent than the best monoclonal antibodies being developed against SARS-CoV-2, on both a molar and a mass basis. In addition, the research suggests that these inhibitors retain their activity at elevated temperatures and are 20-fold smaller than an antibody, and thus have 20-fold more potential neutralizing sites, increasing the potential efficacy of a locally administered drug. The small size and high stability of the inhibitors is expected to make them suitable for a gel formulation that can be applied nasally, or for a powder administered directly into the respiratory system. The researchers will work on developing these inhibitors into therapeutics and prophylactics in the months ahead. As of July 2021, these antiviral candidates were forecast to begin clinical trials in early 2022 and had received funding from the Bill & Melinda Gates Foundation for preclinical and early clinical trials. In animal testing trials, these antiviral candidates were effective against variants of concern including Alpha, Beta and Gamma. Rosetta@home was used to help screen the over 2 million SARS-CoV-2 spike-binding proteins that were computationally designed, and thus contributed to this research. According to the Rosetta@home team at the Institute for Protein Design, Rosetta@home volunteers contributed to the development of antiviral drug candidates and of a protein nanoparticle vaccine. The IVX-411 vaccine is already in a Phase 1 clinical trial run by Icosavax, while the same vaccine, licensed to another manufacturer under the name GBP510, has been approved in South Korea for a Phase 3 trial run by SK Bioscience. The candidate antivirals are also headed toward Phase 1 clinical trials. Cancer Rosetta@home researchers have designed an IL-2 receptor agonist called Neoleukin-2/15 that does not interact with the alpha subunit of the receptor. Such immune signalling molecules are useful in cancer treatment.
While natural IL-2 suffers from toxicity due to an interaction with the alpha subunit, the designed protein is much safer, at least in animal models. Rosetta@home contributed to "forward folding experiments" which helped validate designs. In a September 2020 feature in the New Yorker, David Baker stated that Neoleukin-2/15 would begin human clinical trials "later this year". Neoleukin-2/15 is being developed by Neoleukin, a spin-off company from the Baker lab. In December 2020, Neoleukin announced it would be submitting an Investigational New Drug application to the Food and Drug Administration in order to begin a Phase 1 clinical trial of NL-201, which is a further development of Neoleukin-2/15. A similar application was submitted in Australia, and Neoleukin hopes to enrol up to 120 participants in the Phase 1 clinical trial. The Phase 1 human clinical trial began on May 5, 2021. Development history and branches Originally introduced by the Baker laboratory in 1998 as an ab initio approach to structure prediction, Rosetta has since branched into several development streams and distinct services. The Rosetta platform derives its name from the Rosetta Stone, as it attempts to decipher the structural "meaning" of proteins' amino acid sequences. More than seven years after Rosetta's first appearance, the Rosetta@home project was released (i.e., announced as no longer beta) on October 6, 2005. Many of the graduate students and other researchers involved in Rosetta's initial development have since moved to other universities and research institutions, and subsequently enhanced different parts of the Rosetta project. RosettaDesign RosettaDesign, a computational approach to protein design based on Rosetta, began in 2000 with a study in redesigning the folding pathway of Protein G. In 2002 RosettaDesign was used to design Top7, a 93-amino-acid-long α/β protein with an overall fold never before recorded in nature. This new conformation was predicted by Rosetta to within 1.2 Å RMSD of the structure determined by X-ray crystallography, representing an unusually accurate structure prediction. Rosetta and RosettaDesign earned widespread recognition by being the first to design and accurately predict the structure of a novel protein of such length; the 2002 paper describing the dual approach prompted two positive letters in the journal Science and has been cited by more than 240 other scientific articles. The visible product of that research, Top7, was featured as the RCSB PDB's 'Molecule of the Month' in October 2006; a superposition of the respective cores (residues 60–79) of its predicted and X-ray crystal structures is featured in the Rosetta@home logo. Brian Kuhlman, a former postdoctoral associate in David Baker's lab and now an associate professor at the University of North Carolina, Chapel Hill, offers RosettaDesign as an online service. RosettaDock RosettaDock was added to the Rosetta software suite during the first CAPRI experiment in 2002 as the Baker laboratory's algorithm for protein–protein docking prediction. In that experiment, RosettaDock made a high-accuracy prediction for the docking between streptococcal pyogenic exotoxin A and a T cell-receptor β-chain, and a medium-accuracy prediction for a complex between porcine α-amylase and a camelid antibody. While the RosettaDock method made only two acceptably accurate predictions out of seven possible, this was enough to rank it seventh out of nineteen prediction methods in the first CAPRI assessment.
Development of RosettaDock diverged into two branches for subsequent CAPRI rounds as Jeffrey Gray, who laid the groundwork for RosettaDock while at the University of Washington, continued working on the method in his new position at Johns Hopkins University. Members of the Baker laboratory further developed RosettaDock in Gray's absence. The two versions differed slightly in side-chain modeling, decoy selection and other areas. Despite these differences, both the Baker and Gray methods performed well in the second CAPRI assessment, placing fifth and seventh respectively out of 30 predictor groups. Jeffrey Gray's RosettaDock server is available as a free docking prediction service for non-commercial use. In October 2006, RosettaDock was integrated into Rosetta@home. The method used a fast, crude docking model phase using only the protein backbone. This was followed by a slow full-atom refinement phase in which the orientation of the two interacting proteins relative to each other, and side-chain interactions at the protein–protein interface, were simultaneously optimized to find the lowest-energy conformation. The vastly increased computing power afforded by the Rosetta@home network, combined with revised fold-tree representations for backbone flexibility and loop modeling, placed RosettaDock sixth out of 63 prediction groups in the third CAPRI assessment. Robetta The Robetta (Rosetta Beta) server is an automated protein structure prediction service offered by the Baker laboratory for non-commercial ab initio and comparative modeling. It has participated as an automated prediction server in the biennial CASP experiments since CASP5 in 2002, performing among the best in the automated server prediction category. Robetta has since competed in CASP6 and 7, where it did better than average among both automated server and human predictor groups. It also participates in the CAMEO3D continuous evaluation. In modeling protein structure as of CASP6, Robetta first searches for structural homologs using BLAST, PSI-BLAST, and 3D-Jury, then parses the target sequence into its individual domains, or independently folding units of proteins, by matching the sequence to structural families in the Pfam database. Domains with structural homologs then follow a "template-based model" (i.e., homology modeling) protocol. Here, the Baker laboratory's in-house alignment program, K*sync, produces a group of sequence homologs, and each of these is modeled by the Rosetta de novo method to produce a decoy (possible structure). The final structure prediction is selected by taking the lowest-energy model as determined by a low-resolution Rosetta energy function. For domains that have no detected structural homologs, a de novo protocol is followed in which the lowest-energy model from a set of generated decoys is selected as the final prediction. These domain predictions are then connected together to investigate inter-domain, tertiary-level interactions within the protein. Finally, side-chain contributions are modeled using a protocol for Monte Carlo conformational search. In CASP8, Robetta was augmented to use Rosetta's high-resolution all-atom refinement method, the absence of which was cited as the main cause of Robetta being less accurate than the Rosetta@home network in CASP7. In CASP11, a method called GREMLIN, which predicts the protein contact map from the co-evolution of residues in related proteins, was added, allowing for more de novo fold successes.
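At its core, both Robetta's de novo protocol and Rosetta@home rely on generating many candidate structures ("decoys") and keeping the one with the lowest energy. The following Python toy is a minimal sketch of that decoy-sampling idea only; the conformation representation and the energy function are invented stand-ins, not Rosetta's actual models or scoring:

```python
import random

def toy_energy(conformation):
    # Invented stand-in for a low-resolution energy function:
    # penalizes deviation of each torsion angle from an arbitrary
    # "native" value of 60 degrees. Real Rosetta scoring is far richer.
    return sum((angle - 60.0) ** 2 for angle in conformation)

def generate_decoy(n_residues):
    # Invented decoy representation: one random backbone torsion
    # angle per residue, drawn uniformly from [-180, 180) degrees.
    return [random.uniform(-180.0, 180.0) for _ in range(n_residues)]

def predict(n_residues=50, n_decoys=10_000):
    # The core sampling loop: generate many decoys and select the
    # lowest-energy one as the prediction, as in the de novo protocol.
    decoys = (generate_decoy(n_residues) for _ in range(n_decoys))
    return min(decoys, key=toy_energy)

best = predict()
print(f"lowest toy energy found: {toy_energy(best):.1f}")
```

The sketch also shows why volunteer computing helps: more donated CPU time means more decoys sampled, and hence broader coverage of the exponentially large conformation space discussed above.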
Foldit On May 9, 2008, after Rosetta@home users suggested an interactive version of the distributed computing program, the Baker lab publicly released Foldit, an online protein structure prediction game based on the Rosetta platform. Foldit has had over 59,000 registered users. The game gives users a set of controls (for example, shake, wiggle, rebuild) to manipulate the backbone and amino acid side chains of the target protein into more energetically favorable conformations. Users can work on solutions individually as soloists or collectively as evolvers, accruing points under either category as they improve their structure predictions. Comparison to similar distributed computing projects There are several distributed computing projects with study areas similar to those of Rosetta@home, but they differ in their research approach: Folding@home Of all the major distributed computing projects involved in protein research, Folding@home is the only one not using the BOINC platform. Both Rosetta@home and Folding@home study protein misfolding diseases such as Alzheimer's disease, but misfolding diseases are a much more exclusive focus for Folding@home. Folding@home almost exclusively uses all-atom molecular dynamics models to understand how and why proteins fold (or potentially misfold, and subsequently aggregate to cause diseases). In other words, Folding@home's strength is modeling the process of protein folding, while Rosetta@home's strength is computing protein design and predicting protein structure and docking. Some of Rosetta@home's results are used as the basis for some Folding@home projects. Rosetta provides the most likely structure, but it is not certain whether that is the form the molecule takes, or whether it is viable. Folding@home can then be used to verify Rosetta@home's results, and can provide added atomic-level information and details of how the molecule changes shape. The two projects also differ significantly in their computing power and host diversity. Averaging about 6,650 teraFLOPS from a host base of central processing units (CPUs), graphics processing units (GPUs), and (formerly) PS3s, Folding@home has nearly 108 times more computing power than Rosetta@home. World Community Grid Both Phase I and Phase II of the Human Proteome Folding Project (HPF), a subproject of World Community Grid, have used the Rosetta program to make structural and functional annotations of various genomes. Although he now uses it to create databases for biologists, Richard Bonneau, head scientist of the Human Proteome Folding Project, was active in the original development of Rosetta at David Baker's laboratory while obtaining his PhD. More information on the relationship between HPF1, HPF2 and Rosetta@home can be found on Richard Bonneau's website. Predictor@home Like Rosetta@home, Predictor@home specialized in protein structure prediction. While Rosetta@home uses the Rosetta program for its structure prediction, Predictor@home used the dTASSER methodology. In 2009, Predictor@home shut down. Other protein-related distributed computing projects on BOINC include QMC@home, Docking@home, POEM@home, SIMAP, and TANPAKU. RALPH@home, the Rosetta@home alpha project which tests new application versions, work units, and updates before they move on to Rosetta@home, also runs on BOINC. Volunteer contributions Rosetta@home depends on computing power donated by individual project members for its research.
At one point, about 53,000 users from 150 countries were active members of Rosetta@home, together contributing idle processor time from about 54,800 computers for a combined average performance of over 1.7 petaFLOPS. Users are granted BOINC credits as a measure of their contribution. The credit granted for each workunit is the number of decoys produced for that workunit multiplied by the average claimed credit for the decoys submitted by all computer hosts for that workunit. This custom system was designed to address significant differences between credit granted to users with the standard BOINC client and an optimized BOINC client, and credit differences between users running Rosetta@home on Windows and Linux operating systems. The amount of credit granted per second of CPU work is lower for Rosetta@home than for most other BOINC projects. Rosetta@home ranks thirteenth out of over 40 BOINC projects in terms of total credit. Rosetta@home users who predict protein structures submitted for the CASP experiment are acknowledged in scientific publications regarding their results. Users who predict the lowest-energy structure for a given workunit are featured on the Rosetta@home homepage as Predictor of the Day, along with any team of which they are a member. A User of the Day is also chosen randomly each day from among users who have made a Rosetta@home profile. References External links Baker Lab Baker Lab website David Baker's Rosetta@home journal BOINC Includes platform overview, and a guide to install BOINC and attach to Rosetta@home BOINCstats – Rosetta@home Detailed contribution statistics RALPH@home Website for Rosetta@home alpha testing project Rosetta@home video on YouTube Overview of Rosetta@home given by David Baker and lab members Rosetta Commons Academic collaborative for developing the Rosetta platform Kuhlman lab webpage, home of RosettaDesign Online Rosetta services Rosetta Commons list of available servers Robetta Protein structure prediction server ROSIE Docking, design, etc. multifunctional server-set RosettaDesign Protein design server RosettaBackrub Flexible backbone / protein design server Berkeley Open Infrastructure for Network Computing projects Bioinformatics software Distributed computing projects Protein structure Molecular modelling Proprietary cross-platform software
1813309
https://en.wikipedia.org/wiki/List%20of%20minerals%20recognized%20by%20the%20International%20Mineralogical%20Association%20%28N%29
List of minerals recognized by the International Mineralogical Association (N)
N Nabalamprophyllite (lamprophyllite, seidozerite: IMA2001-060) 9.BE.25 Nabaphite (IMA1981-058) 8.CJ.15 (IUPAC: sodium barium phosphate nonahydrate) Nabateaite (IMA2021-026) [no] [no] Nabesite (IMA2000-024) 9.EA.65 [no] (IUPAC: disodium beryllium decaoxytetrasilicate tetrahydrate) Nabiasite (IMA1997-050) 8.BF.20 (IUPAC: barium nonamanganese dihydro hexavanadate) Nabimusaite (nabimusaite, arctite: IMA2012-057) 9.A?. [no] [no] (IUPAC: potassium dodecacalcium tetra(tetraoxysilicate) disulfate dioxofluoride) Nabokoite (IMA1985-013a) 7.BC.20 (IUPAC: potassium heptacopper oxochloro tellurate(IV) pentasulfate) Nacaphite (IMA1979-026) 8.BO.05 (IUPAC: disodium calcium fluoro phosphate) Nacareniobsite-(Ce) (rinkite, seidozerite: IMA1987-040) 9.BE.20 Nacrite (Y: 1807) 9.ED.05 (IUPAC: dialuminium pentaoxydisilicate tetrahydroxyl) Nadorite (nadorite: 1870) 3.DC.30 (IUPAC: lead antimony(III) dioxochloride) Nafertisite (IMA1994-007) 9.EH.30 [no] Nagashimalite (IMA1977-045) 9.CE.20 Nagelschmidtite (IMA1987 s.p., 1942) 9.HA.60 (IUPAC: heptacalcium di(tetraoxosilicate) diphosphate) Nagyágite (Y: 1845) 2.HB.20a Nahcolite (Y: 1929) 5.AA.15 (IUPAC: sodium bicarbonate) Nahpoite (IMA1981-002) 8.AD.05 (IUPAC: disodium hydroxophosphate) Nakauriite (IMA1976-016) 7.DG.30 (IUPAC: octacopper hexahydro tetrasulfate carbonate octatetracontahydrate) Nakkaalaaqite (IMA2020-059) 9.CJ. [no] [no] Naldrettite (IMA2004-007) 2.AC.25d (IUPAC: dipalladium antimonide) Nalipoite (IMA1990-030) 8.AA.25 (IUPAC: sodium dilithium phosphate) Nalivkinite (IMA2006-038) 9.DC.05 [no] Namansilite (IMA1989-026) 9.DA.25 (IUPAC: sodium manganese(III) hexaoxydisilicate) Nambulite (rhodonite: IMA1971-032) 9.DK.05 (IUPAC: lithium tetramanganese(II) tetradecaoxypentasilicate hydroxyl) Namibite (IMA1981-024) 8.BB.50 (IUPAC: copper dioxobismuth hydro vanadate) Namuwite (ktenasite: IMA1981-020) 7.DD.50 (IUPAC: tetrazinc hexahydro sulfate tetrahydrate) Nanlingite (IMA1985-xxx ?, 1976) 4.JB.25 Nanpingite (mica: IMA1987-006) 9.EC.15 (IUPAC: cesium dialuminium (aluminotrisilicate) decaoxydedihydroxyl) Nantokite (sphalerite: 1867) 3.AA.05 (IUPAC: copper chloride) Naquite (silicide: IMA2010-010) 1.BB.15 [no] (IUPAC: iron silicide) Narsarsukite (IMA1967 s.p., 1901) 9.DJ.05 Nashite (IMA2011-105) 8.0 [no] Nasinite (IMA1967 s.p., 1961) 6.EC.05 (IUPAC: disodium hydro octaoxodiborate dihydrate) NasledoviteQ (alumohydrocalcite: 1959) 5.DB.05 Nasonite (Y: 1899) 9.BE.77 (IUPAC: dicalcium hexalead tri(heptaoxydisilicate) dichloride) Nastrophite (IMA1980-051) 8.CJ.15 (IUPAC: sodium strontium phosphate nonahydrate) Nataliakulikite (perovskite: IMA2018-061) 9.0 [no] [no] Nataliyamalikite (IMA2016-022) 3.0 [no] [no] (IUPAC: thallium(I) iodide) Natalyite (pyroxene: IMA1984-053) 9.DA.25 (IUPAC: sodium vanadium(III) hexaoxydisilicate) Natanite (perovskite, schoenfliesite: IMA1980-028) 4.FC.10 (IUPAC: iron(II) tin(IV) hexahydroxide) Natisite (IMA1974-035) 9.AG.40a Natrite (IMA1981-005) 5.AA.10 (IUPAC: disodium carbonate) Natroalunite (alunite: IMA1987 s.p., 1902 Rd) 7.BC.10 (IUPAC: sodium trialuminium hexahydro disulfate) Natroaphthitalite (aphthitalite: IMA2018-091) 7.0 [no] [no] (IUPAC: potassium trisodium disulfate) Natroboltwoodite (IMA2007 s.p., 1975) 9.AK.15 (IUPAC: sodium uranyl (trioxyhydroxyl silicate) monohydrate) Natrochalcite (tsumcorite: 1908) 7.DF.15 (IUPAC: sodium dicopper hydro disulfate monohydrate) Natrodufrénite (dufrénite: IMA1981-033) 8.DK.15 (IUPAC: sodium iron(II) pentairon(III) hexahydro tetraphosphate dihydrate) NatroglaucoceriniteQ (woodwardite: 
IMA1995-025) 7.DD.35 [no] Natrojarosite (alunite, alunite: IMA1987 s.p., 1902 Rd) 7.BC.10 (IUPAC: sodium triiron(III) hexahydro disulfate) NatrokomaroviteN (komarovite: 1979) 9.CE.45 [no] Natrolemoynite (lemoynite: IMA1996-063) 9.DP.35 [no] (IUPAC: tetrasodium dizirconium decasilicate hexacosaoxy hydrate) Natrolite (zeolitic tectosilicate: IMA1997 s.p., 1803) 9.GA.05 Natromarkeyite (markeyite: IMA2018-152) 5.0 [no] [no] Natron (IMA1967 s.p., 1783) 5.CB.10 (IUPAC: disodium carbonate decahydrate) Natronambulite (rhodonite: IMA1981-034) 9.DK.05 NatroniobiteQ (oxide perovskite: 1962) 4.CC.30 (IUPAC: sodium niobium trioxide) Natropalermoite (carminite: IMA2013-118) 8.0 [no] [no] (IUPAC: disodium strontium tetraluminium tetrahydro tetraphosphate) Natropharmacoalumite (pharmacosiderite: IMA2010-009) 8.DK.12 [no] (IUPAC: sodium tetraluminium tetrahydro triarsenate tetrahydrate) Natropharmacosiderite (pharmacosiderite: IMA1983-025) 8.DK.10 (IUPAC: disodium tetrairon(III) pentahydro triarsenate heptahydrate) Natrophilite (olivine: 1890) 8.AB.10 (IUPAC: sodium manganese(II) phosphate) Natrophosphate (IMA1971-041) 8.DN.05 (IUPAC: heptasodium fluoro diphosphate nonadecahydrate) Natrosilite (IMA1974-043) 9.EE.40 (IUPAC: disodium pentaoxydisilicate) Natrosulfatourea (IMA2019-134) 10.0 [no] [no] Natrotantite (IMA1980-026) 4.DJ.05 (IUPAC: disodium tetratantalium undecaoxide) Natrotitanite (titanite: IMA2011-033) 9.AG.15 [no] Natrouranospinite (natroautunite: IMA2007 s.p., 1957) 8.EB.15 (IUPAC: disodium diuranyl diarsenate pentahydrate) Natrowalentaite (walentaite: IMA2018-032a) 8.0 [no] [no] Natroxalate (oxalate: IMA1994-053) 10.AB.60 (IUPAC: disodium oxalate) Natrozippeite (zippeite: IMA1971-004) 7.EC.05 (IUPAC: pentasodium octauranyl trihydro pentaoxo tetrasulfate dodecahydrate) Naujakasite (Y: 1933) 9.EG.10 Naumannite (Y: 1828) 2.BA.55 (IUPAC: disilver selenide) Navajoite (ajoite: 1954) 4.HG.30 Navrotskyite (IMA2019-026) 7.0 [no] [no] Nazarovite (phosphide: IMA2019-013) 1.0 [no] [no] Nchwaningite (IMA1994-002) 9.DB.30 [no] (IUPAC: dimanganese trioxosilicate dihydroxyl monohydrate) Nealite (IMA1979-050) 4.JD.05 [no] (IUPAC: tetralead iron tetrachloro diarsenite dihydrate) Nechelyustovite (seidozerite, lamprophyllite: IMA2006-021) 09.BE.55 [no] Nefedovite (IMA1982-048) 8.BO.30 (IUPAC: pentasodium tetracalcium fluoro tetraphosphate) Negevite (phosphide: IMA2013-104) 1.0 [no] [no] (IUPAC: nickel diphosphide) Neighborite (perovskite: IMA1967 s.p., 1961) 3.AA.35 (IUPAC: sodium magnesium trifluoride) Nekoite (Y: 1956) 9.EA.45 (IUPAC: tricalcium pentadecaoxyhexasilicate heptahydrate) Nekrasovite (germanite: IMA1983-051) 2.CB.30 (Cu13VSn3S16) Nelenite (pyrosmalite: IMA1982-011) 9.EE.15 Neltnerite (braunite: IMA1979-059) 9.AG.05 (IUPAC: calcium hexamagnesium(III) octaoxy(tetraoxysilicate)) Nenadkevichite (labuntsovite: 1955) 9.CE.30a Neotocite (allophane: 1849) 9.ED.20 Nepheline (feldspathoid, nepheline: 1801) 9.FA.05 (IUPAC: potassium trisodium (hexadecaoxytetralumotetrasilicate)) Népouite (serpentine: 1907) 9.ED.15 (IUPAC: trinickel pentaoxodisilicate tetrahydroxyl) Nepskoeite (IMA1996-016) 3.BD.20 [no] (IUPAC: tetramagnesium heptahydroxide chloride hexahydrate) Neptunite (neptunite: 1893) 9.EH.05 Neskevaaraite-Fe (labuntsovite: IMA2002-007) 9.CE.30h Nesquehonite (Y: 1890) 5.CA.05 (IUPAC: magnesium carbonate trihydrate) Nestolaite (IMA2013-074) 4.0 [no] [no] (IUPAC: calcium selenite monohydrate) Neustädtelite (IMA1998-016) 8.BK.10 Nevadaite (IMA2002-035) 8.DC.60 Nevskite (tetradymite: IMA1983-026) 2.DC.05 
(IUPAC: bismuth (selenide,sulfide)) Newberyite (Y: 1879) 8.CE.10 (IUPAC: magnesium hydroxophosphate trihydrate) Neyite (sulphosalt, neyite: IMA1968-017) 2.JB.25i (Ag2Cu6Pb25Bi26S68) Nežilovite (magnetoplumbite: IMA1994-020) 4.CC.45 [no] Niahite (IMA1977-022) 8.CH.20 (IUPAC: ammonium manganese(II) phosphate monohydrate) Niasite (IMA2019-105) 8.0 [no] [no] () NichromiteN (spinel: 1978) 4.BB.05 [no] Nickel (IMA1966-039) 1.AA.05 NickelalumiteN (chalcoalumite: 1980) 7.DD.75 [no] Nickelaustinite (adelite: IMA1985-002) 8.BH.35 (IUPAC: calcium nickel hydro arsenate) Nickelbischofite (IMA1978-056) 3.BB.20 (IUPAC: nickel dichloride hexahydrate) Nickelblödite (blödite: IMA1976-014) 7.CC.50 (IUPAC: disodium nickel disulfate tetrahydrate) Nickelboussingaultite (picromerite: IMA1975-037) 7.CC.60 (IUPAC: diammonium nickel disulfate hexahydrate) Nickelhexahydrite (hexahydrite: IMA1968 s.p., 1965) 7.CB.25 (IUPAC: nickel sulfate hexahydrate) Nickeline (nickeline: IMA1967 s.p., 1832) 2.CC.05 (IUPAC: nickel arsenide) Nickellotharmeyerite (tsumcorite: IMA1999-008) 8.CG.15 [no] (IUPAC: calcium dinickel diarsenate dihydrate) Nickelphosphide (phosphide: IMA1998-023) 1.BD.05 (IUPAC: trinickel phosphide) Nickelpicromerite (picromerite: IMA2012-053) 7.CC. [no] [no] (IUPAC: dipotassium nickel disulfate hexahydrate) Nickelschneebergite (tsumcorite: IMA1999-028) 8.CG.15 (IUPAC: bismuth dinickel hydro diarsenate hydrate) Nickelskutterudite (perovskite, skutterudite: IMA2007 s.p., 1893) 2.EC.05 Nickeltalmessite (fairfieldite: IMA2008-051) 8.CG.35 [no] (IUPAC: dicalcium nickel diarsenate dihydrate) Nickeltsumcorite (tsumcorite: IMA2013-117) 8.0 [no] [no] Nickeltyrrellite (seleniospinel: IMA2018-110) 2.0 [no] [no] (IUPAC: copper dinickel tetraselenide) Nickelzippeite (zippeite: IMA1971-005) 7.EC.05 (IUPAC: dinickel hexauranyl decahydro trisulfate hexadecahydrate) Nickenichite (alluaudite: IMA1992-014) 8.AC.10 () Nickolayite (phosphide: IMA2018-126) 1.0 [no] [no] (IUPAC: iron molybdenum phosphide) Nicksobolevite (IMA2012-097) 4.0 [no] [no] (IUPAC: heptacopper dioxohexachloro diselenite) Niedermayrite (IMA1997-024) 7.DD.30 (IUPAC: tetracopper cadmium hexahydro disulfate tetrahydrate) Nielsbohrite (IMA2002-045b) 8.0 Nielsenite (alloy: IMA2004-046) 1.AG.70 (IUPAC: palladium tricopper alloy) Nierite (nitride: IMA1994-032) 1.DB.05 (IUPAC: trisilicon tetranitride) Nifontovite (IMA1967 s.p., 1961) 6.CA.50 (IUPAC: tricalcium hexa[dihydroxoborate] dihydrate) Niggliite (tin alloy: 1938) 1.AG.60 (IUPAC: platinum stannide) Niigataite (epidote, clinozoisite: IMA2001-055) 9.BG.05 [no] (IUPAC: calcium strontium trialuminium (heptaoxodisilicate) (tetraoxosilicate) hydroxyl) Nikischerite (hydrotalcite, wermlandite: IMA2001-039) 7.DD.35 [no] Nikmelnikovite (garnet: IMA2018-043) 4.0 [no] [no] Niksergievite (IMA2002-036) 9.EC.75 [no] Nimite (chlorite: IMA1971 s.p., IMA1969-012) 9.EC.55 Ningyoite (rhabdophane: IMA1962 s.p., 1959) 8.CJ.85 Niningerite (rocksalt, galena: IMA1966-036) 2.CD.10 (IUPAC: magnesium sulfide) Nioboaeschynite (aeschynite) 4.DF.05 Nioboaeschynite-(Ce) (IMA1987 s.p., 1960) 4.DF.05 Nioboaeschynite-(Y) (IMA2003-038a) 4.DF.05 Niobocarbide (carbide: IMA1995-035) 1.BA.20 (IUPAC: niobium carbide) Nioboheftetjernite (wolframite: IMA2019-133) 4.0 [no] [no] Nioboholtite (dumortierite: IMA2012-068) 9.A?. 
[no] [no] Niobokupletskite (astrophyllite, kupletskite: IMA1999-032) 9.DC.05 [no] Niobophyllite (astrophyllite, astrophyllite: IMA1964-001) 9.DC.05 Niocalite (wöhlerite: 1956) 9.BE.17 (IUPAC: heptacalcium niobium heptaoxodisilicate trioxofluoride) Nipalarsite (IMA2018-075) 2.0 [no] [no] (IUPAC: octanickel tripalladium tetrarsenide) Nisbite (löllingite: IMA1969-017) 2.EB.15a (IUPAC: nickel diantimonide) Nishanbaevite (IMA2019-012) 8.0 [no] [no] Nisnite (auricupride: IMA2009-083) 1.0 [no] [no] (IUPAC: trinickel tin alloy) Nissonite (IMA1966-026) 8.DC.05 (IUPAC: dicopper dimagnesium dihydro diphosphate pentahydrate) Niter (nitrate: old) 5.NA.10 (IUPAC: potassium nitrate) Nitratine (calcite: IMA1980 s.p., 1845) 5.NA.05 (IUPAC: sodium nitrate) Nitrobarite (nitrate: 1882) 5.NA.20 (IUPAC: barium dinitrate) Nitrocalcite (nitrate: 1783) 5.NC.10 (IUPAC: calcium dinitrate tetrahydrate) Nitromagnesite (nitrate: 1783) 5.NC.05 (IUPAC: magnesium dinitrate hexahydrate) Nitroplumbite (nitrate: IMA2021-045a) [no] [no] Nitscheite (IMA2020-078) 7.0 [no] [no] Niveolanite (IMA2007-032) 5.DC.35 (IUPAC: sodium beryllium hydro carbonate dihydrate) Nixonite (IMA2018-133) 4.0 [no] [no] Nizamoffite (hopeite: IMA2012-076) 8.0 [no] [no] Nobleite (IMA1967 s.p., 1961) 6.FC.05 Noelbensonite (IMA1994-058 Rd) 9.BE.05 [no] Nöggerathite-(Ce) (zirconolite: IMA2017-107) 4.0 [no] [no] Nolanite (nolanite: 1957) 4.CB.40 Nollmotzite (IMA2017-100) 4.0 [no] [no] Nolzeite (IMA2014-086) 9.0 [no] [no] Nontronite (montmorillonite, smectite: IMA1962 s.p., 1928) 9.EC.40 Noonkanbahite (batisite: IMA2009-059) 9.DH. [no] Norbergite (humite: 1926) 9.AF.40 Nordenskiöldine (Y: 1887) 6.AA.15 (IUPAC: calcium tin(IV) diborate) Nordgauite (nordgauite: IMA2010-040) 8.DC.30 [no] Nordite (nordite) 9.DO.15 Nordite-(Ce) (IMA1966 s.p., 1958) 9.DO.15 [no] Nordite-(La) (IMA1987 s.p., 1941) 9.DO.15 Nordstrandite (IMA1967 s.p., 1958) 4.FE.10 (IUPAC: aluminium trihydroxide) Nordströmite (IMA1978-073) 2.JB.25c (Pb3CuBi7(S,Se)14) Norilskite (alloy: IMA2015-008) 1.0 [no] [no] (IUPAC: hepta(palladium,silver) tetralead alloy) Normandite (wöhlerite: IMA1990-021) 9.BE.17 [no] Norrishite (mica: IMA1989-019) 9.EC.20 (IUPAC: potassium lithium dimanganese(III) dioxy dedecaoxytetrasilicate) Norsethite (IMA1962 s.p.) 
5.AB.30 (IUPAC: barium magnesium dicarbonate) Northstarite (IMA2019-031) 7.0 [no] [no] Northupite (northupite: 1895) 5.BF.05 (IUPAC: trisodium magnesium chloro dicarbonate) Nosean (sodalite: 1815) 9.FB.10 Nováčekite 8.EB.05 Nováčekite-I (IMA2007 s.p., 1951) 8.EB.05 [no] [no] (IUPAC: magnesium diuranyl diarsenate dodecahydrate) Nováčekite-II (IMA2007 s.p., 1951) 8.EB.05 (IUPAC: magnesium diuranyl diarsenate decahydrate) Novákite (metalloid alloy: IMA1967 s.p., 1961) 2.AA.15 () Novgorodovaite (oxalate: IMA2000-039) 10.AB.80 [no] (IUPAC: dicalcium dichloro oxalate dihydrate) Novodneprite (khatyrkite: IMA2002-032a) 1.AA.15 [no] (IUPAC: gold trilead alloy) Novograblenovite (carnallite: IMA2017-060) 3.0 [no] [no] (IUPAC: (ammonium,potassium) magnesium trichloride hexahydrate) Nowackiite (nowackiite: IMA1971 s.p., 1965) 2.GA.30 (Cu6Zn3As4S12) Nsutite (ramsdellite: IMA1967 s.p., 1962) 4.DB.15c Nuffieldite (kobellite: IMA1967-003) 2.JB.25g (Cu1.4Pb2.4Bi2.4Sb0.2S7) Nukundamite (IMA1978-037) 2.CA.10 (Cu3.4Fe0.6S4) Nullaginite (malachite: IMA1978-011) 5.BA.10 (IUPAC: dinickel dihydro carbonate) Numanoite (IMA2005-050) 6.DA.40 Nuragheite (IMA2013-088) 7.0 [no] [no] (IUPAC: thorium dimolybdate monohydrate) Nuwaite (nuwaite: IMA2013-018) 2.0 [no] [no] (IUPAC: hexanickel germanium disulfide) Nyboite [Na-amphibole: IMA2012 s.p., IMA1997 s.p., 1981] 9.DE.25 Nyerereite (fairchildite: IMA1963-014) 5.AC.10 (IUPAC: disodium calcium dicarbonate) Nyholmite (hureaulite: IMA2008-047) 8.CB.10 [no] (IUPAC: tricadmium dizinc dihydroxoarsenate diarsenate tetrahydrate) External links IMA Database of Mineral Properties/ RRUFF Project Mindat.org - The Mineral Database Webmineral.com Mineralatlas.eu minerals N
47860579
https://en.wikipedia.org/wiki/Croma%20Security
Croma Security
Croma Security Solutions Group PLC (CSS Group PLC or Croma Security), previously Access Lock and Key, is a holding company for a group of four security companies. It is publicly traded on the London Stock Exchange's AIM market and is based in the Solent Business Park, Whiteley, Hampshire. It has further offices in London, Dumfries, and Abu Dhabi, with locksmith outlets along the South Coast. The group is involved in many aspects of security, from guarding to locksmithing and many types of electronic security. The group currently employs (through its subsidiaries) over 500 people, with a mixture of skill-sets including security officers and engineers. Its operations span most of the UK (with some work in the Middle East), and it has a diverse client base ranging from domestic customers to FTSE 100 companies and other 'blue chip' organisations. The present structure was formed by the "reverse acquisition" of the Southampton-based CSS Group of companies by the Dumfries-based Croma Group PLC. History County Locksmiths (Hants) Ltd and the foundations On 18 February 1970, Fernando Fiorentino founded County Locksmiths Security Ltd, after learning the trade from his father in Tripoli, Libya. The shop, based at 324 Shirley Road, Shirley, Southampton, offered basic key-cutting, lock installation and changing, and emergency call-out. In 1981, aged 18, Roberto Fiorentino (Fernando's first son) joined County Locksmiths, where he learnt the locksmithing trade under guidance from his father, helping him grow the single-shop business into three very profitable branches: Shirley, Bitterne and Chandlers Ford. The Shirley branch moved from 324 Shirley Road to number 346 in 1981, where the headquarters of the locksmith division remained until 9 December 2015. County Access Systems and CSS Total Security 1989 saw Roberto start his own security business, County Access Systems Ltd (CAS) – what would later become County Security Systems and then CSS Total Security Ltd – offering electronic security systems, including access control and CCTV systems, and broadening the scope of services that could be provided. Roberto still worked closely in partnership with CLS throughout this time, and even operated from the same premises. In the early 1990s, CSS (County Security Systems) began exploring and implementing remote CCTV viewing over phone-line connections. Roberto then purchased CLS from his father in 1997. Over the first few years of the millennium, CSS became one of the first security companies to remotely view cameras on a smartphone over a mobile data connection. During this period, the company also began using, designing, and installing biometric systems such as iris scanning and fingerprint technology. In 2003, CSS partnered with Cieffe and began investigating and offering video analytics to clients with high-specification CCTV systems. Revenue parking entered the scene in 2007, bringing many large contracts along with it. CSS's operations extended as far afield as Scotland in 2010 with a new office opening, broadening the company's horizons and creating greater market opportunities. CSS acquired Alarm Bell Ltd in 2011, adding fire alarm and safety systems to CSS's product and service portfolio, and became close business partners with Scotland-based Croma Group PLC, comprising Vigilant Security Services and Photobase Ltd. This allowed CSS to offer close protection, keyholding, manned guarding, and further biometric security services to its customers.
Photobase was responsible for creating the software/hardware system FastVein™. After the acquisition of Alarm Bell Ltd and Arlo Parking Ltd, CSS Total Security and County Locksmiths became known as the CSS Group. Reverse takeover and flotation In June 2012, Croma Group PLC and the CSS Group went through a merger and flotation on the AIM market, with the intention of becoming the UK's leading total security services provider, with former Black Watch and Special Forces officer Sebastian Morley as chairman and Roberto Fiorentino as CEO. The company's TIDM changed from CMG.L to CSSG.L. With the takeover, Croma Security Solutions Group PLC (CSSG) was born, offering comprehensive and turnkey security solutions to HNWIs, UHNWIs, and large corporate clients. Acquisition of Access Key & Lock On 9 December 2015, Croma Security Solutions Group PLC formally announced its acquisition of Access Locksmiths Ltd, trading as Access Key & Lock. Access Key & Lock was founded and owned by former County Locksmiths employee Mark Whettingsteel, who joined the board as an executive director and acts as managing director of the newly formed 'Croma Locksmiths & Security Solutions' division. In the financial year prior to the acquisition, Access' turnover was GBP 2.4 million, with gross profits of GBP 300,000. All 38 staff joined the 500+ strong team already employed by the company. Subsidiaries and portfolio The Group now has four operating subsidiaries, most of which operate from different regions and offices owing to the geographical diversity and nature of its contracts and customer portfolio. Croma Locksmiths (and Security Solutions) 'Croma Locksmiths' comprises the businesses of CSS Locksmiths Ltd and, as of 9 December 2015, Access Locksmiths Ltd. The division's services include the supply, installation, and maintenance of locks, safes, keys, and other ironmongery and door furniture. The division operates from seven locations along the South Coast: Bournemouth; Southampton (Shirley); Southampton (Portswood); Winchester; Fareham (head office); Portsmouth (Southsea); and Brighton. All branches operate an emergency call-out service. Croma Biometrics Croma Biometrics is the biometric division of the group, focusing much of its efforts on FastVein™, its proprietary software/hardware vein recognition technology, which was developed with Hitachi. The biometrics division comprises Photobase Ltd and some previous CSS Total Security employees. This division is heavily involved in research and development. Its services include identity management, time and attendance solutions, and access control. Croma Biometrics' FastVein product was used in the London 2012 Olympic Games, with Chief Superintendent David Griffith of Dorset Police describing it as 'excellent'. It was also installed in 18 HM Prisons, for bailing and enrolling inmates, as well as providing control over visitors. Croma Biometrics announced on 2 October 2015 the official launch of its time and attendance offering, FastVein™ T&A. The solution features a web-based platform accessible from almost any internet-enabled device and is claimed to 'far exceed the benefits of a traditional card or token system'. On 13 and 14 December 2015, the Sunday Times reported that CSS Group PLC had developed a spin-off of FastVein™ for use in educational establishments. Predominantly, the system replaces traditional register-taking activities and asks pupils to scan their fingers to demonstrate attendance.
Further to this, and more specifically for boarding schools and public schools, pupils can request time to leave the school grounds via touch-screen interfaces linked to the system; the request is forwarded to housemasters, who may then authorise the absence or time off. If pupils do not return and scan in within a certain time frame of the time they specified they would return by, teachers receive email alerts and a prompt to telephone the pupils. The system also issues a fire register when the alarm is triggered. Croma Vigilant Croma Vigilant is the group's 'manpower', investigations and asset protection branch, supplying manned guarding, keyholding, close protection, private investigation and covert surveillance. Croma Vigilant is a trading style of Vigilant Security Services Ltd and is run by Major Sebastian Morley, the group's executive chairman, who famously ended his military career as a squadron commander of 23 SAS amid claims that the military was "under-resourced", attributing the deaths of numerous servicemen to this. Given Mr Morley's background, a strong military ethos has been adopted and is emphasised throughout; almost all personnel are ex-forces, and a large tranche of these are ex-Special Forces. Croma Security Systems Croma Security Systems is the electronic security division of the group, engaged in the consultation, design, supply, installation, maintenance and servicing of bespoke integrated CCTV, intruder alarm, fire alarm, access control and revenue parking systems. The division operates a team of engineers and a corresponding fleet of vehicles to allocate and carry out its work. Croma Security Systems comprises predominantly CSS Total Security Ltd, but includes other companies purchased by CSS Total Security Ltd prior to the reverse takeover of Croma Group PLC, including Alarm Bell Ltd and Arlo Parking Ltd. Most of the division's work takes place on the South Coast; however, increasing demand for its services in London and Scotland led to the addition of engineers to the roster in these locations as well. Croma Security Systems heavily supports local businesses and communities, including Hampshire County Council and Southampton City Council, and even undertook charitable work for an arson-hit nursery. Board Chairman - Sebastian Morley Chief Executive Officer - Roberto Fiorentino Financial Director - Richard Juett Executive Director - Mark Whettingsteel Executive Operations Director - Paul Williamson Non-Executive Director - Nick Hewson Non-Executive Director - Charlie McMicking References 1970 establishments in England Companies listed on the London Stock Exchange
593786
https://en.wikipedia.org/wiki/Simple%20Authentication%20and%20Security%20Layer
Simple Authentication and Security Layer
Simple Authentication and Security Layer (SASL) is a framework for authentication and data security in Internet protocols. It decouples authentication mechanisms from application protocols, in theory allowing any authentication mechanism supported by SASL to be used in any application protocol that uses SASL. Authentication mechanisms can also support proxy authorization, a facility allowing one user to assume the identity of another. They can also provide a data security layer offering data integrity and data confidentiality services. DIGEST-MD5 provides an example of mechanisms which can provide a data-security layer. Application protocols that support SASL typically also support Transport Layer Security (TLS) to complement the services offered by SASL. John Gardiner Myers wrote the original SASL specification (RFC 2222) in 1997 while at Carnegie Mellon University. In 2006, that document was replaced by RFC 4422, authored by Alexey Melnikov and Kurt D. Zeilenga. SASL, as defined by RFC 4422, is an IETF Standard Track protocol and is a Proposed Standard. SASL mechanisms A SASL mechanism implements a series of challenges and responses. Defined SASL mechanisms include: EXTERNAL, where authentication is implicit in the context (e.g., for protocols already using IPsec or TLS); ANONYMOUS, for unauthenticated guest access; PLAIN, a simple cleartext password mechanism, defined in RFC 4616; OTP, a one-time password mechanism, which obsoletes the SKEY mechanism; SKEY, an S/KEY mechanism; CRAM-MD5, a simple challenge-response scheme based on HMAC-MD5; DIGEST-MD5 (historic), a partially HTTP Digest-compatible challenge-response scheme based upon MD5, which offered a data-security layer; SCRAM (RFC 5802), a modern challenge-response mechanism with channel binding support; NTLM, an NT LAN Manager authentication mechanism; GS2, a family of mechanisms supporting arbitrary GSS-API mechanisms in SASL, now standardized as RFC 5801; GSSAPI, for Kerberos V5 authentication via the GSSAPI, which offers a data-security layer; BROWSERID-AES128, for Mozilla Persona authentication; EAP-AES128, for GSS EAP authentication; GateKeeper (and GateKeeperPassport), a challenge-response mechanism developed by Microsoft for MSN Chat; OAUTHBEARER, OAuth 2.0 bearer tokens (RFC 6750), communicated through TLS; and OAUTH10A, OAuth 1.0a message-authentication-code tokens (RFC 5849, Section 3.4.2). The PLAIN and CRAM-MD5 exchanges are sketched below. SASL-aware application protocols Application protocols define their representation of SASL exchanges with a profile. A protocol has a service name such as "ldap" in a registry shared with GSSAPI and Kerberos.
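Two of the simpler mechanisms can be illustrated concretely. The following Python sketch computes a PLAIN initial response as defined in RFC 4616 (a NUL-separated authorization identity, authentication identity, and password) and a CRAM-MD5 response as defined in RFC 2195 (the username followed by a hex-encoded HMAC-MD5 of the server's challenge, keyed with the password). The credentials and the challenge string here are hypothetical; a real client obtains the challenge from the server:

```python
import base64
import hashlib
import hmac

def plain_initial_response(authcid: str, passwd: str, authzid: str = "") -> bytes:
    # RFC 4616: message = [authzid] NUL authcid NUL passwd.
    # Many protocol profiles (e.g., SMTP's AUTH PLAIN) carry it base64-encoded.
    message = f"{authzid}\0{authcid}\0{passwd}".encode("utf-8")
    return base64.b64encode(message)

def cram_md5_response(username: str, passwd: str, challenge: bytes) -> bytes:
    # RFC 2195: response = username SP hex(HMAC-MD5(key=password, msg=challenge)),
    # where the challenge has already been base64-decoded by the client.
    digest = hmac.new(passwd.encode("utf-8"), challenge, hashlib.md5).hexdigest()
    return base64.b64encode(f"{username} {digest}".encode("utf-8"))

# Hypothetical credentials and server challenge, for illustration only.
print(plain_initial_response("alice", "hunter2"))
challenge = b"<1896.697170952@example.org>"
print(cram_md5_response("alice", "hunter2", challenge))
```

The contrast illustrates why PLAIN is normally confined to TLS-protected connections, while challenge-response schemes such as CRAM-MD5 avoid sending the password itself over the wire.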
Protocols currently supporting SASL include: Application Configuration Access Protocol Advanced Message Queuing Protocol (AMQP) Blocks Extensible Exchange Protocol Internet Message Access Protocol (IMAP) Internet Message Support Protocol Internet Relay Chat (IRC) (with IRCX or the IRCv3 SASL extension) Lightweight Directory Access Protocol (LDAP) libvirt ManageSieve (RFC 5804) memcached Post Office Protocol (POP) Remote framebuffer protocol used by VNC Simple Mail Transfer Protocol (SMTP) Subversion protocol Extensible Messaging and Presence Protocol (XMPP) See also Transport Layer Security (TLS) References External links RFC 4422 – Simple Authentication and Security Layer (SASL) – obsoletes RFC 2222 RFC 4505 – Anonymous Simple Authentication and Security Layer (SASL) Mechanism – obsoletes RFC 2245 RFC 4616 – The PLAIN Simple Authentication and Security Layer (SASL) Mechanism – updates RFC 2595 The IETF SASL Working Group, chartered to revise existing SASL specifications, as well as to develop a family of GSSAPI mechanisms Cyrus SASL, a free and portable SASL library providing generic security for various applications GNU SASL, a free and portable SASL command-line utility and library, distributed under the GNU GPLv3 and LGPLv2.1, respectively Dovecot SASL, a SASL implementation RFC 2831 (historic) – Using Digest Authentication as a SASL Mechanism, obsoleted by RFC 6331 Java SASL API Programming and Deployment Guide Cryptographic protocols Internet Standards Computer access control protocols
844158
https://en.wikipedia.org/wiki/Cramfs
Cramfs
The compressed ROM/RAM file system (or cramfs) is a free (GPL'ed) read-only Linux file system designed for simplicity and space-efficiency. It is mainly used in embedded and small-footprint systems. Unlike a compressed image of a conventional file system, a cramfs image can be used as is, i.e. without first being decompressed. For this reason, some Linux distributions have used cramfs for initrd images (Debian 3.1 in particular) and installation images (SUSE Linux in particular), where there are constraints on memory and image size. In 2013, Linux maintainers indicated that cramfs had been made obsolete by squashfs, but the file system was rehabilitated in 2017 for use in low-memory devices where using squashfs may not be viable. Design Files on cramfs file systems are zlib-compressed one page at a time to allow random read access. The metadata is not compressed, but is expressed in a terse representation that is more space-efficient than that of conventional file systems. The file system is intentionally read-only to simplify its design; random write access for compressed files is difficult to implement. cramfs ships with a utility (mkcramfs) to pack files into new cramfs images. File sizes are limited to less than 16 MB. Maximum file system size is a little under 272 MB. (The last file on the file system must begin before the 256 MB boundary, but can extend past it.) See also List of file systems Comparison of file systems SquashFS, a read-only compressed file system Util-linux contains the cramfs utilities References External links Free special-purpose file systems Compression file systems Read-only file systems supported by the Linux kernel
68181960
https://en.wikipedia.org/wiki/Wolfgang%20Koch%20%28computer%20scientist%29
Wolfgang Koch (computer scientist)
Johann Wolfgang Koch (born 18 October 1962) is a German physicist and computer scientist. He teaches applied computer science at the University of Bonn, Germany, and is chief scientist of the Fraunhofer Institute for Communication, Information Processing and Ergonomics. In 2011, Koch was elected an IEEE Fellow, and since 2015 he has been an IEEE Distinguished Lecturer. Life and education Koch was born and brought up in Nuremberg, Bavaria, the eldest of three sons of a lawyer, insurance scientist, and manager and of his wife Luise, née Köllner. He passed his Abitur in Aachen. At RWTH Aachen University, he studied physics and mathematics, graduating in physics (diploma in 1987). He received his doctorate (Dr. rer. nat.) in 1990 at the Institute for Theoretical Particle Physics and Cosmology of RWTH Aachen with a dissertation in the field of ergodic theory of dynamical systems. From 2002 to 2010, Koch taught as a lecturer at the chair of Armin B. Cremers on a part-time basis and habilitated in the field of applied computer science at the University of Bonn. At the University of Bonn Institute of Computer Science, he has been teaching as an apl. Professor since 2018, focusing on signal processing, sensor data fusion, artificial intelligence, and resource management. Scientific focus For many years, Koch has headed the research department Sensor Data and Information Fusion (SDF) at the Fraunhofer Institute for Communication, Information Processing and Ergonomics, a research institute of the Fraunhofer Society, the largest organization for applied research and development in Europe. He and his team work predominantly for the German Federal Ministry of Defence on security tasks, including homeland security, and for corresponding industrial companies. The work covers digitization in this environment, such as intelligence, surveillance, and reconnaissance (ISR), electronic and navigation warfare, sensor and platform resource management, mobile and distributed multisensor systems, and aspects of manned-unmanned teaming (MuM-T). He is also the single point of contact of the Fraunhofer Segment for Defense and Security (VVS) to the European Future Combat Air System (FCAS). He also represents scientific interests of the German Navy within the framework of the European Defence Fund. He is one of the initiators and co-chair of the Working Group on Responsible Use of New Technologies in a Future Combat Air System (FCAS). Koch is a member of the Board of Directors of the International Society of Information Fusion (ISIF), supports program committees for ISIF's FUSION conference series, and was Executive Chairman of FUSION 2008 in Cologne. Koch is involved in the IEEE Aerospace and Electronics Systems Society (AESS) within the globally operating Institute of Electrical and Electronics Engineers (IEEE) as an IEEE Fellow, IEEE Distinguished Lecturer, and member of the Board of Governors. In 2014, he founded the Germany Section of the IEEE AESS and has been its chapter chair. In 2005, Koch founded the annual Sensor Data Fusion – Trends, Solutions, Applications (SDF) event series as a meeting of the fusion community of Germany and neighboring countries, which he also chairs and which has been held annually since 2011 as a peer-reviewed IEEE symposium at the Universitätsclub Bonn; it has on several occasions been a component of the related international ISIF-FUSION and IEEE-MFI conferences. Selected publications Tracking and Sensor Data Fusion.
Methodological Framework and Selected Applications (also his habilitation thesis). Springer Nature, Berlin/Heidelberg 2014, ISBN 978-3-642-39270-2 Accumulated State Densities and Their Applications in Object Tracking. Chapter 18 in: H. Fourati (Ed.). Multisensor Data Fusion: From Algorithm and Architecture Design to Applications. CRC Press 2016. The Role of Context in Multiple Sensor Systems for Civil Security. Chapter 20 in: L. Snidaro et al. (Eds.). Context Enhanced Information Fusion. Springer Nature, Berlin/Heidelberg 2016. As Ed. with Richard Klemm, Hugh Griffiths: Novel Radar Techniques and Applications. Part II: Target Tracking and Data Fusion. SciTech Publishing, 2017. Zur Ethik der wehrtechnischen Digitalisierung. Informations- und ingenieurwissenschaftliche Aspekte [Ethical Issues in Defense Digitalization: Computer Science and Engineering Aspects]. Chapter 1 in: M. Rogg et al. (Eds.). Ethische Herausforderungen digitalen Wandels in bewaffneten Konflikten [Ethical Challenges of Digital Transformation in Armed Conflict]. German Institute for Defence and Strategic Studies (GIDS), Hamburg 2020 (e-book). Marianisches bei Goethe [Marian Themes in Goethe]. fe-Medienverlag, Kißlegg 2021. References External links Informatik 4: Wolfgang Koch Wolfgang Koch | Aerospace & Electronic Systems Society Wolfgang Koch Bio | International Society of Information Fusion 1962 births Living people 20th-century German physicists 21st-century German physicists German computer scientists University of Bonn faculty Scientists from Nuremberg Fellow Members of the IEEE RWTH Aachen University alumni
10889095
https://en.wikipedia.org/wiki/OpenMRS
OpenMRS
OpenMRS is a collaborative open-source project to develop software to support the delivery of health care in developing countries. OpenMRS is founded on the principles of openness and sharing of ideas, software and strategies for deployment and use. The system is designed to be usable in very resource-poor environments and can be modified with the addition of new data items, forms and reports without programming. It is intended as a platform that many organizations can adopt and modify, avoiding the need to develop a system from scratch. OpenMRS, Inc. is a registered non-profit that is the owner of all OpenMRS materials and the maintainer of the software's Public License. This entity represents the OpenMRS project in legal and financial matters. The software is licensed under the Mozilla Public License version 2.0 with an added "Healthcare Disclaimer". The license requires that recipients be entitled to freely access the source code, but allows binary distribution, modification of the code (under the same license) and bundling into larger products that are under different licenses. History OpenMRS grew out of the critical need to scale up the treatment of HIV in Africa, but from the start it was conceived as a general-purpose electronic medical record system that could support the full range of medical treatments. The first ideas and prototype of OpenMRS were conceived by Paul Biondich and Burke Mamlin from the Regenstrief Institute, Indiana, on a visit to the Academic Model Providing Access to Healthcare (AMPATH) project in Eldoret, Kenya, in February 2004. Around the same time, the team at Partners In Health led by Hamish Fraser and Darius Jazayeri was looking at ways to scale up the web-based medical record system developed to manage drug-resistant tuberculosis in Peru and HIV in rural Haiti. Paul, Burke and Hamish met in September 2004 at the Medinfo conference in San Francisco, recognized that they had a common approach to medical information systems and a similar philosophy for healthcare and development, and OpenMRS was born. Later, Chris Seebregts of the South African Medical Research Council became the fourth founding member. In 2005, Dr. Andrew S. Kanter from Columbia University joined the team. Dr. Kanter was directing the Millennium Villages Project's (MVP) health information systems and selected OpenMRS for use in the ten MVP countries in sub-Saharan Africa. Design The OpenMRS code is based on a "concept dictionary" that describes all the data items that can be stored in the system, such as clinical findings, laboratory test results or socio-economic data. This approach avoids the need to modify the database structure to add new diseases, for example, and facilitates sharing of data dictionaries between projects and sites. An important feature of OpenMRS is its modular construction, which allows the programming of new functions without modifying the core code. OpenMRS is web-based but can be deployed on a single laptop or on a large server, and runs on Linux, Windows or Mac OS X.
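As a minimal sketch of the concept-dictionary idea (in Python rather than OpenMRS's own Java, and with invented class and field names rather than the real OpenMRS data model): each recorded observation points at an entry in the dictionary, so supporting a new disease or data item means adding dictionary entries, not altering the database schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Concept:
    # A dictionary entry describing one kind of data item
    # (a clinical finding, a lab result, a socio-economic field, ...).
    name: str
    datatype: str  # e.g. "numeric", "coded", "text"
    units: str = ""

@dataclass
class Observation:
    # A single recorded value: it references a Concept rather than
    # a dedicated database column, so no schema change is needed
    # when new concepts are introduced.
    concept: Concept
    value: object
    obs_date: date

# Extending the system to a new data item is just a new dictionary entry:
dictionary = {
    "CD4 COUNT": Concept("CD4 COUNT", "numeric", "cells/mm3"),
    "WEIGHT": Concept("WEIGHT", "numeric", "kg"),
}
dictionary["VIRAL LOAD"] = Concept("VIRAL LOAD", "numeric", "copies/ml")

patient_record = [
    Observation(dictionary["CD4 COUNT"], 350, date(2010, 3, 1)),
    Observation(dictionary["VIRAL LOAD"], 40_000, date(2010, 3, 1)),
]
for obs in patient_record:
    print(obs.concept.name, obs.value, obs.concept.units)
```

OpenMRS itself stores observations in relational tables keyed by concept identifiers, which is what allows non-programmers to extend the system through forms and dictionary management rather than code.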
Other key features of OpenMRS: Built on the MySQL database (but uses Hibernate, allowing it to be ported to other databases) Programmed in Java Includes tools for data export and reporting Versions currently exist for HIV/AIDS, drug-resistant TB, primary care and oncology Supports open standards for medical data exchange, including HL7, FHIR, LOINC and IXF Form-based tools, such as the Form Entry module and XForms module Provides access to between-release code through "Continuous Deployment" Bidirectional synchronization with systems such as MoTeCH and TRACnet The Atlas module, which gives information on all OpenMRS facilities using a visual map Can be integrated with SMS messaging New features (OpenMRS 1.9 and later): Allows older versions to run without upgrading Tools to link to hand-held devices and cell phones (using JavaRosa) Research data collection tools for clinical trials and community data collection projects New CIEL dictionary entries Patient dashboard tab-loading rendered on-demand via AJAX to decrease lag Currently being tested/developed: API support for order entry that provides support of orders within the system HL7 Fast Healthcare Interoperability Resources (FHIR) support for OpenMRS Anatomical drawing tool with pre-loaded image and blank canvas options User interface improvements Ebola treatment unit electronic medical record as a response to the 2014 Ebola epidemic Message delivery triggered by a trend in data entry Deployments The first deployment was in Eldoret, Kenya, in February 2006, followed by the PIH-supported hospital in Rwinkwavu, Rwanda, in August 2006 and Richmond Hospital in the KwaZulu-Natal province of South Africa later that year. As of March 2010, OpenMRS was in use in at least 23 developing countries (mostly in Africa) and had been used to record over 1 million patient records around the world. Most deployments are run by independent groups who carry out the work on the ground, with technical support and training provided by the core team of OpenMRS developers and other implementers. There have been four annual OpenMRS meetings in South Africa, organized by Chris Seebregts, who also leads the OpenMRS implementers community. Shorter meetings were held in Boston in May 2009, and a developer training in Indianapolis in February 2010. There are five known deployments supporting clinical care in the US – three in Indianapolis, one in Los Angeles, and one in Maryland. OpenMRS use was also planned to expand in Haiti to assist patients recovering from the January 2010 earthquake. In Nigeria, the Institute of Human Virology has pushed for OpenMRS adoption in public and private clinics. The institute ran a pilot of OpenMRS in 2011 to manage HIV/AIDS patients' records in 27 health facilities; the outcome of the pilot was overwhelmingly positive. In 2013, the institute decided to scale up OpenMRS and scale down paper-based systems in all of its over 400 health facilities and sub-partners' facilities. There has been tremendous progress in this scale-up. Distributions An OpenMRS Distribution is a particular configuration of the OpenMRS Platform, OpenMRS modules, content (concepts, forms, reports, etc.) and other integrated applications, that can be installed and upgraded as a unit. Distributions can be general-purpose, targeted at a specific clinical or geographic use case, or specific to a single consumer. The OpenMRS Reference Application is maintained by the OpenMRS community. This distribution is an extensible facility EMR.
Deployments

The first deployment was in Eldoret, Kenya, in February 2006, followed by the PIH-supported hospital in Rwinkwavu, Rwanda, in August 2006 and Richmond Hospital in the KwaZulu-Natal province of South Africa later that year. As of March 2010, OpenMRS was in use in at least 23 developing countries (mostly in Africa) and had been used to record over 1 million patient records around the world. Most deployments are run by independent groups who carry out the work on the ground, with technical support and training provided by the core team of OpenMRS developers and other implementers. There have been four annual OpenMRS meetings in South Africa, organized by Chris Seebregts, who also leads the OpenMRS implementers community. Shorter meetings were held in Boston in May 2009, and a developer training in Indianapolis in February 2010. There are five known deployments supporting clinical care in the US: three in Indianapolis, one in Los Angeles, and one in Maryland. OpenMRS use will be expanded in Haiti to assist patients recovering from the January 2010 earthquake.

In Nigeria, the Institute of Human Virology is pushing for OpenMRS penetration in public and private clinics. The institute ran a pilot of OpenMRS in 2011 to manage HIV/AIDS patients' records in 27 health facilities, and the outcome of the pilot was overwhelmingly positive. In 2013, the institute decided to scale up OpenMRS and scale down paper-based systems across its more than 400 health facilities and sub-partners' facilities. There has been substantial progress in this scale-up.

Distributions

An OpenMRS Distribution is a particular configuration of the OpenMRS Platform, OpenMRS modules, content (concepts, forms, reports, etc.) and other integrated applications that can be installed and upgraded as a unit. Distributions can be general purpose, targeted at a specific clinical or geographic use case, or specific to a single consumer.

The OpenMRS Reference Application is maintained by the OpenMRS community. This distribution is an extensible facility EMR. ThoughtWorks maintains a general-purpose OpenMRS distribution called Bahmni, an easily configurable out-of-the-box EMR and hospital information system. OpenMRS distributions targeted at specific countries are available for Mozambique (eSaude), Kenya (KenyaEMR), and Uganda (UgandaEMR). Several other distributions are available or currently being built.

Support

OpenMRS is supported by core teams from Partners In Health, the Regenstrief Institute, and the South African Medical Research Council. Other organizations that collaborate on OpenMRS are the Millennium Villages Project, based at Columbia University, and Baobab Health Systems in Malawi. Several institutions have also provided financial and consulting support, including the United States Centers for Disease Control and Prevention, the Rockefeller Foundation, and the World Health Organization. A variety of organizations, such as Atlassian, Blueberry Software, and YourKit, have also donated licenses to OpenMRS developers.

There are several groups of programmers working on OpenMRS in developing countries, including Kenya, Rwanda, Uganda, South Africa, Pakistan, Chile, and India. In Rwanda, Partners In Health started a local training program called E-Health Software Development and Implementation (EHSDI). The nine-month course was designed to train students in medical information systems and focused heavily on the OpenMRS platform.

Community

The OpenMRS community includes developers, implementers, and users from multiple countries who collaborate through mailing lists, IRC, and annual conferences. Each year, OpenMRS hosts its annual Implementers' Conference, a global meet-up of developers and implementers hosted by different countries where OpenMRS is being used on a national scale.

History of OpenMRS Conferences

2006 Cape Town, South Africa - Implementer's Meeting
2007 Cape Town, South Africa - Implementer's Meeting
2008 Durban, South Africa - Implementer's Meeting in conjunction with HELINA
2009 Cape Town, South Africa - Implementer's Meeting
2010 Cape Town, South Africa - Implementer's Meeting in conjunction with MEDINFO 2010
2011 Kigali, Rwanda - Implementer's Meeting
2012 Silang, Philippines - Implementer's Meeting
2013 Eldoret, Kenya - Implementer's Meeting in conjunction with HELINA
2015 Singapore - Worldwide Summit
2015 Maputo, Mozambique - Implementer's Conference
2016 Kampala, Uganda - Implementer's Conference
2017 Lilongwe, Malawi - Implementer's Conference
2018 Nairobi, Kenya - Implementer's Conference in conjunction with HELINA

OpenMRS has participated annually in Google Summer of Code since 2007; according to that program's manager, it receives more student applications than the Apache Software Foundation. In the summer of 2013, OpenMRS participated as a mentoring organization in the Outreach Program for Women. OpenMRS also held a three-day leadership retreat, OpenMRS Camp 2014, at Bradford Woods; the focus of the camp was to build strategies for growing the OpenMRS community and ensuring its success. OpenMRS held its first OpenMRS Code Jam on November 19, 2014, in Toronto, where it was hosted by ThoughtWorks. OpenMRS was a mentoring organization in Google Code-in 2015.

See also

eHealth
Electronic medical record
Health informatics
List of open-source health software

References

External links

Free health care software
Electronic health record software
Software using the Mozilla license
47866988
https://en.wikipedia.org/wiki/Airbiquity
Airbiquity
Airbiquity Inc. is a business-to-business (B2B) software development and engineering company operating in the automotive telematics industry. Airbiquity's business model is to develop, deploy, and support the ongoing management of connected car programs for automotive industry customers.

Airbiquity's Choreo platform is claimed by the company to integrate the broad array of components needed to build and operate connected car programs, such as mobile network connectivity; vehicle entertainment systems or "head units", cabin displays and instrumentation; telematics control units (TCUs), on-board diagnostic dongles (OBD-II), and smartphone handsets; back-office IT systems; data analytics providers; software and firmware catalogs; emergency and concierge call centers; and content and service providers.

History

The company was founded in 1997 as Integrated Data Communications Inc., located on Bainbridge Island across Puget Sound from Seattle. In 2000, the company changed its name to Airbiquity Inc., and in 2006 it relocated to the downtown Seattle waterfront.

During its early years, Airbiquity's primary product was aqLink, a patented in-band software modem enabling vehicle connectivity and two-way data transfer over cellular voice networks using embedded TCUs or consumer cell phones with Bluetooth connections. The introduction of aqLink was followed by a series of product upgrades and line extensions, including aqServer for the transmission, receipt and processing of information transported to data and call center providers over UMTS, CDMA, TDMA or GSM wireless network voice channels.
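aqLink's encoding is proprietary and not publicly documented, so the following is only a generic illustration of what an in-band software modem does: it converts digital data into audio tones that can survive a voice channel. This minimal binary frequency-shift keying (FSK) sketch in Java uses arbitrary illustrative frequencies, bit rate and sample rate, not aqLink's actual parameters.

    // Minimal binary-FSK sketch: turns bytes into audio samples that could be
    // played into a voice channel. Illustrative only; not aqLink's real scheme.
    public class InBandModemSketch {
        static final int SAMPLE_RATE = 8000;     // typical narrowband voice rate
        static final double FREQ_ZERO = 1200.0;  // tone for a 0 bit (arbitrary)
        static final double FREQ_ONE  = 2200.0;  // tone for a 1 bit (arbitrary)
        static final int SAMPLES_PER_BIT = 40;   // 200 bit/s at 8 kHz (arbitrary)

        // Encode each bit of the payload as a burst of one of two tones.
        static double[] modulate(byte[] payload) {
            double[] out = new double[payload.length * 8 * SAMPLES_PER_BIT];
            int i = 0;
            double phase = 0.0;
            for (byte b : payload) {
                for (int bit = 7; bit >= 0; bit--) {
                    double freq = ((b >> bit) & 1) == 1 ? FREQ_ONE : FREQ_ZERO;
                    double step = 2 * Math.PI * freq / SAMPLE_RATE;
                    for (int s = 0; s < SAMPLES_PER_BIT; s++) {
                        out[i++] = Math.sin(phase); // continuous phase avoids clicks
                        phase += step;
                    }
                }
            }
            return out;
        }

        public static void main(String[] args) {
            double[] samples = modulate("GPS:47.6,-122.3".getBytes());
            System.out.println(samples.length + " samples = "
                    + (samples.length / (double) SAMPLE_RATE) + " s of audio");
        }
    }

Decoding reverses the process by measuring the dominant frequency in each bit-long window of received audio.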
In 2001, Airbiquity secured a contract with General Motors to license aqLink technology to support location-based communications for its OnStar service. Other automakers licensed aqLink technology for their connected car programs. According to Airbiquity, aqLink technology has been licensed for use in over 25 million production vehicles globally. The ability to provide voice and data services marked the start of branded connected car programs from automakers seeking to differentiate their vehicles from competitors. These programs have since expanded to include other features and services such as over-the-air (OTA) software and data management, infotainment delivery, remote vehicle management, electric vehicle management, and commercial fleet management.

OTAmatic

One of the services Airbiquity offers is OTAmatic, which provides over-the-air (OTA) software and data management to automotive manufacturers and suppliers. In December 2018, Airbiquity announced a new version of OTAmatic. According to the company, the update includes new security and data analytics features as well as new OTA management capabilities, such as the ability to carry out multi-ECU software updates and data campaigns.

Prior Offerings

aqLink
aqServer
Fleet Management
Safety & Security
Electric Vehicle
Infotainment Delivery

Choreo

Airbiquity launched Choreo, a cloud-based connected car service delivery platform, in 2008. The first automaker program to deploy on Choreo was Ford Sync in 2008. Airbiquity provides service delivery for Choreo in over 60 countries, including translations into more than 30 local languages. Choreo is composed of six services covering established and newly emerging connected car needs. Each product draws upon combinations of Choreo platform service delivery capability, supplemented with custom software development and engineering to integrate with specified suppliers, mobile network operators, call centers, content providers, and back-end IT systems.

Awards

Airbiquity has received the following telematics industry, technology, and business awards for software development, engineering and integration expertise, and general business success:

Business Intelligence Group - New Product of the Year - 2017
Stratus Award for Cloud Computing - 2017
TU-Automotive - Best Telematics Service Provider - 2015
Deloitte - Deloitte Technology Fast 500 - 2014 & 2013
Connected World - Connected World 100 - 2014 & 2013
Inc. Magazine - Inc. 500/5000 - 2014 & 2013
Edison Universe - Innovative Safety & Security Services - 2014
Intelligent Transportation Society of America - Solution Spotlight: Electric Vehicle - 2013
Red Herring - Top 100 North America - 2013
Telematics Update - Best Telematics Solution - 2012/11
Telematics Update - Best Automotive Application - 2012/11

References

Software companies of the United States
1997 establishments in the United States
Software companies established in 1997
Companies established in 1997