4285830
https://en.wikipedia.org/wiki/Internet%20censorship%20in%20Pakistan
Internet censorship in Pakistan
Internet censorship in Pakistan is government control of information sent and received using the Internet in Pakistan. There have been significant instances of website access restriction in Pakistan, most notably when YouTube was banned from 2012 to 2016. Pakistan has asked a number of social media organisations to set up local offices within the country, but this is yet to happen. Pakistan made global headlines in 2010 for blocking Facebook and other Web sites in response to a contest popularized on the social networking site to draw images of the Prophet Mohammad. In general, Internet filtering in Pakistan remains both inconsistent and intermittent, with filtering primarily targeted at content deemed to be a threat to national security, at pornography, and at religious content considered blasphemous. In 2019, the National Assembly Standing Committee on Information Technology and Telecom was informed by the Pakistan Telecommunication Authority (PTA) that 900,000 URLs were blocked in Pakistan for "reasons such as carrying blasphemous and pornographic content and/or sentiments against the state, judiciary or the armed forces." Overview In mid-2012 Pakistanis had relatively free access to a wide range of content, including most sexual, political, social, and religious sites on the Internet. The OpenNet Initiative listed Internet filtering in Pakistan as substantial in the conflict/security area, and as selective in the political, social, and Internet tools areas in August 2012. Additionally, Freedom House rated Pakistan's "Freedom on the Net Status" as "Not Free" in its Freedom on the Net 2013 report. This remained true as of 2016. Internet filtering in Pakistan is regulated by the Pakistan Telecommunications Authority (PTA) and the Federal Investigation Agency (FIA) under the direction of the government, the Supreme Court of Pakistan, and the Ministry of Information Technology (MoIT). Although the majority of filtering in Pakistan is intermittent—such as the occasional block on a major Web site like Blogspot or YouTube—the PTA continues to block sites containing content it considers to be blasphemous, anti-Islamic, or threatening to internal security. Online civil society activism that began in order to protect free expression in the country continues to expand as citizens utilize new media to disseminate information and organize. Pakistan has blocked access to websites critical of the government or the military. Blocking of websites is often carried out under the rubric of restricting access to "blasphemous" content, pornography, or religious immorality. At the end of 2011, the PTA had officially banned more than 1,000 pornographic websites in Pakistan. Pakistan Internet Exchange The Pakistan Internet Exchange (PIE), operated by the state-owned Pakistan Telecommunication Company Ltd (PTCL), was created to facilitate the exchange of Internet traffic between ISPs within and outside of Pakistan. Because the majority of Pakistan's Internet traffic is routed through the PIE (98% of Pakistani ISPs used the PIE in 2004), it provides a means to monitor and possibly block incoming and outgoing Internet traffic as the government deems fit. Internet surveillance in Pakistan is primarily conducted by the PIE under the auspices of the PTA. The PIE monitors all incoming and outgoing Internet traffic from Pakistan, as well as e-mail and keywords, and stores data for a specified amount of time. Law enforcement agencies such as the FIA can be asked by the government to conduct surveillance and monitor content.
Under the Prevention of Electronic Crimes Ordinance (PECO), ISPs are required to retain traffic data for a minimum of 90 days and may also be required to collect real-time data and record information while keeping their involvement with the government confidential. The ordinance does not specify what kinds of actions constitute grounds for data collection and surveillance. Pakistan Telecommunication Company In April 2003, the PTCL announced that it would be stepping up monitoring of pornographic websites. "Anti-Islamic" and "blasphemous" sites were also monitored. In early March 2004, the Federal Investigation Agency (FIA) ordered Internet service providers (ISPs) to monitor access to all pornographic content. The ISPs, however, lacked the technical know-how, and felt that the PTCL was in a better position to carry out FIA's order. A Malaysian firm was then hired to provide a filtering system, but failed to deliver a working system. National URL filtering and blocking system In March 2012, the Pakistan government took the unusual step of touting for firms that could help build it a nationwide content-filtering service. The Pakistan Telecommunications Authority published a request for proposals for the "deployment and operation of a national level URL Filtering and Blocking System" which would operate on similar lines to China's Golden Shield, or "Great Firewall". Academic and research institutions as well as private commercial entities had until 16 March to submit their proposals, according to the request's detailed 35-point system requirements list. Key among these is the following: "Each box should be able to handle a block list of up to 50 million URLs (concurrent unidirectional filtering capacity) with processing delay of not more than 1 milliseconds". Jyllands-Posten Muhammad cartoons The Jyllands-Posten Muhammad cartoons controversy began after 12 editorial cartoons, most of which depicted the Islamic prophet Muhammad, were published in the Danish newspaper Jyllands-Posten on 30 September 2005. This led to protests across the Muslim world, some of which escalated into violence with instances of firing on crowds of protestors, resulting in more than 100 reported deaths, and included the bombing of the Danish embassy in Pakistan, setting fire to the Danish Embassies in Syria, Lebanon and Iran, storming of European buildings, and the burning of the Danish, Dutch, Norwegian, French, and German flags in Gaza City. The posting of the cartoons online added to the controversy. On 1 March 2006 the Supreme Court of Pakistan directed the government to keep tabs on Internet sites displaying the cartoons and called for an explanation from authorities as to why these sites had not been blocked earlier. On 2 March 2006, pursuant to a petition filed under Article 184(3) of the Constitution of Pakistan, the Supreme Court sitting en banc ordered the Pakistan Telecommunications Authority (PTA) and other government departments to adopt measures for blocking websites showing blasphemous content. The Court also ordered Attorney General Makhdoom Ali Khan to explore laws which would enable blocking of objectionable websites. In announcing the decision, Chief Justice Iftikhar Muhammad Chaudhry, said, "We will not accept any excuse or technical objection on this issue because it relates to the sentiments of the entire Muslim world. All authorities concerned will have to appear in the Court on the next hearing with reports of concrete measures taken to implement our order". 
Consequently, the government kept tabs on a number of websites hosting the cartoons deemed to be sacrilegious. This ban included all the weblogs hosted at the popular blogging service blogger.com, as some bloggers had put up copies of the cartoons – particularly many non-Pakistani blogs. A three-member bench, headed by Chief Justice Chaudhry, summoned the country's Attorney General as well as senior communication ministry officials to give a report of "concrete measures for implementation of the court's order". At the hearing on 14 March 2006, the PTA informed the Supreme Court that all websites displaying the Muhammad cartoons had been blocked. The bench issued directions to the Attorney General of Pakistan, Makhdoom Ali Khan, to assist the court on how it could exercise jurisdiction to prevent the availability of blasphemous material on websites the world over. The blanket ban on the blogspot.com blogs was lifted on 2 May 2006. Shortly thereafter the blanket ban was reimposed and extended to Typepad blogs. The blanket ban on the blogspot.com blogs was later lifted again. Allegations of suppressing vote-rigging videos by the Musharraf administration were also leveled by Pakistani bloggers, newspapers, media, and Pakistani anti-Musharraf opposition parties. The ban was lifted on 26 February 2008. Social media and platform blocking YouTube was blocked in Pakistan following a decision taken by the Pakistan Telecommunication Authority on 22 February 2008 because of the number of "non-Islamic objectionable videos." One report specifically named Fitna, a controversial Dutch film, as the basis for the block. Pakistan, an Islamic republic, ordered its ISPs to block access to YouTube "for containing blasphemous web content/movies." Defaming Muhammad under § 295-C of the blasphemy law in Pakistan carries a death sentence. The block followed increasing unrest in Pakistan over the reprinting of the Jyllands-Posten Muhammad cartoons, which depict satirical criticism of Islam. Router misconfiguration by one Pakistani ISP on 24 February 2008 effectively blocked YouTube access worldwide for several hours. On 26 February 2008, the ban was lifted after the website had removed the objectionable content from its servers at the demand of the Government of Pakistan. On 19 and 20 May 2010, the Pakistan Telecommunication Authority (PTA) imposed a ban on Wikipedia, YouTube, Flickr, and Facebook in response to a competition entitled Everybody Draw Mohammed Day on Facebook, in a bid to contain "blasphemous" material. The ban imposed on Facebook was the result of a ruling by the Lahore High Court, while the ban on the other websites was imposed arbitrarily by the PTA on the grounds of "objectionable content", a different response from earlier requests, such as pages created to promote peaceful demonstrations in Pakistani cities being removed because they were "inciting violence". The ban was lifted on 27 May 2010, after the website removed the objectionable content from its servers at the demand of the government. However, individual videos deemed offensive to Muslims that were posted on YouTube would continue to be blocked. In September 2012, the PTA blocked the video-sharing website YouTube for not removing an anti-Islamic film made in the United States, Innocence of Muslims, which mocks Mohammed. The website would remain suspended, it was stated, until the film was removed.
In a related move, the PTA announced that it had blocked about 20,000 websites due to "objectionable" content. On 25 July 2013, the government announced that it was considering reopening YouTube during the second week of August. A special 12-member committee was working under the Minister of IT and Telecommunication, Anusha Rahman, to see if objectionable content could be removed. The Pakistan Telecommunications Authority, the telecom watchdog in the country, had already expressed its inability to filter out selected content. On 21 April 2014, Pakistan's Senate Standing Committee on Human Rights requested that the Federal Government remove the ban on YouTube. On 8 February 2015, the government announced that YouTube would remain blocked 'indefinitely' because no tool or solution had been found which could totally block offensive content. As of June 2015, 1,000 days on, the ban was still in effect, and YouTube could not be accessed from either desktop or mobile devices. The ban was lifted due to a technical glitch on 6 December 2015, according to ISPs in Pakistan. As of September 2016, the ban had been lifted officially, as YouTube launched a local version for Pakistan. On 25 November 2017, the NetBlocks internet shutdown observatory and Digital Rights Foundation identified mass-scale blocking of social media and content-sharing websites including YouTube, Twitter and Facebook throughout Pakistan imposed by the government in response to the violent Tehreek-e-Labaik protests. The technical investigation found that all major Pakistani fixed-line and mobile service providers were affected by the restrictions, which were lifted by the PTA the next day when protests abated following the resignation of Minister for Law and Justice Zahid Hamid. In 2019, the National Assembly Standing Committee on Information Technology and Telecom was informed by the Pakistan Telecommunication Authority (PTA) that 900,000 URLs were blocked in Pakistan for "reasons such as carrying blasphemous and pornographic content and/or sentiments against the state, judiciary or the armed forces." On 9 October 2020, TikTok was banned by the Pakistan Telecommunication Authority (PTA) for "immoral content". On 16 April 2021, various social media applications were banned: the Ministry of Interior ordered the Pakistan Telecommunication Authority (PTA) to restrict the access of Pakistani users to Twitter, Facebook, WhatsApp, YouTube, and Telegram. The order blocked these social media websites from 11:00 AM to 3:00 PM on Friday with immediate effect. The reason for the temporary ban was not mentioned in the official notice. The PTA later explained the ban with the statement, "In order to maintain public order and safety, access to certain social media applications has been restricted temporarily." The situation in Pakistan was tense due to the Tehreek-e-Labbaik Pakistan anti-France protests, and became more so after Pakistan announced a ban on Tehreek-e-Labbaik Pakistan under anti-terrorism law. Netsweeper usage In June 2013, the Citizen Lab interdisciplinary research laboratory found the Canadian internet-filtering product Netsweeper to be in use at the national level in Pakistan. The system has categorized billions of URLs and is adding 10 million new URLs every day.
The lab also confirmed that ISPs in Pakistan are using methods of DNS tampering to block websites at the behest of the Pakistan Telecommunication Authority. According to the report published by the lab, "Netsweeper technology is being implemented in Pakistan for purposes of political and social filtering, including websites of secessionist movements, sensitive religious topics, and independent media." Torrents ban In July 2013, Pakistani ISPs banned six of the top 10 public torrent sites in Pakistan. These sites included Piratebay, Kickass torrents, Torrentz, Bitsnoop, Extra Torrent and Torrent Reactor. They also banned the similar site Mininova. However, proxies for these torrent sites remained active and P2P connections continued to work normally. This move led to a massive public backlash, especially from the Twitter and Facebook communities of Pakistan. In the aftermath of this criticism, the IT Minister of Pakistan, Anusha Rahman, deactivated her Twitter account. 2020 rules In October 2020 the Government of Pakistan issued new policy rules, the Citizens Protection (Against Online Harm) Rules 2020, also known as the Removal and Blocking of Unlawful Content (Procedure, Oversight and Safeguards) Rules, under the 2016 Prevention of Electronic Crimes Act (PECA). The government of Pakistan intends to access internet user data and to control and remove objectionable content. Companies would be required to remove or block any reported content from their websites within 24 hours of being notified by Pakistani authorities; social media companies and internet service providers may be fined up to $3.14 million (€2.57 million) for failure to curb the sharing of content deemed defamatory of Islam, promoting terrorism, hate speech, pornography or any content viewed as problematic for Pakistan's national security. Rights activists complain that the new rules compromise user privacy, leaving it at the mercy of the Pakistani establishment without judicial oversight, and are likely to further erode media freedom and freedom of expression, thereby eroding political freedoms and resulting in increased censorship. Since then, dating apps like Tinder have been banned in Pakistan, and the video-sharing app TikTok faced a temporary ban until it removed objectionable content; notices were also issued to the US-based Ahmadiyya community web portal TrueIslam.com, Google and Wikipedia for returning search results presenting the Ahmadiyya community and its leader, Mirza Masroor Ahmad, as Muslim. Blocked by Pakistan Telecommunication Authority Video games ban PUBG ban In July 2020, the PTA banned the online game PlayerUnknown's Battlegrounds (PUBG). Many social media activists like Waqar Zaka uploaded videos on YouTube urging Pakistanis to speak up against this ban. Millions of Pakistani social media users flooded sites like Facebook and Twitter with overwhelming support for PUBG. In response, the PTA lifted the ban on the popular online game. Other notable bans Richard Dawkins's website and the Internet Movie Database (IMDb) were blocked for brief periods in 2013. Xbox Live and GameRanger were blocked accidentally on 7 February 2013 by the Pakistan Telecom Authority. Pouet, a website about the demoscene, was banned as of 19 June 2015. Imgur, an image sharing and hosting website, was banned in December 2015. Reddit (NSFW content only) was also banned in 2019. No reasons have been given for these bans. An extreme form of keyword censorship is in effect for all website URLs.
URLs containing words like "sex" and "porn" are blocked; this includes pages about sexual health and couples therapy on medical information sites like WebMD and MedicineNet. This is similar to the word censorship in effect for SMS text messages. See also Censorship in Pakistan Censorship in South Asia Constitution of Pakistan Freedom of speech in Pakistan Freedom of the press in Pakistan Information technology in Pakistan Internet in Pakistan Pornography in Pakistan References External links "Ban on the web in the national interest" (Urdu), Reba Shahid, BBC Urdu.com, 29 July 2006, (English translation) Karachi Union of Journalists, website Mass media in Pakistan Pakistani law Pakistan Censorship in Pakistan Censorship History of mass media in Pakistan
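A conceptual sketch of the keyword-based URL blocking described in the section above, written in Python. This is an illustration of the general technique only; the blocklist terms, matching logic and example URLs are hypothetical and do not represent the PTA's actual system.

# Conceptual sketch of keyword-based URL blocking (not the PTA's actual system).
# A request is refused if its URL contains any term from a blocklist.
BLOCKED_TERMS = {"sex", "porn"}   # hypothetical blocklist

def is_blocked(url: str) -> bool:
    lowered = url.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

# Illustrative URLs: pages on medical sites can be caught as collateral damage,
# as described above for WebMD and MedicineNet.
print(is_blocked("https://www.webmd.com/sex-relationships/default.htm"))  # True
print(is_blocked("https://www.webmd.com/heart/default.htm"))              # False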
27161403
https://en.wikipedia.org/wiki/Route%20%28command%29
Route (command)
In computing, route is a command used to view and manipulate the IP routing table in Unix-like and Microsoft Windows operating systems and also in IBM OS/2 and ReactOS. Manual manipulation of the routing table is characteristic of static routing. Implementations Unix and Unix-like In Linux distributions based on 2.2.x Linux kernels, the ifconfig and route commands are operated together to connect a computer to a network, and to define routes between computer networks. Distributions based on later kernels have deprecated ifconfig and route, replacing them with iproute2. Route for Linux was originally written by Fred N. van Kempen. Syntax The command-syntax is: route [-nNvee] [-FC] [<AF>] List kernel routing tables route [-v] [-FC] {add|del|flush} ... Modify routing table for AF. route {-h|--help} [<AF>] Detailed usage syntax for specified AF. route {-V|--version} Display version/author and exit. Example user@linux:~$ route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 192.168.101.0 192.168.102.102 255.255.255.0 UG 0 0 0 eth0 192.168.102.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 192.168.103.0 192.168.102.102 255.255.255.0 UG 0 0 0 eth0 192.168.12.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0 0.0.0.0 192.168.12.1 0.0.0.0 UG 0 0 0 eth0 Microsoft Windows The command is only available if the TCP/IP protocol is installed as a component in the properties of a network adapter. Syntax The command-syntax is: route [-f] [-p] [-4|-6] [Command [Destination] [mask Netmask] [Gateway] metric Metric if Interface Parameters -f: Clears the routing table -p: The route is added to the Windows Registry and is used to initialize the IP routing table whenever the TCP/IP protocol is started (only when used with the add command) Command: The command to run (add, change, delete, print) -4: Force using IPv4 -6: Force using IPv6 Destination: Network destination of the route mask Netmask: The netmask (subnet mask) associated with the network destination Gateway: The forwarding or next hop IP address over which the set of addresses defined by the network destination and subnet mask are reachable metric Metric: Integer cost metric (ranging from 1 to 9999) for the route if Interface: The index of the interface over which the destination is reachable /?: Command help The -p parameter is only supported on Windows NT 4.0, Windows 2000, Windows Millennium Edition, and Windows XP. It is not supported on Windows 95 or Windows 98. IBM OS/2 Syntax The command-syntax is: route [-nqv] [COMMAND] [[MODIFIERS] args] Parameters -n: Bypasses translating IP addresses to symbolic host names -q: Suppresses all output -v: Verbose COMMAND: The command to run (add, delete, change, get, monitor, flush) -net: <dest> is a network address -host: <dest> is host name or address (default) -netmask: the mask of the route <dest>: IP address or host name of the destination <gateway>: IP address or host name of the next-hop router ReactOS Syntax Print the route table: route print Add a route: route add <target> [mask <mask>] <gw> [metric <m>] Delete a route: route delete <target> <gw> See also Routing table Internet Protocol Suite iproute2 BusyBox Further reading References External links route - on technet.microsoft.com Internet Protocol based network software OS/2 commands Routing Unix network-related software Windows communication and services Windows administration
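On Linux, the kernel routing table that route -n prints can also be read directly from /proc/net/route, where IPv4 destinations, gateways and masks appear as little-endian hexadecimal values. The following Python sketch is an illustration, not part of the route utility itself, and assumes a Linux system; it prints a table similar to the example above.

# Sketch: print a routing table similar to `route -n` by reading /proc/net/route.
# Assumes Linux; /proc/net/route stores IPv4 addresses as little-endian hex.
import socket
import struct

def hex_to_ip(hex_addr):
    # /proc/net/route encodes each address as 8 hex digits in host byte order.
    return socket.inet_ntoa(struct.pack("<L", int(hex_addr, 16)))

with open("/proc/net/route") as f:
    lines = f.read().splitlines()

print(f"{'Destination':<16}{'Gateway':<16}{'Genmask':<16}{'Metric':<8}{'Iface'}")
for line in lines[1:]:                       # skip the header line
    fields = line.split()
    iface, dest, gw = fields[0], fields[1], fields[2]
    metric, mask = fields[6], fields[7]
    print(f"{hex_to_ip(dest):<16}{hex_to_ip(gw):<16}{hex_to_ip(mask):<16}{metric:<8}{iface}")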
19627280
https://en.wikipedia.org/wiki/Tom%20Gruber
Tom Gruber
Thomas Robert Gruber (born 1959) is an American computer scientist, inventor, and entrepreneur with a focus on systems for knowledge sharing and collective intelligence. He did foundational work in ontology engineering and is well known for his definition of ontologies in the context of artificial intelligence. In 2007 Gruber co-founded Siri Inc., which created the Siri intelligent personal assistant and knowledge navigator. Siri Inc. was acquired by Apple in 2010, and Siri is now an integral part of iOS. Biography Gruber studied psychology and computer science at the Loyola University New Orleans, where he received a double major B.S. in 1981 and graduated summa cum laude. He designed and implemented a computer-assisted instruction (CAI) system for programmed-curriculum courses. It was the first of its kind at the university, and is used routinely by the Psychology department for introductory courses. In 1984 he received a M.S. in Computer and Information Science at the University of Massachusetts Amherst. For his Master's research, Gruber designed and implemented an intelligent communication prosthesis assistant, a computer system which enables people with severe physical disabilities who cannot otherwise speak to communicate in natural language presented in displayed, written, or spoken form. Four years later in 1988 at the University of Massachusetts Amherst he received a Ph.D. in Computer and Information Science with the dissertation "The Acquisition of Strategic Knowledge". His dissertation research addressed a critical problem for Artificial Intelligence—knowledge acquisition—with a computer assistant that acquires strategic knowledge from experts. From 1988 to 1994 Gruber was a Research Associate at the Knowledge Systems Laboratory of the Computer Science Department at Stanford University. He worked on the How Things Work, SHADE, and Knowledge Sharing Technology projects. In 1994 he became Senior Project Leader, Enterprise Integration Technologies and proposed and designed several projects using the Internet to create shared, virtual environments for collaborative learning and work (for ARPA, NASA, and NIST). During this time he also proposed a business plan for corporate training. In 1995, he founded and became Chief Technology Officer of Intraspect Software, an enterprise software company that did early commercial work on collaborative knowledge management. Intraspect applications help professional people collaborate in large distributed communities, continuously contributing to a collective body of knowledge. Gruber has been a member of journal editorial boards of the "Knowledge Acquisition", "IEEE Expert" and "International Journal of Human-Computer Studies". Work Gruber's research interests in the 1990s were in the field of developing intelligent networked software to support human collaboration and learning. Areas of specialty include: knowledge acquisition, knowledge representation, computer-supported collaborative work, computer-mediated communication for design, and knowledge sharing technology. In 1994 he was responsible for the creation of hypermail, an email to web gateway software that saw extensive use after a rewrite by a different programmer. In 2007 Gruber co-founded Siri Inc., which created the Siri intelligent personal assistant and knowledge navigator. Siri Inc. was acquired by Apple in 2010, and Siri is now an integral part of iOS. In 2016, Siri was added to macOS in macOS Sierra. 
In April 2017 Gruber spoke on the TED stage about his vision for the future of "humanistic AI" especially in regards to augmentation of human capacities such as memory. He is quoted saying, "We are in the middle of a renaissance in AI. Every time a machine gets smarter, we get smarter." Publications Gruber published several articles and some books, most notably: 1993. "Toward Principles for the Design of Ontologies Used for Knowledge Sharing". In: International Journal Human-Computer Studies. Vol 43, p. 907-928. 1993. "A Translation Approach to Portable Ontology Specifications". In: Knowledge Acquisition, 5(2):199-220, 1993 2008, Ontology. Entry in the Encyclopedia of Database Systems, Ling Liu and M. Tamer Özsu (Eds.), Springer-Verlag, to appear in 2008. See also IDEF5 Social Semantic Web Soft ontology References External links Tom Gruber Recent homepage. Tom Gruber Homepage in 1994 at Stanford 1959 births American computer scientists Living people American software engineers Technology company founders American company founders Semantic Web people
30874970
https://en.wikipedia.org/wiki/The%20Centurions%20%28TV%20series%29
The Centurions (TV series)
The Centurions is an American science fiction animated television series produced by Ruby-Spears and animated in Japan by Nippon Sunrise's Studio 7. Comic book artists Jack Kirby and Gil Kane contributed to the design and concepts of the show, while Norio Shioyama did the character designs. The series began in 1986 as a five-part mini-series and was followed by a 60-episode series. The series was story edited by Ted Pedersen and written by several authors, including prolific science fiction writers Michael Reaves, Marc Scott Zicree, Larry DiTillio and Gerry Conway. The series theme and soundtrack were composed by Udi Harpaz. There was also a line of tie-in toys by Kenner and a comic book series by DC Comics. As of 2021, Ramen Toys is doing a pre-order revival of Max, Ace and Jake. The show revolves around the conflict between Doc Terror's cyborgs and the Centurions (a combination of hard-suit and mecha). Premise In the near future of the 21st century, the cyborg mad scientist Doc Terror seeks to conquer the Earth and turn its inhabitants into robot slaves. He is assisted by his cyborg companion Hacker and an army of robots. There were many types of cyborgs: Doom Drones Traumatizers – The most commonly seen drones are walking robots with laser blasters for arms. The toy for the Traumatizer was a Sears store exclusive. The Traumatizer Leader was colored red. Doom Drones Strafers – A flying robot armed with missiles and lasers. Doc Terror and Hacker are able to fly by swapping their purely robot half for a Strafer. Groundborgs – A land-based robot armed with lasers that moves on treads. No toy was made of Groundborgs. Cybervore Panther – A robot panther. Introduced later in the series. Could combine with the Cybervore Shark. A toy for the Cybervore Panther was designed, but never released. Cybervore Shark – A robot shark. Introduced later in the series. Could combine with the Cybervore Panther. A toy for the Cybervore Shark was designed, but never released. Later, a wheeled drone with a large screen and cannons as well as a submarine drone were added. They are joined on many occasions, starting with the first episode, by Doc Terror's daughter Amber. At each turn, their evil plans are thwarted by the heroic Centurions. The Centurions are a team of men dressed in specially created exo-frames that allow them (upon shouting "PowerXtreme") to fuse with 'incredible' assault weapon systems, becoming what the show calls man and machine, Power Xtreme! The end result is a weapons platform somewhere between a hard-suit and a mecha. Originally there are three Centurions, but two more are added later: Original Team: Max Ray – 'Brilliant' Sea Operations Commander: The calm and collected de facto leader of the team, donning a green exo-frame suit and sporting a fine moustache. His toy file card said that for exercise he regularly swam from California to Hawaii and back. His weapon systems are best suited for underwater missions; some of these are as follows: Cruiser – A sea assault weapon system used both in and out of the water that includes hydro thrusters, a keel-fin radar unit and a missile launcher. Max wears this with a green helmet that matches his exo-frame. Tidal Blast – A powerful surface-sub-surface attack weapon system with two hydro-powered keel-fins, used above and below the water, that has battle modes such as cruise, subsonic speed and rear attack. Its weapons include a re-pulsar lesion cannon and two rotating and firing shark missiles.
Like Cruiser, Max wears this with a green helmet. Depth Charger – A deep-sea weapon system used for deep underwater missions. It is a mini sub with two pivoting pontoon thrusters and two mobile directional aqua fins that has modes of attack such as diving, full fire and deep sea. Its weapons include two rotating aqua cannons, deep-sea torpedoes and a hydromine. Sea Bat – Released in the second phase of the toys' release. Fathom Fan - Released in the second series of the toys' release. Jake Rockwell – 'Rugged' Land Operations Specialist: Donning a yellow exo-frame suit. A passionate idealist with a strong moral compass, he has a short fuse that often puts him at odds with the cocky and casual personality of Ace. His weapon systems have the most firepower and are best suited for land missions, some of these are as follows: Fireforce – A powerful land assault weapon system that includes twin laser cannons and a rotating plasma re-pulsar. Jake wears this with a yellow helmet that matches his exo-frame. Wild Weasel – A protective armour assault weapon system in the shape of a motorcycle with a head shield and protective back shell for dangerous missions such as heavy forests or rocky terrains. It has battle modes including tracking, anti-aircraft, high speed travel and land attack. Its weapons include two land lasers and a front assault pak module for storing accessories. Detonator – A heavy artillery weapon system for maximum firepower. It has many battle modes including air attack and ground assault. Its weapons include sonic ray guns and freeze ray blasters. Like Fireforce, Jake wears this with a yellow helmet. Hornet – An assault helicopter weapon system used to assist Aerial missions that has battle modes including surveillance, high speed assault and sneak attack. Its weapons include four sidewinder missiles and a rotating freeze cannon. Swingshot – Released in the second phase of the toys' release. Ace McCloud – 'Daring' Air Operations Expert: Donning a blue exo-frame suit, he is a brave but cocky womanizer who is sometimes at odds with Jake. His weapon systems are best suited for aerial missions, some of these are as follows: Skyknight – A powerful air assault weapon system that has two turbo thrusters. Its weapons include stinsel missiles, laser cannons and laser bombs. Ace wears this with a blue helmet that matches his exo-frame. Orbital Interceptor – An advanced air weapon assault system with inner atmospheric thrusters that can even be used in space. It has battle modes including cruise, pursuit and power blast. Its weapons include two particle beam deflectors and a particle beam missile. Ace wears this with a life support helmet. Skybolt – An air reinforcement weapon system that has two booster stabiliser pods, radar tracking wings and modular invertible wings with battle modes including reconnaissance, backfire and anti-attack. Its weapons include galactic missiles and two backfire missile launchers for front and rear attacks. Like Skyknight, Ace wears this with a blue helmet. Strato Strike – The toy for Strato Strike was designed, but never released. Extended Team (later additions): Rex Charger – 'Expert' Energy Programmer. Dons a red and pale green exo-frame suit. Electro Charger - Gatling Guard - John Thunder – 'Specialist' Infiltration Commander. Has a black exo-frame with exposed skin. 
Silent Arrow - Thunder Knife - The Centurions are based on an orbiting space station called Sky Vault where its operator, Crystal Kane, uses a transporter to send the Centurions, and the requested weapon systems, to where they are needed. Crystal is always in the company of either Jake Rockwell's dog Shadow or Lucy the orangutan, or in most cases both. Shadow is usually more involved with the Centurions' battles than Lucy and sports a harness with dual missile launchers. Crystal suggests tactics and sends equipment as required. The Centurions also have a hidden base in New York City called "Centrum". Its entrance is hidden in a book store and must be reached via an underground railcar. "Centrum" serves as the Centurions' land base of operations and also has a beaming pod for rapid transport to "Sky Vault". In addition to "Sky Vault" and "Centrum" there is also a "Centurion Academy" whose location is kept completely secret and only seen in the last 5 episodes. Much like the Super Friends' additions of Black Vulcan, Apache Chief, Samurai, and El Dorado to introduce racial diversity into the series, The Centurions saw the addition of Rex Charger, the energy expert, and John Thunder, the Apache infiltration expert. Themes As well as the adventure side of the show, the series considered various science fiction themes. In particular, the fusion or relationship between humans and technology is a focus throughout the show's run. To tie in with real world issues at the time, a number of episodes revolved around ecological themes, such as Doc Terror's attempts to extort money from the governments of the world by threatening to destroy some aspect of the environment. The subject of magic is explored in "That Old Black Magic", when Ace becomes involved with a woman named Cassandra Cross, who is a practitioner of white magic. Her evil twin sister, Lilith, joins forces with Doc Terror in "Return of Cassandra". Both characters are voiced by B. J. Ward. Ace also has other romantic interests; some of them reappear in later episodes, such as television reporter Jenny Rivers and Sealab technician Mei Lee. Ace's most elusive romantic pursuit is Crystal Kane, who later reveals she had lost a fiancé in action ("You Only Love Twice"). Ace has a romantic dream involving Crystal at the beginning of the episode "Firecracker". Crystal often resists Ace's advances either firmly or jokingly, but sometimes demonstrates she has affection and love for him. The two finally kiss at the conclusion of the two-part story "The Better Half". Doc Terror is sometimes assisted by his daughter Amber. Like the Batman character Talia al Ghul, she sometimes shows affection for an enemy of her father (Jake Rockwell) and even betrays him when he goes too far ("Let the Lightning Fall"). The series also sometimes leaves an episode's ending up to viewer interpretation. In the two-part story "The Better Half", Doc Terror and Hacker's robotic halves, Syntax and Lesion, join together after a freak accident and become "Uniborg"; they turn on the two, forcing Terror and Hacker to ally with the Centurions. Upon breaking into Dominion and defeating their robotic counterparts, Terror is poised to destroy them, when they argue that he cannot kill them because they are his creations, and an extension of himself and Hacker. Terror screams in anguish as he fires his disruptor cannon, and the episode ends with a scene on Skyvault, not revealing whether or not Terror intentionally misses or destroys Uniborg. 
In "To Dare Dominion", Terror unleashes a time/space vortex consumer within Dominion that plays havoc with reality. The Centurions are able to destroy it, but not before it wrecks Dominion's power generators, causing a violent explosion that seemingly kills Doc Terror and Hacker. The Centurions are beamed out before they suffer the same fate. Terror and Hacker's fate remains unresolved at the episode's conclusion, with Jake believing that they are finished, and Max arguing that they may not be. The series episodes were produced and broadcast in an anachronistic order. For example, the episodes which showcase the recruitment drive for the two extra Centurions ("Man or Machine" 5-parter) were produced long after several episodes showcasing the new additions. Cast Main voices Michael Bell – John Thunder Jennifer Darling – Amber Vince Edwards – Jake Rockwell Ron Feinberg – Doc Terror Pat Fraley – Max Ray, Dr. Wu Ed Gilbert – Hacker Diane Pershing – Crystal Kane Bob Ridgely – Rex Charger Neil Ross – Ace McCloud B. J. Ward – Cassandra Cross/Lilith Cross William Woodson – Opening Narration Additional voices Richard "Dick" Gautier – Dan Gilvezan – Tress MacNeille – Mona Marshall – Bill E. Martin – Mea Martineau – David Mendenhall – Alan Oppenheimer – Patrick Pinney – John Stephenson – Frank Welker – Keone Young – Gabriel Knight – Gibson - Neon - Episodes Five-part mini-series "The Sky Is on Fire" (1986.04.07): written by Ted Pedersen "Battle Beneath the Sea" (1986.04.08): written by Michael Reaves and Steve Perry "An Alien Affair" (1986.04.09): written by Ted Pedersen "Found: One Lost World" (1986.04.10): written by Don Glut "Sand Doom" (1986.04.11): written by Ted Pedersen Season One "Whalesong" (1986.09.22): written by Gerry Conway and Carla Conway "Tornado of Terror" (1986.09.23): written by Gerry Conway and Carla Conway "Denver Is Down" (1986.09.24): written by Matt Uitz "Micro Menace" (1986.09.25): written by Jack Bornoff "Attack of the Plant-Borg" (1986.09.26): written by Don Goodman "Battle Beneath the Ice" (1986.09.29): written by Michael Reaves and Steve Perry "Operation Starfall" (1986.09.30): written by Michael Reaves "Let the Games Begin" (1986.10.01): written by Marc Scott Zicree "Firebird" (1986.10.02): written by Matt Uitz "Cold Calculations" (1986.10.03): written by Kayte Kuch "Return of Captain Steele" (1986.10.06): written by Michael Reaves "Three Strikes and You're Dead" (1986.10.07): written by Larry DiTillio "Double Agent" (1986.10.08): written by Creighton Barnes "Child's Play" (1986.10.09): written by Kayte Kuch "Terror on Ice" (1986.10.10): written by Herb Engelhardt "That Old Black Magic" (1986.10.13): written by Gerry Conway and Carla Conway "Max Ray...Traitor" (1986.10.14): written by Marc Scott Zicree "Crack the World" (1986.10.15): written by Gerry Conway and Carla Conway "The Incredible Shrinking Centurions" (1986.10.16): written by Michael Cassutt and Mark Cassutt "Live at Five" (1986.10.17): written by Michael Charles Hill "The Mummy's Curse" (1986.10.20): written by Gerry Conway and Carla Conway "Counterclock Crisis" (1986.10.21): written by Gerry Conway and Carla Conway "Zombie Master" (1986.10.22): written by Creighton Barnes "Malfunction" (1986.10.23): written by Creighton Barnes and Matt Uitz "Broken Beams" (1986.10.24): written by Herb Engelhardt "The Chameleon's Sting" (1986.10.27): written by Kayte Kuch "Film at Eleven" (1986.10.28): written by Michael Charles Hill "Hacker Must Be Destroyed" (1986.10.29): written by Don Glut "Showdown at Skystalk" 
(1986.10.30): written by Michael Reaves "The Warrior" (1986.10.31): written by Mark Edens "Return of Cassandra" (1986.11.03): written by Gerry Conway and Carla Conway "Night on Terror Mountain" (1986.11.04): written by Mel Gilden "Merlin" (1986.11.05): written by Kip Gordy "The Monsters from Below" (1986.11.06): written by Gary Greenfield "The Road Devils" (1986.11.07): written by Gary Greenfield "Zone Dancer" (1986.11.10): written by Michael Reaves "Firecracker" (1986.11.11): written by Larry DiTillio "Traitors Three" (1986.11.12): written by Barbara Hambly "You Only Love Twice" (1986.11.13): written by Martha Humphreys "Sungrazer" (1986.11.14): written by Michael Reaves "Novice" (1986.11.17): written by Herb Engelhardt "Breakout" (1986.11.18): written by Antoni Zalewski "Atlantis Adventure, Part I" (1986.11.19): written by Steve Perry and Ted Pedersen "Atlantis Adventure, Part II" (1986.11.20): written by Steve Perry and Ted Pedersen "Ghost Warrior" (1986.11.21): written by Gerry Conway and Carla Conway "Let the Lightning Fall" (1986.11.24): written by Gerry Conway and Carla Conway "Cyborg Centurion" (1986.11.25): written by Michael Charles Hill "Day of the Animals" (1986.11.26): written by Steve Perry "To Dare Dominion, Part I" (1986.11.27): written by Michael Reaves and Steve Perry "To Dare Dominion, Part II" (1986.11.28): written by Michael Reaves and Steve Perry "Hole in the Ocean, Part I" (1986.12.01): written by Herb Engelhardt "Hole in the Ocean, Part II" (1986.12.02): written by Herb Engelhardt "The Better Half, Part I" (1986.12.03): written by Larry Huber "The Better Half, Part II" (1986.12.04): written by Larry Huber "Revenge" (1986.12.05): written by Kayte Kuch "Man or Machine, Part 1" (1986.12.08): written by Ted Pedersen and Steve Perry "Man or Machine, Part 2" (1986.12.09): written by Ted Pedersen and Steve Perry "Man or Machine, Part 3" (1986.12.10): written by Ted Pedersen and Steve Perry "Man or Machine, Part 4" (1986.12.11): written by Ted Pedersen and Steve Perry "Man or Machine, Part 5" (1986.12.12): written by Ted Pedersen and Steve Perry Home Media releases On December 6, 2011, Warner Home Video released The Centurions: The Original Miniseries on DVD in Region 1 via their Warner Archive Collection. This is a Manufacture-on-Demand (MOD) release, available exclusively through Warner's online store and only in the US. On July 21, 2015, Warner Archive released The Centurions: Part One on DVD in Region 1 which contains the first thirty episodes of the regular series. On March 15, 2016, Warner Archive released The Centurions: Part Two on DVD, which contains the remaining thirty episodes of the series. References External links Virtual Toychest: Centurions – has pictures of most of the toys "Nostalgia tops favourite toy list" article at bbc.co.uk 1980s American animated television series 1980s toys 1986 American television series debuts 1987 American television series endings Action figures American children's animated action television series American children's animated adventure television series American children's animated science fantasy television series Comics based on toys DC Comics titles First-run syndicated television programs in the United States Ruby-Spears superheroes DC Comics superheroes Television shows adapted into comics Television series by Ruby-Spears Television series set in the 21st century Cyberpunk television series
57312386
https://en.wikipedia.org/wiki/National%20Portal%20and%20Digital%20Repository%20for%20Museums%20of%20India
National Portal and Digital Repository for Museums of India
C-DAC’s entry into virtual museums coincided with many similar international developments, such as the first public version of DSpace, which was released in 2002; the initial draft of the Open Archival Information System (OAIS) was also released in 2003. Museums need to be transformed for greater relevance and application in modern society. Therefore, focusing on the needs of Indian museums, Dr. Dinesh Katre, Senior Director at C-DAC, initiated the development of e-curator software named JATAN (जतन): Virtual Museum Builder in 2001, which was developed and released in 2004. Subsequently, the JATAN (जतन) software was deployed in the Chhatrapati Shivaji Maharaj Museum, Mumbai; the Raja Dinkar Kelkar Museum, Pune; and the Baroda Museum & Picture Gallery in Vadodara. Although the response from museums was lukewarm, C-DAC continued developing the JATAN (जतन) software into a comprehensive digital collection management system for museums. As part of this research, early visions of using crowdsourcing for metadata enrichment of museum artefacts and of a unified virtual catalogue for Indian museums were presented in 2005. During 2013, the Ministry of Culture started the Vivekananda Memorial Museum Excellence Program in collaboration with The Chicago Institute of Art, USA. As part of this program, various existing software solutions available in India were evaluated, and JATAN: Virtual Museum Builder was finally selected for standardized implementation across national museums. Certification Program for Museum Curators The Human-Centred Design & Computing Group at C-DAC, Pune organized a JATAN certification training program to motivate and prepare museum curators for taking on the challenging task of digitization. Several batches of this two-day training program for museum curators were conducted. The training covered digitization best practices, hands-on experience with the JATAN software, and exposure to advancements in digital heritage preservation, 3D virtual walk-throughs, mobile apps, and touch screen kiosk applications for museum visitors. Around 50 curators from 10 national museums were trained as part of this initiative. Standardized Implementation of JATAN (जतन) in National Museums In the first phase of this project, the Ministry of Culture decided to implement the JATAN (जतन) software in the following 10 national museums - National Museum, New Delhi; Allahabad Museum, Allahabad, Uttar Pradesh; Indian Museum, Kolkata, West Bengal; National Gallery of Modern Arts (NGMA), New Delhi; National Gallery of Modern Arts (NGMA), Mumbai, Maharashtra; National Gallery of Modern Arts (NGMA), Bengaluru, Karnataka; Archaeological Survey of India (ASI) Museum, Goa; Archaeological Survey of India (ASI) Museum, Nagarjunakonda, Andhra Pradesh; Salar Jung Museum, Hyderabad, Andhra Pradesh; Victoria Memorial Hall, Kolkata, West Bengal. As per the technical specifications provided by the Human-Centred Design & Computing Group at C-DAC, Pune, the museums procured and established computer labs equipped with computers, a server, a digital camera, a scanner, Network Attached Storage (NAS), a Local Area Network (LAN), broadband connectivity, etc. The museums also arranged local technical support for maintenance of this facility. A major achievement of this effort was that museums became self-reliant and started digitization on their own. After the successful implementation of the JATAN software in 10 national museums, C-DAC was entrusted with a new project to develop the National Portal and Digital Repository for Museums of India.
National Portal and Digital Repository for Museums of India The Ministry of Culture funded and entrusted this project to C-DAC, Pune with the objective of providing online, integrated access to the digitized collections of 10 national museums by establishing the National Portal and Digital Repository for Museums of India. The Human-Centred Design and Computing Group at C-DAC, Pune designed and developed the portal as well as the software tools required for cataloging and digital repository management. The main features of the national portal are as follows: The portal introduces India's rich heritage through a diverse range of digital collections, including sculptures, paintings, manuscripts, arms and weapons, terracotta, pottery, coins and numerous other categories of antiquities. Search and retrieval across all museums and categories; Automatic keyword suggestions; Parameter-based filtering of search results; High-resolution images with zoom-in/out and metadata; Direct access to digital collections through categorization; Virtual exhibitions with scholarly essays published by museum curators; Automatic timeline mapping of objects; Dedicated homepages for museums, managed by nodal officers; Museum-wise access to digital collections; Museum-wise event announcements; Portal administration; Scalable framework of virtual museums. The portal was inaugurated by the Minister for Culture, Government of India, Shripad Naik, on 21 October 2014. Presently, the national portal provides online access to over 115,000 historical antiquities with approximately 500,000 digital images with metadata produced by 10 national museums. Data packaging and transfer Digitization and metadata integration is undertaken locally in each museum. Transfer and integration of museum data into a centralized database was a major challenge, as museums are yet to be connected over the National Knowledge Network (NKN). Therefore, the JATAN software has been extended with functionality to package and transfer the digitized contents to C-DAC, Pune. Process for acquisition and ingesting of museum data e-SANGRAHAN (ई-संग्रहण): E-acquisition tool It is a digital data acquisition tool which receives the packaged data sent by the museums and makes it available for ingesting into the Sanskriti Digitalaya system. DIGITĀLAYA (डिजिटालय): e-Library and Archival System It is an e-library and archival system primarily focused on long-term archival and repository development of digitized contents in image, audio and video formats. The system is an adaptation of the CCSDS Open Archival Information System (OAIS) Reference Model, ISO 14721:2012, to the cultural heritage domain. It supports domain-specific metadata standards such as MARC21, Dublin Core, MODS and METS. The system allows archival strategies and the dissemination package for the public access portal to be configured. The Human-Centred Design & Computing Group at C-DAC, Pune has designed technical quality assurance procedures for data acquisition, archival and dissemination. The correctness of information is ensured by domain experts in the museums. Unique advantages The National Portal and Digital Repository for Museums of India has dissolved the physical walls and removed the geographical distances between 10 national museums by merging them into a single virtual museum which can be accessed online through a single window. Students and teachers can access the museums directly from home or classrooms.
Researchers, scholars, historians and tourists can access the antiquities from anywhere. The project has created a comprehensive national database/register of museum antiquities, which is necessary for the management, protection and conservation of historical antiquities. Recognition The National Portal and Digital Repository for Museums of India won the Grand Jury Special Mention Manthan Award in 2015 in the e-Culture and Heritage category. The Minister for Culture, Dr. Mahesh Sharma, apprised the Rajya Sabha of the progress of the JATAN project in an official press release dated 4 April 2018. Another press release, on capacity building to improve standards in museums, also mentions the progress of Museums of India in collaboration with C-DAC, Pune. See also Virtual Museum National Digital Preservation Program Heritage Science Digitization Digital Preservation References Virtual museums Indian digital libraries Ministry of Culture (India) Museums in India National museums of India Cultural promotion organizations Indian educational websites Indian online encyclopedias
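As a rough illustration of the Dublin Core metadata support mentioned for DIGITĀLAYA above, the following Python sketch builds a minimal Dublin Core XML record for a digitized artefact. The dc namespace is the standard Dublin Core element set; the sample field values and the record wrapper are hypothetical and do not reflect C-DAC's actual schema.

# Sketch: build a minimal Dublin Core metadata record for a digitized museum artefact.
# The dc namespace is the standard Dublin Core element set; the values are made up.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

def dc_record(fields: dict) -> ET.Element:
    record = ET.Element("record")
    for name, value in fields.items():
        elem = ET.SubElement(record, f"{{{DC}}}{name}")
        elem.text = value
    return record

artefact = dc_record({
    "title": "Terracotta figurine",          # hypothetical sample values
    "creator": "Unknown",
    "date": "c. 3rd century BCE",
    "format": "image/tiff",
    "identifier": "museum-accession-0001",
    "description": "Digitized image of a terracotta figurine.",
})
print(ET.tostring(artefact, encoding="unicode"))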
4624167
https://en.wikipedia.org/wiki/Virtual%20console
Virtual console
A virtual console (VC) – also known as a virtual terminal (VT) – is a conceptual combination of the keyboard and display for a computer user interface. It is a feature of some Unix-like operating systems such as Linux, BSD, illumos, UnixWare, and macOS in which the system console of the computer can be used to switch between multiple virtual consoles to access unrelated user interfaces. Virtual consoles date back at least to Xenix and Concurrent CP/M in the 1980s. In the Linux console and other platforms, usually the first six virtual consoles provide a text terminal with a login prompt to a Unix shell. The graphical X Window System starts in the seventh virtual console. In Linux, the user switches between them by pressing the Alt key combined with a function key – for example Alt+F1 to access virtual console number 1. Alt+Left changes to the previous virtual console and Alt+Right to the next virtual console. To switch from the X Window System or a Wayland compositor, Ctrl+Alt+Fn works. (Note that users can redefine these default key combinations.) If several sessions of the X Window System are required to run in parallel, such as in the case of fast user switching or when debugging X programs on a separate X server, each X session usually runs in a separate virtual console. Implementation details Unix systems Unix workstations, such as those manufactured by Sun or Silicon Graphics, did not include virtual consoles. The only purpose of a console would be to fix the system so that the graphical environment could start. Sun Niagara-based servers running virtualization with Logical Domains get virtual console services from the Control domain. See also Virtual desktop, which works similarly to a virtual console, but operates on graphical desktops instead of a command prompt System console for the non-virtual console Text terminal for the textual interface in general Pseudo terminal for even more virtual consoles Terminal emulator for an application program that has the same function as a textual virtual console Notes References FreeBSD Handbook, chapter 3.2 Virtual Consoles and Terminals External links The Linux keyboard and console HOWTO Linux command chvt to switch vt from cmdline XENIX -- Microsoft's Short-lived Love Affair with Unix Computer terminals Terminal multiplexers User interfaces
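The chvt command listed in the external links can also be scripted to switch virtual consoles. Below is a minimal Python sketch; it assumes a Linux text console, the chvt utility, and sufficient privileges, and is an illustration rather than part of any tool described above.

# Sketch: switch to a given Linux virtual console by invoking chvt.
# Requires root (or appropriate permissions on the console device).
import subprocess
import sys

def switch_console(number: int) -> None:
    # chvt N makes /dev/ttyN the foreground virtual console,
    # the same effect as pressing Alt+F<N> at a text console.
    subprocess.run(["chvt", str(number)], check=True)

if __name__ == "__main__":
    target = int(sys.argv[1]) if len(sys.argv) > 1 else 1
    switch_console(target)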
22440680
https://en.wikipedia.org/wiki/SDL%20Passolo
SDL Passolo
SDL Passolo is a specialised visual software localization tool developed to enable the translation of user interfaces. History The company PASS Engineering GmbH was founded as a medical analysis system provider in Bonn, Germany, in 1990, and providing localized medical software proved crucial to its business. Due to the lack of appropriate tools for doing this, they began to develop their own localization tool, which they called Passolo. After first being used internally within their own company, Passolo became available as an independent localization tool in 1998. In June 2007, SDL acquired PASS Engineering. In 2012 the original development team was disbanded, and development and support were transferred to Cluj, Romania. About SDL Passolo SDL Passolo is a software localization tool, developed by SDL plc, that is customisable to the user's needs and requires no programming experience. It allows users to concentrate on the translation by accelerating the many technical aspects of software localization, and it is possible to work in a WYSIWYG (What You See Is What You Get) mode. Should the length of a text string change as a result of the translation (i.e. become longer or shorter), any necessary layout modifications to dialogs and forms can be made directly within the application. The application offers the tools required to localize all the elements contained in the software, including strings, menus, dialogs, bitmaps, and icons, without requiring access to the source text files or the development environment used for developing the software, as well as providing automatic test functions, interfaces to the major translation memories, and the option of working with all the major Windows formats as well as text files and tagged formats such as XML and HTML. Purpose SDL Passolo allows the user to: Use the visual localization environment to see which translation is best when no context is available View the translation of dialog boxes and menus in real-time (WYSIWYG) and adjust them accordingly Fully customise SDL Passolo to meet their needs with an integrated development environment Integrate SDL Passolo into their workflow management systems or development environment Editions SDL Passolo Translator Edition The Translator Edition is a free editor that can be downloaded from the company website. It allows translators to edit the bundles created with the Team or Collaboration Edition. It offers all functions except parsing source files and generating target files. SDL Passolo Essential SDL Passolo Essential is included as an application within the latest version of SDL Trados. This edition allows users to create and translate projects, as well as generate localized target files. Professional Edition The Professional Edition is a stand-alone solution which is particularly suited for localization projects of a medium to large size. It supports operations covering compound modules. As the edition is integrated with SDL Trados and SDL MultiTerm, translation data can be exported for use while translating relevant manuals and online help. It can also be used for data exchange with other systems. The integrated script development environment also makes it possible to change or add functions to SDL Passolo. Team Edition The Team Edition offers the same functions as the Professional Edition. In addition, it can be used to create and administer a certain number of licensed translation bundles. These translation bundles can be processed using the free Translator Edition.
The Team Edition is available in three variants, depending on the number of translation bundles used in a particular project: 5, 10 or unlimited bundles. Collaboration Edition The Collaboration Edition offers the same functions as the Team Edition. In addition, it can create translation bundles that can be synchronized with the original project via a network folder or FTP server. When the "Synchronize Exports" function is called, all source file changes are sent to the translation bundle and all current translations are sent back to the project. There is no need to import and re-export the bundles to keep the project and the bundle up to date. System requirements SDL Passolo runs on Windows Vista, Windows 7 or Windows 10. It is also able to localize software for Windows 9x. References External links http://www.translationzone.com/passolo - SDL Passolo http://www.sdl.com/solution/language/software-localization/passolo - SDL Passolo https://community.sdl.com/solutions/language/translationproductivity/sdl_passolo/ - SDL Community User interfaces Software-localization tools
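The resource-based workflow described above (extracting translatable UI strings from a resource, translating them, and generating a localized target file without touching the application's source code) can be illustrated with a small sketch. This is a minimal, hypothetical Python example assuming a simple XML string table; it is not Passolo's actual file format or API.

```python
# Minimal sketch of resource-based localization (hypothetical XML string table,
# not Passolo's format): read the source resource, apply a translation table,
# and write a localized copy, i.e. the extract/translate/generate cycle above.
import xml.etree.ElementTree as ET

translations_de = {            # translations as they might come back from a translator
    "Open File": "Datei öffnen",
    "Cancel": "Abbrechen",
}

def localize(source_path, target_path, translations):
    tree = ET.parse(source_path)   # e.g. <strings><string id="IDS_OPEN">Open File</string></strings>
    for node in tree.iter("string"):
        original = node.text or ""
        node.text = translations.get(original, original)  # fall back to source text if untranslated
    tree.write(target_path, encoding="utf-8", xml_declaration=True)

localize("strings.en.xml", "strings.de.xml", translations_de)
```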
78267
https://en.wikipedia.org/wiki/Systems%20Network%20Architecture
Systems Network Architecture
Systems Network Architecture (SNA) is IBM's proprietary networking architecture, created in 1974. It is a complete protocol stack for interconnecting computers and their resources. SNA describes formats and protocols and is, in itself, not a piece of software. The implementation of SNA takes the form of various communications packages, most notably Virtual Telecommunications Access Method (VTAM), the mainframe software package for SNA communications. History SNA was made public as part of IBM's "Advanced Function for Communications" announcement in September, 1974, which included the implementation of the SNA/SDLC (Synchronous Data Link Control) protocols on new communications products: IBM 3767 communication terminal (printer) IBM 3770 data communication system They were supported by IBM 3704/3705 communication controllers and their Network Control Program (NCP), and by System/370 and their VTAM and other software such as CICS and IMS. This announcement was followed by another announcement in July, 1975, which introduced the IBM 3760 data entry station, the IBM 3790 communication system, and the new models of the IBM 3270 display system. SNA was mainly designed by the IBM Systems Development Division laboratory in Research Triangle Park, North Carolina, USA, helped by other laboratories that implemented SNA/SDLC. The details were later made public by IBM's System Reference Library manuals and IBM Systems Journal. SNA is still used extensively in banks and other financial transaction networks, as well as in many government agencies. While IBM is still providing support for SNA, one of the primary pieces of hardware, the 3745/3746 communications controller, has been withdrawn from the market by IBM. There are an estimated 20,000 of these controllers installed however, and IBM continues to provide hardware maintenance service and microcode features to support users. A robust market of smaller companies continues to provide the 3745/3746, features, parts and service. VTAM is also supported by IBM, as is the IBM Network Control Program (NCP) required by the 3745/3746 controllers. In 2008 an IBM publication said: with the popularity and growth of TCP/IP, SNA is changing from being a true network architecture to being what could be termed an "application and application access architecture." In other words, there are many applications that still need to communicate in SNA, but the required SNA protocols are carried over the network by IP. Objectives of SNA IBM in the mid-1970s saw itself mainly as a hardware vendor and hence all its innovations in that period aimed to increase hardware sales. SNA's objective was to reduce the costs of operating large numbers of terminals and thus induce customers to develop or expand interactive terminal-based systems as opposed to batch systems. An expansion of interactive terminal-based systems would increase sales of terminals and more importantly of mainframe computers and peripherals - partly because of the simple increase in the volume of work done by the systems and partly because interactive processing requires more computing power per transaction than batch processing. Hence SNA aimed to reduce the main non-computer costs and other difficulties in operating large networks using earlier communications protocols. The difficulties included: Often a communications line could not be shared by terminals of different types, as they used different "dialects" of the existing communications protocols. 
Up to the early 1970s, computer components were so expensive and bulky that it was not feasible to include all-purpose communications interface cards in terminals. Every type of terminal had a hard-wired communications card which supported only the operation of one type of terminal without compatibility with other types of terminals on the same line. The protocols which the primitive communications cards could handle were not efficient. Each communications line used more time transmitting data than modern lines do. Telecommunications lines at the time were of much lower quality. For example, it was almost impossible to run a dial-up line at more than 19,200 bits per second because of the overwhelming error rate, compared with 56,000 bits per second today on dial-up lines; and in the early 1970s few leased lines were run at more than 2400 bits per second (these low speeds are a consequence of Shannon's Law in a relatively low-technology environment). As a result, running a large number of terminals required a lot more communications lines than the number required today, especially if different types of terminals needed to be supported, or the users wanted to use different types of applications (e.g. under CICS or TSO) from the same location. In purely financial terms, SNA's objectives were to increase customers' spending on terminal-based systems and at the same time to increase IBM's share of that spending, mainly at the expense of the telecommunications companies. SNA also aimed to overcome a limitation of the architecture which IBM's System/370 mainframes inherited from System/360. Each CPU could connect to at most 16 I/O channels and each channel could handle up to 256 peripherals - i.e. there was a maximum of 4096 peripherals per CPU. At the time when SNA was designed, each communications line counted as a peripheral. Thus the number of terminals with which powerful mainframes could otherwise communicate was limited. Principal components and technologies Improvements in computer component technology made it feasible to build terminals that included more powerful communications cards which could operate a single standard communications protocol rather than a very stripped-down protocol which suited only a specific type of terminal. As a result, several multi-layer communications protocols were proposed in the 1970s, of which IBM's SNA and ITU-T's X.25 became dominant later. The most important elements of SNA include: IBM Network Control Program (NCP), a communications program running on the 3705 and subsequent 37xx communications processors that, among other things, implements the packet switching protocol defined by SNA. The protocol performs two main functions: It is a packet forwarding protocol, acting like a modern switch, forwarding data packets to the next node, which might be a mainframe, a terminal or another 3705. The communications processors supported only hierarchical networks with a mainframe at the center, unlike modern routers which support peer-to-peer networks in which a machine at the end of the line can be both a client and a server at the same time. It is a multiplexer that concentrated multiple terminals onto one communication line to the CPU, thus relieving the constraints on the maximum number of communication lines per CPU. A 3705 could support a larger number of lines (352 initially) but was counted as only one peripheral by the CPUs and channels. Since the launch of SNA, IBM has introduced improved communications processors, of which the latest is the 3745.
Synchronous Data Link Control (SDLC), a protocol which greatly improved the efficiency of data transfer over a single link: It is a sliding window protocol, which enables terminals and 3705 communications processors to send frames of data one after the other without waiting for an acknowledgement of the previous frame - the communications cards had sufficient memory and processing capacity to remember the last 7 frames sent or received, request re-transmission of only those frames which contained errors, and slot the re-transmitted frames into the right place in the sequence before forwarding them to the next stage. These frames all had the same type of envelope (frame header and trailer), which contained enough information for data packets from different types of terminal to be sent along the same communications line, leaving the mainframe to deal with any differences in the formatting of the content or in the rules governing dialogs with different types of terminal. Remote terminals (e.g., those connected to the mainframe by telephone lines) and 3705 communications processors would have SDLC-capable communications cards. This was a precursor of the packet communication that eventually evolved into today's TCP/IP technology. SDLC itself evolved into HDLC, one of the base technologies for dedicated telecommunication circuits. VTAM, a software package to provide log-in, session-keeping and routing services within the mainframe. A terminal user would log in via VTAM to a specific application or application environment (e.g. CICS, IMS, DB2 or TSO/ISPF). VTAM would then route data from that terminal to the appropriate application or application environment until the user logged out and possibly logged into another application. The original versions could only maintain one session per terminal. In the 1980s further software (mainly from third-party vendors) made it possible for a terminal to have simultaneous sessions with different applications or application environments.
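The sliding-window behaviour described for SDLC above (up to seven frames in flight, with only the damaged frames retransmitted and then slotted back into sequence) can be sketched as a toy simulation. This is an illustrative model only, assuming an unreliable link with a made-up 20% error rate; it does not implement the real SDLC frame format, timers or control fields.

```python
# Toy model of a 7-frame sliding window with selective retransmission,
# illustrating the SDLC behaviour described above (not real SDLC framing).
import random

WINDOW = 7
frames = [f"frame-{i}" for i in range(20)]        # payloads to deliver
received = {}                                     # sequence number -> payload at the receiver

def deliver(seq, payload):
    """Simulated link: roughly 20% of frames arrive damaged and must be resent."""
    if random.random() < 0.2:
        return False                              # receiver reports an error for this frame
    received[seq] = payload
    return True

pending = list(range(len(frames)))                # sequence numbers not yet acknowledged
while pending:
    window, pending = pending[:WINDOW], pending[WINDOW:]
    while window:                                 # retransmit only the damaged frames
        window = [seq for seq in window if not deliver(seq, frames[seq])]

reassembled = [received[seq] for seq in sorted(received)]   # slotted back into sequence
assert reassembled == frames
```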
Advantages and disadvantages SNA removed link control from the application program and placed it in the NCP. This had the following advantages and disadvantages: Advantages Localization of problems in the telecommunications network was easier because a relatively small amount of software actually dealt with communication links. There was a single error reporting system. Adding communication capability to an application program was much easier because the formidable area of link control software that typically requires interrupt processors and software timers was relegated to system software and NCP. With the advent of Advanced Peer-to-Peer Networking (APPN), routing functionality was the responsibility of the computer as opposed to the router (as with TCP/IP networks). Each computer maintained a list of nodes that defined the forwarding mechanisms. A centralized node type known as a Network Node maintained global tables of all other node types. APPN removed the need to maintain Advanced Program-to-Program Communication (APPC) routing tables that explicitly defined endpoint-to-endpoint connectivity. APPN sessions would route to endpoints through other allowed node types until they found the destination. This is similar to the way that routers for the Internet Protocol and the Netware Internetwork Packet Exchange protocol function. (APPN is also sometimes referred to as PU2.1 or Physical Unit 2.1. APPC, also sometimes referred to as LU6.2 or Logical Unit 6.2, was the only protocol defined for APPN networks, but was originally one of many protocols supported by VTAM/NCP, along with LU0, LU1, LU2 (3270 Terminal), and LU3. APPC was primarily used between CICS environments, as well as database services, because it contained protocols for two-phase commit processing. Physical Units were PU5 (VTAM), PU4 (37xx), PU2 (Cluster Controller). A PU5 was the most capable and was considered the primary in all communication. Other PU devices requested a connection from the PU5, and the PU5 could establish the connection or not. The other PU types could only be secondary to the PU5. PU2.1 added the ability for a node to connect to another PU2.1 node in a peer-to-peer environment.) Disadvantages Connection to non-SNA networks was difficult. An application which needed access to some communication scheme which was not supported in the current version of SNA faced obstacles. Before IBM included X.25 support (NPSI) in SNA, connecting to an X.25 network would have been awkward. Conversion between X.25 and SNA protocols could have been provided either by NCP software modifications or by an external protocol converter. A sheaf of alternate pathways between every pair of nodes in a network had to be predesigned and stored centrally. Choice among these pathways by SNA was rigid and did not take advantage of current link loads for optimum speed. SNA network installation and maintenance are complicated and SNA network products are (or were) expensive. Attempts to reduce SNA network complexity by adding IBM Advanced Peer-to-Peer Networking functionality were not really successful, if only because the migration from traditional SNA to SNA/APPN was very complex, without providing much additional value, at least initially. SNA software licences (VTAM) cost as much as $10,000 a month for high-end systems, and SNA IBM 3745 Communications Controllers typically cost over $100,000. TCP/IP was still seen as unfit for commercial applications, e.g. in the finance industry, until the late 1980s, but rapidly took over in the 1990s due to its peer-to-peer networking and packet communication technology. SNA was designed in an era when the concept of layered communication had not yet been fully adopted by the computer industry. Applications, databases and communication functions were mingled into the same protocol or product, which made it difficult to maintain and manage. That was very common for products created at that time. Even after TCP/IP was fully developed, the X Window System was designed with the same model, where communication protocols were embedded into the graphic display application. SNA's connection-based architecture involved huge state-machine logic to keep track of everything. APPN added a new dimension to state logic with its concept of differing node types. While it was solid when everything was running correctly, there was still a need for manual intervention. Simple things like watching the Control Point sessions had to be done manually. APPN wasn't without issues; in the early days many shops abandoned it due to issues found in APPN support. Over time, however, many of the issues were worked out, but not before TCP/IP became increasingly popular in the early 1990s, which marked the beginning of the end for SNA. Security SNA at its core was designed with the ability to wrap different layers of connections with a blanket of security.
To communicate within an SNA environment, you would first have to connect to a node and establish and maintain a link connection into the network. You then have to negotiate a proper session and handle the flows within the session itself. At each level there are different security controls that can govern the connections and protect the session information. Network Addressable Units Network Addressable Units in an SNA network are any components that can be assigned an address and can send and receive information. They are distinguished further as follows: a System Services Control Point (SSCP) provides resource management and other session services (such as directory services) for users in a subarea network; a Physical Unit is a combination of hardware and software components that control the links to other nodes; a Logical Unit acts as the intermediary between the user and the network. Logical Unit (LU) SNA essentially offers transparent communication: equipment specifics do not impose any constraints on LU-LU communication. It is nevertheless useful to make a distinction between LU types, as the application must take the functionality of the terminal equipment into account (e.g. screen sizes and layout). Within SNA there are three types of data stream to connect local display terminals and printers: there is SNA Character String (SCS), used for LU1 terminals and for logging on to an SNA network with Unformatted System Services (USS); there is the 3270 data stream mainly used by mainframes such as the System/370 and successors, including the zSeries family; and the 5250 data stream mainly used by minicomputers/servers such as the System/34, System/36, System/38, and AS/400 and its successors, including System i and IBM Power Systems running IBM i. SNA defines several kinds of devices, called Logical Unit types: LU0 provides for undefined devices, or build-your-own protocols. This is also used for non-SNA 3270 devices supported by TCAM or VTAM. LU1 devices are printers or combinations of keyboards and printers. LU2 devices are IBM 3270 display terminals. LU3 devices are printers using 3270 protocols. LU4 devices are batch terminals. LU5 has never been defined. LU6 provides for protocols between two applications. LU7 provides for sessions with IBM 5250 terminals. The primary ones in use are LU1, LU2, and LU6.2 (an advanced protocol for application-to-application conversations). Physical Unit (PU) PU1 nodes are terminal controllers such as the IBM 6670 or IBM 3767. PU2 nodes are cluster controllers running configuration support programs such as the IBM 3174, IBM 3274, or the IBM 4701 or IBM 4702 Branch Controller. PU2.1 nodes are peer-to-peer (APPN) nodes. PU3 was never defined. PU4 nodes are front-end processors running the Network Control Program (NCP), such as the IBM 37xx series. PU5 nodes are host computer systems. The term 37xx refers to IBM's family of SNA communications controllers. The 3745 supports up to eight high-speed T1 circuits, the 3725 is a large-scale node and front-end processor for a host, and the 3720 is a remote node that functions as a concentrator and router. SNA over Token-Ring VTAM/NCP PU4 nodes attached to IBM Token Ring networks can share the same Local Area Network infrastructure with workstations and servers. NCP encapsulates SNA packets into Token-Ring frames, allowing sessions to flow over a Token-Ring network. The actual encapsulation and decapsulation takes place in the 3745.
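The Token-Ring encapsulation described in the paragraph above amounts to wrapping an SNA path information unit (PIU) inside a LAN frame so that it can cross the ring unchanged. The sketch below is purely conceptual: the field layout is heavily simplified and the values are illustrative (real 802.5/LLC2 framing carries access and frame control fields, optional routing information and a checksum, none of which are shown).

```python
# Conceptual sketch only: an SNA PIU carried as the payload of a simplified
# Token-Ring/LLC frame. Field sizes and values are illustrative, not authoritative.
def tokenring_encapsulate(dest_mac: bytes, src_mac: bytes, sna_piu: bytes) -> bytes:
    llc_header = bytes([0x04, 0x04, 0x00])   # DSAP, SSAP, control; SAP 0x04 is conventionally associated with SNA
    return dest_mac + src_mac + llc_header + sna_piu

frame = tokenring_encapsulate(b"\x40\x00\x00\x00\x00\x01",
                              b"\x40\x00\x00\x00\x00\x02",
                              b"...SNA transmission header and request unit...")
```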
SNA over IP As mainframe-based entities looked for alternatives to their 37XX-based networks, IBM partnered with Cisco in the mid-1990s and together they developed Data Link Switching, or DLSw. DLSw encapsulates SNA packets into IP datagrams, allowing sessions to flow over an IP network. The actual encapsulation and decapsulation takes place in Cisco routers at each end of a DLSw peer connection. At the local, or mainframe site, the router uses Token Ring topology to connect natively to VTAM. At the remote (user) end of the connection, a PU type 2 emulator (such as an SNA gateway server) connects to the peer router via the router's LAN interface. End user terminals are typically PCs with 3270 emulation software that is defined to the SNA gateway. The VTAM/NCP PU type 2 definition becomes a Switched Major Node that can be local to VTAM (without an NCP), and a "Line" connection can be defined using various possible solutions (such as a Token Ring interface on the 3745, a 3172 Lan Channel Station, or a Cisco ESCON-compatible Channel Interface Processor). See also Network Data Mover Protocol Wars TN3270 TN5250 Notes References External links Cisco article on SNA APPN Implementers Workshop Architecture Document repository SNA protocols quite technical Related whitepapers sdsusa.com Systems Network Architecture Network protocols Systems Network Architecture
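The DLSw approach described in the "SNA over IP" section above boils down to tunnelling link-layer SNA traffic inside a TCP connection between two peer routers. The sketch below illustrates only that framing idea: the header layout is invented for the example, whereas the real DLSw protocol (standardized in RFC 1795) defines its own richer header and a circuit-establishment handshake.

```python
# Conceptual sketch of DLSw-style tunnelling: an SNA/SDLC frame is prefixed with
# a small header and carried over TCP between two peers. The header here is
# invented for illustration; it is not the RFC 1795 wire format.
import socket
import struct

HEADER = struct.Struct("!HI")   # frame length (2 bytes) + circuit id (4 bytes), network byte order

def dlsw_send(sock: socket.socket, circuit_id: int, sna_frame: bytes) -> None:
    sock.sendall(HEADER.pack(len(sna_frame), circuit_id) + sna_frame)

def dlsw_receive(sock: socket.socket) -> tuple:
    length, circuit_id = HEADER.unpack(sock.recv(HEADER.size))
    return circuit_id, sock.recv(length)   # sketch: assumes the whole frame arrives in one read
```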
2161
https://en.wikipedia.org/wiki/Artistic%20License
Artistic License
The Artistic License is a software license used for certain free and open-source software packages, most notably the standard implementation of the Perl programming language and most CPAN modules, which are dual-licensed under the Artistic License and the GNU General Public License (GPL). History Artistic License 1.0 The original Artistic License was written by Larry Wall. The name of the license is a reference to the concept of artistic license. Whether or not the original Artistic License is a free software license is largely unsettled. The Free Software Foundation explicitly called the original Artistic License a non-free license, criticizing it as being "too vague; some passages are too clever for their own good, and their meaning is not clear". The FSF recommended that the license not be used on its own, but approved the common AL/GPL dual-licensing approach for Perl projects. In response to this, Bradley Kuhn, who later worked for the Free Software Foundation, made a minimal redraft to clarify the ambiguous passages. This was released as the Clarified Artistic License and was approved by the FSF. It is used by the Paros Proxy, the JavaFBP toolkit and NcFTP. The terms of the Artistic License 1.0 were at issue in Jacobsen v. Katzer; the initial 2009 ruling by the United States District Court for the Northern District of California declared that FOSS-like licenses could only be enforced through contract law rather than through copyright law, in contexts where contract damages would be difficult to establish. On appeal, a federal appellate court "determined that the terms of the Artistic License are enforceable copyright conditions". The case was remanded to the District Court, which did not apply the superior court's criteria on the grounds that, in the interim, the governing Supreme Court precedent applicable to the case had changed. However, this left undisturbed the finding that a free and open-source license nonetheless has economic value. Jacobsen ultimately prevailed in 2010, and the case established a new standard making terms and conditions under Artistic License 1.0 enforceable through copyright statutes and relevant precedents. Artistic License 2.0 In response to the Request for Comments (RFC) process for improving the licensing position for Perl 6, Kuhn's draft was extensively rewritten by Roberta Cairney and Allison Randal for readability and legal clarity, with input from the Perl community. This resulted in the Artistic License 2.0, which has been approved as both a free software license and an open source license. The Artistic License 2.0 is also notable for its excellent license compatibility with other FOSS licenses due to a relicensing clause, a property other licenses like the GPL lack. It has been adopted by some of the Perl 6 implementations and the Mojolicious framework, and has been used by the Parrot virtual machine since version 0.4.13. It is also used by the SNEeSe emulator, which was formerly licensed under the Clarified Artistic License. The OSI recommends that all developers and projects licensing their products with the Artistic License adopt Artistic License 2.0. See also Software using the Artistic license (category) References External links Version 1.0 The Artistic License – The original Artistic License 1.0, the one which is still used by Perl and CPAN; they use a disjunction of the Artistic License 1.0 and the GNU GPL for Perl 5 and above. The Clarified Artistic License Version 2.0 The Artistic License 2.0 – Used, for example, by Parrot.
2.0 revision RFC process Prominent uses Dusk – The first online novel and blog written under Artistic License 2.0. "R.E.M. releases videos under Artistic License 2.0" – About R.E.M.'s choice of the Artistic License 2.0 for videos from one of their albums. Free and open-source software licenses
1980870
https://en.wikipedia.org/wiki/Benchmark%20%28computing%29
Benchmark (computing)
In computing, a benchmark is the act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it. The term benchmark is also commonly utilized for the purposes of elaborately designed benchmarking programs themselves. Benchmarking is usually associated with assessing performance characteristics of computer hardware, for example, the floating point operation performance of a CPU, but there are circumstances when the technique is also applicable to software. Software benchmarks are, for example, run against compilers or database management systems (DBMS). Benchmarks provide a method of comparing the performance of various subsystems across different chip/system architectures. Purpose As computer architecture advanced, it became more difficult to compare the performance of various computer systems simply by looking at their specifications. Therefore, tests were developed that allowed comparison of different architectures. For example, Pentium 4 processors generally operated at a higher clock frequency than Athlon XP or PowerPC processors, which did not necessarily translate to more computational power; a processor with a slower clock frequency might perform as well as or even better than a processor operating at a higher frequency. See BogoMips and the megahertz myth. Benchmarks are designed to mimic a particular type of workload on a component or system. Synthetic benchmarks do this by specially created programs that impose the workload on the component. Application benchmarks run real-world programs on the system. While application benchmarks usually give a much better measure of real-world performance on a given system, synthetic benchmarks are useful for testing individual components, like a hard disk or networking device. Benchmarks are particularly important in CPU design, giving processor architects the ability to measure and make tradeoffs in microarchitectural decisions. For example, if a benchmark extracts the key algorithms of an application, it will contain the performance-sensitive aspects of that application. Running this much smaller snippet on a cycle-accurate simulator can give clues on how to improve performance. Prior to 2000, computer and microprocessor architects used SPEC to do this, although SPEC's Unix-based benchmarks were quite lengthy and thus unwieldy to use intact. Computer manufacturers are known to configure their systems to give unrealistically high performance on benchmark tests that are not replicated in real usage. For instance, during the 1980s some compilers could detect a specific mathematical operation used in a well-known floating-point benchmark and replace the operation with a faster mathematically equivalent operation. However, such a transformation was rarely useful outside the benchmark until the mid-1990s, when RISC and VLIW architectures emphasized the importance of compiler technology as it related to performance. Benchmarks are now regularly used by compiler companies to improve not only their own benchmark scores, but real application performance. CPUs that have many execution units — such as a superscalar CPU, a VLIW CPU, or a reconfigurable computing CPU — typically have slower clock rates than a sequential CPU with one or two execution units when built from transistors that are just as fast. 
Nevertheless, CPUs with many execution units often complete real-world and benchmark tasks in less time than the supposedly faster high-clock-rate CPU. Given the large number of benchmarks available, a manufacturer can usually find at least one benchmark that shows its system will outperform another system; the other systems can be shown to excel with a different benchmark. Manufacturers commonly report only those benchmarks (or aspects of benchmarks) that show their products in the best light. They have also been known to misrepresent the significance of benchmarks, again to show their products in the best possible light. Taken together, these practices are called bench-marketing. Ideally, benchmarks should only substitute for real applications if the application is unavailable, or too difficult or costly to port to a specific processor or computer system. If performance is critical, the only benchmark that matters is the target environment's application suite. Functionality Features of benchmarking software may include recording/exporting the course of performance to a spreadsheet file, visualization such as drawing line graphs or color-coded tiles, and pausing the process to be able to resume without having to start over. Software can have additional features specific to its purpose; for example, disk benchmarking software may be able to optionally start measuring the disk speed within a specified range of the disk rather than the full disk, measure random access reading speed and latency, have a "quick scan" feature which measures the speed through samples of specified intervals and sizes, and allow specifying a data block size, meaning the number of requested bytes per read request. Challenges Benchmarking is not easy and often involves several iterative rounds in order to arrive at predictable, useful conclusions. Interpretation of benchmarking data is also extraordinarily difficult. Here is a partial list of common challenges: Vendors tend to tune their products specifically for industry-standard benchmarks. Norton SysInfo (SI) is particularly easy to tune for, since it is mainly biased toward the speed of multiple operations. Use extreme caution in interpreting such results. Some vendors have been accused of "cheating" at benchmarks — doing things that give much higher benchmark numbers, but make things worse on the actual likely workload. Many benchmarks focus entirely on the speed of computational performance, neglecting other important features of a computer system, such as: Qualities of service, aside from raw performance. Examples of unmeasured qualities of service include security, availability, reliability, execution integrity, serviceability, scalability (especially the ability to quickly and nondisruptively add or reallocate capacity), etc. There are often real trade-offs between and among these qualities of service, and all are important in business computing. Transaction Processing Performance Council Benchmark specifications partially address these concerns by specifying ACID property tests, database scalability rules, and service level requirements. In general, benchmarks do not measure Total cost of ownership. Transaction Processing Performance Council Benchmark specifications partially address this concern by specifying that a price/performance metric must be reported in addition to a raw performance metric, using a simplified TCO formula.
However, the costs are necessarily only partial, and vendors have been known to price specifically (and only) for the benchmark, designing a highly specific "benchmark special" configuration with an artificially low price. Even a tiny deviation from the benchmark package results in a much higher price in real-world experience. Facilities burden (space, power, and cooling). When more power is used, a portable system will have a shorter battery life and require recharging more often. A server that consumes more power and/or space may not be able to fit within existing data center resource constraints, including cooling limitations. There are real trade-offs as most semiconductors require more power to switch faster. See also performance per watt. In some embedded systems, where memory is a significant cost, better code density can significantly reduce costs. Vendor benchmarks tend to ignore requirements for development, test, and disaster recovery computing capacity. Vendors only like to report what might be narrowly required for production capacity in order to make their initial acquisition price seem as low as possible. Benchmarks have trouble adapting to widely distributed servers, particularly those with extra sensitivity to network topologies. The emergence of grid computing, in particular, complicates benchmarking since some workloads are "grid friendly", while others are not. Users can have very different perceptions of performance from what benchmarks may suggest. In particular, users appreciate predictability — servers that always meet or exceed service level agreements. Benchmarks tend to emphasize mean scores (IT perspective), rather than maximum worst-case response times (real-time computing perspective), or low standard deviations (user perspective). Many server architectures degrade dramatically at high (near 100%) levels of usage — "fall off a cliff" — and benchmarks should (but often do not) take that factor into account. Vendors, in particular, tend to publish server benchmarks at continuous loads of about 80% usage — an unrealistic situation — and do not document what happens to the overall system when demand spikes beyond that level. Many benchmarks focus on one application, or even one application tier, to the exclusion of other applications. Most data centers are now implementing virtualization extensively for a variety of reasons, and benchmarking is still catching up to that reality where multiple applications and application tiers are concurrently running on consolidated servers. There are few (if any) high quality benchmarks that help measure the performance of batch computing, especially high volume concurrent batch and online computing. Batch computing tends to be much more focused on the predictability of completing long-running tasks correctly before deadlines, such as end of month or end of fiscal year. Many important core business processes are batch-oriented and probably always will be, such as billing. Benchmarking institutions often disregard or do not follow the basic scientific method. This includes, but is not limited to: small sample size, lack of variable control, and the limited repeatability of results. Benchmarking Principles There are seven vital characteristics for benchmarks. These key properties are: Relevance: Benchmarks should measure relatively vital features. Representativeness: Benchmark performance metrics should be broadly accepted by industry and academia. Equity: All systems should be fairly compared.
Repeatability: Benchmark results can be verified. Cost-effectiveness: Benchmark tests are economical. Scalability: Benchmark tests should work across systems possessing a range of resources from low to high. Transparency: Benchmark metrics should be easy to understand. Types of benchmark Real program word processing software tool software of CAD user's application software (i.e.: MIS) Component Benchmark / Microbenchmark core routine consists of a relatively small and specific piece of code. measure performance of a computer's basic components may be used for automatic detection of computer's hardware parameters like number of registers, cache size, memory latency, etc. Kernel contains key codes normally abstracted from actual program popular kernel: Livermore loop linpack benchmark (contains basic linear algebra subroutine written in FORTRAN language) results are represented in Mflop/s. Synthetic Benchmark Procedure for programming synthetic benchmark: take statistics of all types of operations from many application programs get proportion of each operation write program based on the proportion above Types of Synthetic Benchmark are: Whetstone Dhrystone These were the first general purpose industry standard computer benchmarks. They do not necessarily obtain high scores on modern pipelined computers. I/O benchmarks Database benchmarks measure the throughput and response times of database management systems (DBMS) Parallel benchmarks used on machines with multiple cores and/or processors, or systems consisting of multiple machines Common benchmarks Industry standard (audited and verifiable) Business Applications Performance Corporation (BAPCo) Embedded Microprocessor Benchmark Consortium (EEMBC) Standard Performance Evaluation Corporation (SPEC), in particular their SPECint and SPECfp Transaction Processing Performance Council (TPC): DBMS benchmarks Open source benchmarks AIM Multiuser Benchmark – composed of a list of tests that could be mixed to create a ‘load mix’ that would simulate a specific computer function on any UNIX-type OS. Bonnie++ – filesystem and hard drive benchmark BRL-CAD – cross-platform architecture-agnostic benchmark suite based on multithreaded ray tracing performance; baselined against a VAX-11/780; and used since 1984 for evaluating relative CPU performance, compiler differences, optimization levels, coherency, architecture differences, and operating system differences. Collective Knowledge – customizable, cross-platform framework to crowdsource benchmarking and optimization of user workloads (such as deep learning) across hardware provided by volunteers Coremark – Embedded computing benchmark DEISA Benchmark Suite – scientific HPC applications benchmark Dhrystone – integer arithmetic performance, often reported in DMIPS (Dhrystone millions of instructions per second) DiskSpd – Command-line tool for storage benchmarking that generates a variety of requests against computer files, partitions or storage devices Fhourstones – an integer benchmark HINT – designed to measure overall CPU and memory performance Iometer – I/O subsystem measurement and characterization tool for single and clustered systems. 
IOzone – Filesystem benchmark LINPACK benchmarks – traditionally used to measure FLOPS Livermore loops NAS parallel benchmarks NBench – synthetic benchmark suite measuring performance of integer arithmetic, memory operations, and floating-point arithmetic PAL – a benchmark for realtime physics engines PerfKitBenchmarker – A set of benchmarks to measure and compare cloud offerings. Phoronix Test Suite – open-source cross-platform benchmarking suite for Linux, OpenSolaris, FreeBSD, OSX and Windows. It includes a number of other benchmarks included on this page to simplify execution. POV-Ray – 3D render Tak (function) – a simple benchmark used to test recursion performance TATP Benchmark – Telecommunication Application Transaction Processing Benchmark TPoX – An XML transaction processing benchmark for XML databases VUP (VAX unit of performance) – also called VAX MIPS Whetstone – floating-point arithmetic performance, often reported in millions of Whetstone instructions per second (MWIPS) Microsoft Windows benchmarks BAPCo: MobileMark, SYSmark, WebMark CrystalDiskMark Futuremark: 3DMark, PCMark Heaven Benchmark PiFast Superposition Benchmark Super PI SuperPrime Valley Benchmark Whetstone Windows System Assessment Tool, included with Windows Vista and later releases, providing an index for consumers to rate their systems easily Worldbench (discontinued) Others AnTuTu – commonly used on phones and ARM-based devices. Geekbench – A cross-platform benchmark for Windows, Linux, macOS, iOS and Android. iCOMP – the Intel comparative microprocessor performance, published by Intel Khornerstone Performance Rating – modeling scheme used by AMD and Cyrix to reflect the relative performance usually compared to competing products. SunSpider – a browser speed test VMmark – a virtualization benchmark suite. See also Benchmarking (business perspective) Figure of merit Lossless compression benchmarks Performance Counter Monitor Test suite a collection of test cases intended to show that a software program has some specified set of behaviors References Further reading
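The synthetic-benchmark procedure listed under "Types of benchmark" above (take statistics of operation types, derive their proportions, then write a program that exercises operations in those proportions) can be illustrated with a toy example. The operation mix and workload size below are invented purely for illustration; a real synthetic benchmark would derive them from profiling actual applications.

```python
# Toy synthetic benchmark: build and time a workload whose operations follow a
# given (here invented) proportion, as in the procedure described above.
import math
import random
import time

operation_mix = {"int_add": 0.5, "float_mul": 0.3, "sqrt": 0.2}   # proportions from hypothetical profiling

operations = {
    "int_add":   lambda: 12345 + 67890,
    "float_mul": lambda: 3.14159 * 2.71828,
    "sqrt":      lambda: math.sqrt(987654.321),
}

def run_synthetic(total_ops=1_000_000, seed=42):
    rng = random.Random(seed)
    names = list(operation_mix)
    weights = [operation_mix[n] for n in names]
    plan = rng.choices(names, weights=weights, k=total_ops)   # workload in the measured proportions
    start = time.perf_counter()
    for name in plan:
        operations[name]()
    return time.perf_counter() - start

print(f"synthetic workload completed in {run_synthetic():.3f} s")
```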
84478
https://en.wikipedia.org/wiki/Cargo%20cult%20programming
Cargo cult programming
Cargo cult programming is a style of computer programming characterized by the ritual inclusion of code or program structures that serve no real purpose. Cargo cult programming is symptomatic of a programmer not understanding either a bug they were attempting to solve or the apparent solution (compare shotgun debugging, deep magic). The term cargo cult programmer may apply when anyone inexperienced with the problem at hand copies some program code from one place to another with little understanding of how it works or whether it is required. Cargo cult programming can also refer to the practice of applying a design pattern or coding style blindly without understanding the reasons behind that design principle. Some examples are adding unnecessary comments to self-explanatory code, overzealous adherence to the conventions of a programming paradigm, or adding deletion code for objects that garbage collection automatically collects. Obsessive and redundant checks for null values or testing whether a collection is empty before iterating its values may be a sign of cargo cult programming. Such obsessive checks make the code less readable. Origin The term cargo cult as an idiom originally referred to aboriginal religions that grew up in the South Pacific after World War II. The practices of these groups centered on building elaborate mock-ups of airplanes and military landing strips in the hope of summoning the god-like beings who arrived in airplanes that had brought marvelous cargo during the war. In recent decades, anthropology has distanced itself from the term "cargo cult," which is now seen as having been reductively applied to a lot of complicated and disparate social and religious movements that arose from the stress and trauma of colonialism, and sought to attain much more varied and amorphous goals—things like self-determination—than material cargo. Use of the term in computer programming probably derives from Richard Feynman's characterization of certain practices as cargo cult science. Cargo cult software engineering A related term to cargo cult programming in software engineering is cargo cult software engineering, coined by Steve McConnell. McConnell describes software development organizations that attempt to emulate more successful development houses, either by slavishly following a software development process without understanding the reasoning behind it, or by attempting to emulate a commitment-oriented development approach (in which software developers devote large amounts of time and energy toward seeing their projects succeed) by mandating the long hours and unpaid overtime, while in successful companies these might instead be consequences of high motivation instead of causes of success. In both cases, McConnell contends that competence ultimately determines whether a project succeeds or fails, regardless of the development approach taken; furthermore, he claims that incompetent "imposter organizations" (which merely imitate the form of successful software development organizations) are in fact engaging in what he calls cargo cult software engineering. See also Copy and paste programming Cargo cult science Magical thinking Magic (programming) Voodoo programming References Further reading Surely You're Joking, Mr. Feynman!, Richard Feynman, W. W. Norton & Co, New York, 1985, . One of the chapters is the transcript of a 1974 Caltech commencement address, which contained the coining of "Cargo cult science". Cargo Cult Science, by Richard P. Feynman. 
Article based on his 1974 Caltech Commencement address, with pictures, as originally published in Engineering and Science, Volume 37:7, June 1974. Digitized version from Caltech Library, retrieved June 20, 2007 Anti-patterns Computer programming folklore Software engineering folklore
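The obsessive null and emptiness checks mentioned above can be made concrete with a short, contrived example. The function names are invented for illustration; the point is only the contrast between the ritual guards and the direct version.

```python
# Contrived illustration of the redundant checks described above: the "cargo cult"
# version guards against conditions Python already handles, since iterating an
# empty list is simply a no-op, so the extra tests add noise without adding safety.

def total_cargo_cult(values):
    if values is not None:              # redundant if callers always pass a list
        if len(values) > 0:             # redundant: iterating an empty list does nothing
            total = 0
            for v in values:
                if v is not None:       # redundant if the list never contains None
                    total += v
            return total
    return 0

def total_idiomatic(values):
    return sum(values)                  # the same result, stated directly

assert total_cargo_cult([1, 2, 3]) == total_idiomatic([1, 2, 3]) == 6
```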
55177852
https://en.wikipedia.org/wiki/List%20of%20Mr.%20Pickles%20characters
List of Mr. Pickles characters
The following is a list of characters from the series Mr. Pickles and its spin-off Momma Named Me Sheriff. Main characters Mr. Pickles Mr. Pickles (voiced by Dave Stewart in "Season 3 Finale") is the titular protagonist and the Goodman family's pet Border Collie, revealed in the Season 3 finale to be descended from a long bloodline of dogs that are physical embodiment of a Devil that ruled over an ancient island nation that previously sacrificed any living dogs. Through reasons unknown, Mr. Pickles ended up in Old Town and living with the Goodman family - named by Tommy for his love of eating pickles. He also enjoys mutilating, killing and engaging in sexual debauchery with those who pose a threat to Tommy or other protagonists. Mr. Pickles is highly intelligent and possesses incredible strength, surgical skills that he uses to mutilate his victims, the ability to steal from an evil scientist, a high-tech android, and demonic powers which allow him to control local animals to do his bidding (it is shown in one episode of Season 3 that he can't control snakes due to them having no ears to hear his demonic commands but learns to control them through vibrations). Despite his evil ways, he loves and is extremely protective of the impressionable young Tommy and his family: protecting them from danger, and often killing those who might do them harm, such as murdering a group of pedophiles or turning an unlicensed breast surgeon who gave Tommy breast implants into a multiple-breasted freak. He has a long-standing feud with Henry Gobbleblobber, the only Goodman family member aware of Mr. Pickles' evil nature. While Mr. Pickles does not actually harm Henry, the dog does put the old man in humiliating situations while making him seem crazy to everyone else. Despite their adversarial nature, Mr. Pickles has saved Henry from being lobotomized while he was committed to an asylum and a cannibalistic serial killer that Henry had befriended, unaware he was an escaped convict and serial killer. Mr. Pickles has also shown to be sexually attracted to Tommy's mother by often groping, touching her private areas, and even playfully dry-humping her. Mr. Pickles' doghouse leads to a hidden, underground lair that's filled with blood, pentagrams, victims – both living slaves and dead, mutilated bodies – and a throne where he usually eats his pickles. Many of the series' one-time characters usually end up in his lair where they are either killed or kept as prisoners. Henry is the only person shown to be able to enter Mr. Pickles' lair and survive repeatedly as nobody believes him due to his tales. He is responsible for turning Henry's wife Agnes into Steve, which is implied to have been due to her promising to do anything he wanted her to. Mr. Pickles and Agnes work together to stop Henry from revealing the truth by having Agnes pretend she has amnesia, but their plan hits a snag when Henry takes her to a memory download service which is actually a blackmailing scam run by a group of hackers. Agnes angers Mr. Pickles when she almost blows everything while being confronted by her daughter and Henry over her failed attempt to seduce Stanley into having an affair which had been revealed by Tommy. She is killed by a passing truck but Mr. Pickles revives her and sends her on one last mission to destroy the satellite which contains a backup file of her memories. Though it is implied that Agnes became Steve because Mr. Pickles threatened to kill her, Mr. 
Pickles sheds a tear as he shoots Steve into space, indicating that he did come to care for her. He also works well with Beverly when Stanley and his coworkers are kidnapped, and ends up saving her, Stanley, and his coworkers from their captors, showing that he is just as protective of Tommy's parents. In the Season 3 finale, Mr. Pickles chases Henry down to his cabin after he moves away, intending to lessen Henry's credibility further by making it seem that the dog can talk, using a speaker for one of his spare Steves to speak through. But Mr. Pickles decides to frame Henry for mass murder instead after the old man attempts to kill him, with Henry escaping the Sheriff and finding Mr. Pickles's ancestral home, where he and one of Mr. Pickles's servants devise a scheme to kill the dog. Though Mr. Pickles appears to outwit Henry and seemingly kills him by switching bodies with the owner of a show dog he was making love to, he ends up being killed for real, as Henry uses the same spell to switch bodies with Mr. Pickles's servant and then with the show dog. However, the show dog gives birth to a female puppy that Tommy names "Mrs. Pickles". Tommy Goodman Tommy Goodman (voiced by Kaitlyn Robrock) is the Goodman family's physically disabled and dimwitted young son. He is 6 years old, wears leg braces and considers Mr. Pickles his best friend, often feeding him pickles whenever he behaves. Tommy is often protected by Mr. Pickles and is protective of Mr. Pickles as well, such as when he took it upon himself to search for Mr. Pickles after he was kidnapped by mercenaries. Due to his idiocy and naïve personality, Tommy usually gets involved in dangerous situations, and Mr. Pickles (at times frustrated) steps in to save him and dispose of the bad people involved. Like most characters on the show, Tommy is unaware of Mr. Pickles' evil nature. At one point in season two, Mr. Pickles spends most of the episode away from Tommy; this results in Tommy getting involved in a series of accidents, causing him to scorn Mr. Pickles and not give him a pickle. However, in Season 3's opening, he returns to his friendly nature after finding his grandmother Agnes. Agnes, however, shows contempt for Tommy, calling him a stupid boy even while pretending to be amnesiac. Stanley Goodman Stanley Goodman (voiced by Jay Johnston) is Tommy's father, who works as a telemarketer. Stanley is constantly bullied by his boss. As a result, Stanley hardly has time to spend with his family, especially his son. Nevertheless, Stanley means well and is committed to his family. Though they don't interact much, Mr. Pickles seems to respect him to a degree. He even showed displeasure when it looked as though a doctor was about to bring Stanley harm. In the opening of Season 3, it is revealed that he once lay naked with his mother-in-law Agnes, which is exposed by Tommy when he has all the town's memories uploaded into his mind. However, Agnes reveals that he refused to have sex with her when she tried to seduce him. In Momma Named Me Sheriff, he quits his job as a telemarketer and becomes a deputy working under the Sheriff. Beverly Goodman Beverly Goodman (voiced by Brooke Shields) is Tommy's stay-at-home mother, the traditional dutiful mother and wife. Beverly is oblivious to Mr. Pickles' evil nature and dismisses the dog's sexual behavior, for instance groping and touching her, as simple dog mannerisms. Like her son, Beverly is very dimwitted and naïve and sometimes feels limited by her home life.
Despite being the first person to dismiss her father's stories about Mr. Pickles, she is very close to him, as he is one of the few people she can talk to. Near the end of the second season, she is shown to be very tired of her domestic duties and to have a strained relationship with her mother, who prevented Beverly from following her dreams. In Season 3, she returns to her normal self after Agnes is found by Tommy, unaware that this was arranged to prevent her father from revealing that Agnes is Steve. She hopes to reconnect with her mother despite believing she has amnesia. When it is revealed that Agnes had lain naked next to her husband, she is angered at Stanley and her mother, though Agnes reveals in anger that Stanley refused her attempts to seduce him before she is killed by a passing truck. However, Agnes' death seems to give her some closure thanks to the knowledge that Stanley chose to remain faithful to her and their marriage. She and Stanley decide to tell Tommy that his grandmother is in a better place, as he is unaware of her death. In Season 3, she is shown to be more assertive and capable, even working together with Mr. Pickles to rescue her husband when he is kidnapped together with his coworkers by an insane telemarketer-hating plantation owner. She is also capable of constructing a motorcycle from household items and even fighting off a motorcycle gang all by herself. However, she has a habit of damaging her husband's truck every time she drives, even when she does not drive the truck itself, as she ends up damaging it with her makeshift motorcycle after it is hit by a bowling ball thrown by the Sheriff. However, she retains her kind-hearted personality when she refuses to punish Tommy's bullies physically, as she doesn't condone corporal punishment or violence against children. Henry Gobbleblobber Henry Gobbleblobber (voiced by Frank Collison) is Tommy's grandfather and Beverly Goodman's father, referred to by everyone as "Grandpa Goodman" (despite the fact that he is really Beverly's father, while he and Stanley are in-laws). He is the only family member aware of Mr. Pickles' evil ways and has borne witness to the dog's several murders and other depraved acts. However, most of his family and the town Sheriff dismiss his attempts to expose the dog as just another one of his "evil Mr. Pickles stories", and sometimes the Sheriff arrests him instead. Henry views Mr. Pickles as a monster, though he understands that he and Mr. Pickles both care for Tommy's safety. Henry also suffers harassment from Mr. Pickles, though he briefly got peace from the dog during his brief fear of vacuum cleaners. At the end of Season 1 and the start of Season 2, Henry attempts to expose Mr. Pickles by video-taping the inside of Mr. Pickles' doghouse to show to the Sheriff, before learning that this Sheriff was one of Mr. Pickles' many high-tech androids. Henry ends up being committed to the Old Town asylum when he attacks the real Sheriff, realizing Mr. Pickles can't touch him there, while learning too late that the asylum's director wishes to lobotomize him. Mr. Pickles eventually manages to free Henry from the asylum, ironically saving him. In the Season 2 finale and Season 3, Henry's attempt to find out about a strange coin associated with Mr. Pickles leads to him learning that his wife Agnes, who is revealed to be Steve, is still alive and serving by Mr. Pickles' side. Henry threatens to expose them, but Mr.
Pickles arranges for Agnes to be found while feigning amnesia. Henry takes her to a memory download service in an attempt to forcefully recover her memories, but this backfires when the proprietor is revealed to be a blackmailing hacker. Following Agnes's death, when she is run over by a truck after it comes out that she had lain naked with his son-in-law, Henry angrily tries to confront Mr. Pickles before finding her revived; he makes some peace with Agnes as she sacrifices herself to destroy the satellite containing the hacker's memory backups. In the Season 3 finale and Season 4, Henry grows tired of Mr. Pickles' continued torment and moves out of Old Town to a mountainside cabin while promising to visit the Goodman family. Henry expects Mr. Pickles to come after him and lays a trap for him so he can kill the dog. But Mr. Pickles tricks Henry into believing he can speak, while revising his original plan of further discrediting the old man to instead frame him as a mass murderer, leaving Henry on the lam with the Sheriff following him to Mexico. Henry takes the Sheriff's gun and forces him to take him to Mr. Pickles's ancestral island home, gaining an accomplice in one of Mr. Pickles's followers as they scheme to snipe the dog. Though it seems Mr. Pickles is a step ahead and kills him while setting up another man to take the mass-murder charges, Henry had swapped souls with his accomplice beforehand to escape his death, and he then transfers his soul into a show dog Mr. Pickles was making out with in order to kill him, finally killing and defeating the surprised Mr. Pickles once and for all, though Henry's victory is short-lived when Mrs. Pickles is born. In Momma Named Me Sheriff, Henry is shown to be annoyed by Mrs. Pickles' antics. Sheriff Sheriff (voiced by Will Carsola) is the unnamed sheriff of Old Town. He lives with his mother and sister. He behaves like a young boy and has a doll named Abigail, whom he often talks to and has tea parties with. He is often called upon by Henry whenever he has evidence of Mr. Pickles being evil, but Mr. Pickles usually manages to place the blame on somebody else or remove the evidence, causing the Sheriff to remain skeptical of the old man's claims. Ironically, despite his dubious abilities as Old Town's sheriff, the Sheriff is often credited with the capture and arrest of individuals Mr. Pickles sets up to take the fall, contributing to the town's view that the Sheriff is a competent law enforcement official. The Sheriff himself seems to view Mr. Pickles as a good-natured dog, and in season 2 he even recruits Mr. Pickles as a police dog to help him track down some escaped serial killers. Mr. Pickles even saves the Sheriff from being killed by one of the escaped killers. This makes him one of the few people outside the Goodman family who actually ends up being protected by Mr. Pickles' evil actions. Because of his childlike personality, Mr. Pickles finds it easy to manipulate him, even dressing himself in human skin in Season 3 so he can engage in his daily sexual debauchery and murderous activities, then dressing a pimp in said skin suit and having him be killed by the Sheriff in order to take the fall for Mr. Pickles' activities, while at the same time giving the Sheriff the perfect birthday present for his mother: her son being hailed as a hero in the paper for apparently stopping a dangerous serial killer. After Henry finally succeeds in killing Mr. Pickles in the series finale, the Sheriff gets a spin-off called Momma Named Me Sheriff, where Stanley is now working for him as a deputy.
Recurring characters Steve Steve (voiced by Barbara Goodson) is Henry's wife, Beverly's mother, and Tommy's grandmother, who was originally known as Agnes Gobbleblobber. As revealed by Beverly, who bore some resentment towards her mother for cutting down her dreams of a career in fencing to be a housewife, Agnes was once a critical homebody. For reasons yet to be fully revealed, Agnes faked her death and became Mr. Pickles' gimp Steve. Steve lives in the caves under Mr. Pickles' doghouse as its guardian and right-hand while often taking part in Mr. Pickles' depraved activities. In the Season 2 finale and Season 3 premiere, Henry, assuming she was revived by Mr. Pickles, learns about Steve's true identity as she resumes her original identity while feigning memory loss, with the intention of making Henry's life more miserable. But a series of events involving her memories being downloaded exposes the fact that she once lay naked with her son-in-law. This provokes Agnes into a rant in which she reveals that Stanley refused to have sex with her; she oversteps her boundaries when she insults Tommy, nearly exposing herself. Agnes realizes too late that she has offended Mr. Pickles, and she ends up being run over and killed by a truck while pleading for the dog's forgiveness. After being revived by Mr. Pickles, Agnes makes some peace with Henry while saying her goodbyes to him and Mr. Pickles before sacrificing herself to destroy the satellite containing her downloaded memories. Before the satellite is destroyed, it is implied from one of her past memories that Agnes became Steve when Mr. Pickles held a knife to her throat and she pleaded for her life. Floyd Floyd (voiced by Dave Stewart) is a colleague of Stanley Goodman who works as a telemarketer. Morbidly obese to the point that his excessive fat sags, Floyd is almost always seen eating various foods. He became obese from traveling to food competitions, which he won. Because of his competition fame, he is surprisingly quite wealthy and lives in a mansion with his beautiful wife. His mansion's interior designs are made mostly out of food. He also has a gym, but doesn't use it since he can't fit through the door. Despite his appetite and appearance, he seems to get along with others, although Stanley finds him awkward to be around. In the second episode of the second season, Floyd has apparently become more obese. Mr. Bojenkins Mr. Bojenkins (voiced by Frank Collison in 2014–2016, Alex Désert in 2018–present) is a kindly African-American man who drives a wagon pulled by some large pit bulls. He's known to have sex with multiple women and is considered the Sheriff's best friend. Like the other adults, he makes fun of Grandpa's evil Mr. Pickles stories. In a season 2 episode, Mr. Bojenkins took it upon himself to save the Sheriff from a woman who was the daughter of a crime boss who had been held captive by Mr. Pickles since Season 1. After rescuing the Sheriff with the aid of the Sheriff's elderly mother, he ended up sleeping with her. In Season 3, he attempts to teach the Sheriff how to be a man after noticing he is a mama's boy. He takes the Sheriff to a bar and treats him to a drink, but the Sheriff gets really drunk, causing havoc in the bar. To make matters worse, Tommy finds Mr. Bojenkins' cellphone, which contains the numbers of various women he has been sleeping with, revealing his womanizing ways. They confront him at the bar, where it is revealed to the Sheriff that Mr. Bojenkins slept with his mother, and the Sheriff attacks him in a drunken rage.
They fall into a river and are washed up in a forest ruled by ravenous wolves. The two argue but are attacked by the wolves, though the Sheriff is able to pacify them briefly with his doll Abigail before they are dragged away. They meet the leading female of the pack, whom the Sheriff also pacifies, though it works too well, causing her to become attracted to the Sheriff, whom Mr. Bojenkins convinces to have sex with her to save their lives, though they are forced to flee when the leading male returns. Mr. Bojenkins reconciles with the Sheriff after their adventure. Linda Linda (voiced by Dave Stewart), also known as "Crazy Linda", is an unattractive and deranged woman who is Beverly's best friend. Her features include a unibrow, sagging prehensile breasts, a bulging stomach, a flipper on one foot, and overgrown toenails on another foot. She is known for trying to sell people, especially the Goodmans, products that are either old or broken. Linda was once married to Henry, but only because he was drunk thanks to Mr. Pickles. She is often seen digging in trash to search for something to sell to people. It is revealed that she's actually very wealthy and lives inside a fancy house that looks like a tree, but she often gives some of her money away to the homeless. Linda Jr. Linda Jr. is the daughter of Crazy Linda. She looks similar to her mother but is cleaner. Linda Jr. has a thumb growing out of her navel but doesn't let it bother her. She's sweet and kind, and helps Tommy learn to be himself and that being different shouldn't stop him from doing anything he wants to do. Linda Jr. indicates that she doesn't know the identity of her father. Momma Momma (voiced by Sean Conroy) is the Sheriff's mother, with whom he lives. Boss Boss (voiced by Will Carsola) is Stanley's young, acne-ridden boss with a shrill, pubescent voice who uses his position at Telemarketer Inc. to bully Stanley into doing undesirable and humiliating things. He acts like a brat, plays video games, and is married to a steroid muscle-building woman. In Momma Named Me Sheriff, Stanley has quit working for him prior to the spin-off. In one episode, it is revealed that his real name is "Boss" and that he has been selling some dangerous items. The Blorpton Brothers The Bullies (voiced by Will Carsola and Dave Stewart) are two unnamed boys who make fun of and tease Tommy for his condition while getting into suggestive poses that imply a homosexual attraction between them. In Season 3, it is revealed that they are the Blorpton Brothers, and Beverly attempts to speak to their parents after they beat up Tommy. However, their white trash parents encourage their behavior as they hate their kids, and they end up pawning them off to Beverly, who is forced to deal with their misbehavior. Mr. Pickles almost kills them when they attack Tommy with vibrators, but Beverly interrupts his attempt to murder them to protect Tommy. Eventually, the bullies are kidnapped by a gang that rides Segways to punish them for their misbehavior, though Beverly rescues them on a motorcycle she built out of household junk, and they become afraid of her after she single-handedly defeats the entire gang. Their parents try to ditch their children again, but the Sheriff causes Beverly to run over them and their boys. Mr. Pickles decides to take revenge by painting the parents' full body casts and hanging them like piñatas for the Segway gang member who is celebrating his birthday. Deer Hunters The Deer Hunters (voiced by Will Carsola and Dave Stewart) are two unnamed deer hunters.
Blade Blade is a handicapped computer hacker whose aid Mr. Bojenkins seeks in Season 2 to help save the Sheriff from his new girlfriend, who is actually the daughter of a crime boss. He loves raw bass and tends to offer it to anyone who visits him, resulting in his catchphrase "Fish?", though his grandmother hates that her house is full of stinking fish. In Season 3, Tommy seeks his help when a group of hackers running a memory upload service blackmails the town; this results in the memories being downloaded into Tommy's brain, forcing Mr. Pickles to kill the hackers' leader and revive him to save Tommy and prevent Steve's memories from being revealed. Other characters Mayor of Old Town The Mayor of Old Town (voiced by John Ennis) appears in the season 2 episode Mental Asylum. Mr. Pickles accidentally kills the Mayor when he is with a prostitute, during one of Mr. Pickles' senseless acts of murder. Mr. Pickles later poses as the Mayor by using the Mayor's skin as a suit in order to free Grandpa Goodman from the asylum. Mr. Pickles makes the Mayor appear crazy at a town hall meeting by humping a female dog, allowing himself to be committed, which lets him get inside in order to free Grandpa. In Momma Named Me Sheriff, the unnamed Mayor was succeeded by an unnamed female mayor. Mayor's Wife The Mayor's Wife (voiced by Kaitlyn Robrock) is a pompous woman who likes to flaunt her husband's status in front of others like Beverly Goodman (saying "My husband, the Mayor"). She is unaware of her husband's unfaithfulness and is unable to tell that Mr. Pickles is posing as her husband. She is horrified and publicly disgraced when the Mayor – actually Mr. Pickles – humps her pet dog during a town meeting. Warden The Warden (voiced by Will Carsola) runs the mental asylum Grandpa is wrongfully taken to. He is known to laugh maniacally in between sentences and deeply desires to lobotomize his patients. He issues a proposal to the Mayor to make lobotomies legal. Mr. Pickles, posing as the Mayor, approves his proposal before he is caught having sex with a dog and is promptly sent to the asylum. The Warden then prepares him for a lobotomy, but Mr. Pickles bursts out of the Mayor's skin at the last minute and kidnaps the Warden. The Warden is then taken to Mr. Pickles' lair, where he is given a lobotomy by one of the asylum patients (now turned into a frog-like creature after identifying as one). The Warden is seen in the next episode living as a prisoner in Mr. Pickles' lair, alive but clearly changed by his lobotomy. Ron Bolton Ron Bolton (voiced by Will Carsola) is a successful lawyer whom Grandpa hires in Grandpa's Night Out to help him claim infidelity so he can divorce Linda, by hiring several undercover seducers, all of whom Mr. Pickles disposes of by either killing or imprisoning them. As a last resort, Ron hires a hitman to kill Linda. Mr. Pickles kills the hitman and frames Ron for the murders of the hitman and the seducers. Baby Man Baby Man (voiced by Will Carsola) is an adult baby who appears in the first episode when Grandpa is taken to one of Mr. Pickles' fetish parties, where the dog proceeds to kill and dismember the party-goers (excluding the Baby Man). Mr. Pickles later fuses him with the body parts of the other fetishists (both dead and alive). Despite this, the Baby Man's playful attitude still remains, although he is often found aggressively demanding milk, and he appears from time to time in Mr. Pickles' lair.
Frog Man Frog Man is a former patient at Old Town Mental Asylum. He is first seen admitting himself into the asylum, claiming that he thinks he's a tomato. When he's in the asylum, he sees an episode of Astronaut Dolphin Detective and then identifies himself as a frog. Mr. Pickles later kidnaps him after threatening the Warden to release Grandpa. Using various dead body parts, Mr. Pickles sews the man into a giant frog-like body with little drills for fingers. The Frog Man is then used to perform a lobotomy on the Warden. Doc Walton Doc Walton (voiced by John Ennis) is a cross-eyed plastic surgeon who offers numerous cosmetic procedures (advertised as "jobs") from within his horse-drawn wagon. Tommy mistakes his "jobs" for employment and is given a breast enlargement as a result. Doc Walton is later kidnapped by Mr. Pickles and taken to his lair, where he is given a set of six large breasts. He is seen in the background of the second season inside Mr. Pickles' lair. Bigfoot Bigfoot (voiced by Sean Conroy) is a former criminal named Vito Pizzarelli who was one of the mob's most feared hitmen before he was caught by the police. They forced Vito to rat out the entire Gabagoolie criminal organization, and the witness protection program then put him through various surgeries so that he would resemble the eponymous legendary figure. Tommy and Mr. Pickles encounter Bigfoot in the woods, and he agrees to help bake a new Father's Day pie after he ate the previous one. Mr. Pickles also helps Bigfoot by killing the Gabagoolie criminal organization members who wished to exact their revenge when they discovered his identity after he was caught in the grocery store picking up ingredients. Bigfoot reappears in season two, now married to a doe (dubbed his "dear wife") who gave birth to his son, Dear Boy, a deer-human hybrid. Bigfoot gets Tommy to look after his son while he spends some alone time with his wife. Unfortunately, his wife is killed by some hunters. Bigfoot claims that in his past he would have sought revenge, but his fatherhood has now changed him. Bigfoot mourns the loss of his wife at a campfire, where he is confronted by the hunter who killed his wife. Mr. Pickles saves him by killing the hunter, and Bigfoot then asks him to look after his Dear Boy again while he spends time with his new lover, an owl. The Cannibal The Cannibal is a convicted serial killer and cannibal who is the main antagonist in the episode Serial Killers. He is considered too strange for "normal" serial killers and is prone to making cannibal-themed puns while licking his lips. He escaped police custody in the series opening when he broke out of his cuffs inside the police car before eating Rape-and-Kill Ronny; the prison bus then crashed in the resulting road collision, releasing the remaining criminals Ugly Face, Strangler, and Poison. The Cannibal befriends Grandpa with the intent of feeding on him and soon gets invited to have dinner with his family. The Sheriff mistakes the Cannibal for a gum thief when he misconstrues his puns and joins the dinner in preparation to arrest him, while the Cannibal plans to kill everyone. However, Mr. Pickles replaces his shiv with a stolen packet of gum, and the Cannibal is then arrested for stealing gum alongside the actual gum thief, who was mistaken for the cannibal. The Cannibal once again escapes custody, but is ritually killed by Mr. Pickles, who removes his heart before playing with it. Ugly Face Ugly Face (voiced by Jay Johnston) is a serial killer who escaped during a prison bus crash.
He tried to use an axe on a jogger, only to become an ideal love interest for Linda. This caused Ugly Face to flee, and he was later impaled on a wooden stake by Mr. Pickles. Strangler Strangler is a serial killer with oversized hands who escaped during a prison bus crash. He attempted to strangle the Sheriff to death, only to be killed by Mr. Pickles, who ripped out his throat. Poison Poison (voiced by Vivica A. Fox) is a serial killer who specializes in poisoning her victims. Due to a prison bus crash, she is among the criminals that escape. Poison tried to seduce Mr. Bojenkins and poison him, only for her head to be twisted 180 degrees by Mr. Pickles. Superhero Guy Superhero Guy (voiced by Will Carsola) is an ex-superhero whose superhero career was destroyed when his true identity as Phil Robertson was revealed by a baby pulling his mask off after he rescued a bunch of babies from a burning nursery that Crime Man blew up. This resulted in people calling him to do mundane tasks, which led him to become addicted to shövenpucker: the act of shoving super sour candy balls up his anus to get high. When the candy was banned, Superhero Guy started buying it from back-alley dealers and, after he became bankrupt, stealing steel to sell in order to feed his addiction. After Superhero Guy stole Tommy's leg braces, Tommy and Sidekick Boy attempt to help him by luring the now-elderly Superhero Guy into a rehab center using a piece of steel. Superhero Guy then stole the key to his secret base in hopes of selling the steel robot suit, but ended up using the suit to steal cars and a bridge to sell as metal. When Superhero Guy goes to the recycling plant, Mr. Pickles forces him into a large compacter but refrains from killing him when he notices Tommy's braces are inside, too. Superhero Guy then escapes, but ends up having his body fused into Crime Man's vehicle, which Mr. Pickles rides in the night. Sidekick Boy Sidekick Boy (voiced by Tom Kenny) is the sidekick of Superhero Guy. When Superhero Guy became addicted to shövenpucker after his identity was exposed, Sidekick Boy had to change the locks on their lair. Tommy later enlisted Sidekick Boy in an attempt to get Superhero Guy into rehab. After Mr. Pickles fuses Superhero Guy to Crime Man's car and drives off into the night, Sidekick Boy advises Tommy not to reveal his secret identity if he ever becomes a superhero. Crime Man Crime Man (voiced by Bob Bergen) is a supervillain that terrorizes Old Town. He was responsible for blowing up the nursery, which set off the events that led to Superhero Guy accidentally getting unmasked. In the present, Crime Man is still causing trouble. He is later killed by Mr. Pickles, and his car is fused to Superhero Guy. Dear Boy Dear Boy (voiced by Will Carsola) is Bigfoot's son, who resembles a talking deer with a human-like head, hands, and feet. Dear Boy is put under Tommy's protection when Tommy wants to prove that he can be responsible, but Dear Boy's uncontrollable nature causes them to have a run-in with a couple of hunters. Dear Boy then has his foot caught in a bear trap and begs Tommy to euthanize him with a gun. However, his father soon rescues him just after his mother is killed. Because he is part human, Dear Boy is immune to Mr. Pickles' mind-controlling powers. Jon Gabagooli Jon Gabagooli (voiced by Frank Vincent) is the boss of the Gabagooli criminal family. He attempted to kill Vito Pizzarelli (now "Bigfoot") when he was sighted at a grocery store. After Mr. Pickles killed the other Gabagoolies, Jon was imprisoned in Mr.
Pickles' lair and was made to help break into the grocery store to steal jumbo pickles. Jon managed to escape in season two, but was arrested by the Sheriff when he was caught trying to steal a car. Jon's daughter, Lisa Gabagoolie, frees him from jail by seducing the Sheriff and stealing his keys. Under the false belief that the Sheriff had sex with her, Jon attempts to kill him until Mr. Bojenkins and the Sheriff's mother come to the rescue. Jon and his daughter escape the town in a car, where they're confronted by what appears to be Mr. Pickles. Jon runs him over out of revenge, but discovers that it was actually a cardboard cut-out before the real Mr. Pickles rips his face off and shoots him in the mouth. Lisa Gabagoolie Lisa Gabagoolie (voiced by Carrie Keranen) is the daughter of Jon Gabagoolie. She seduces the Sheriff as part of her plan to free her father from jail. When the two of them try to escape the town in a car, they are attacked by Mr. Pickles. After her father is killed by Mr. Pickles, Lisa runs from him, leaving her fate unknown. Vegan Leader Vegan Leader (voiced by Rob Zombie) is the leader of a quasi-religious group of vegans. He resembles a priest with a large beard that is constantly stroked by a pair of beautiful women. His body is covered in tattoos of vegan slogans and vegetables, and his genitals have been completely removed. Beverly is invited to his headquarters and greatly upsets him with her misunderstanding of veganism (she believes one can both eat animal products and be vegan at the same time). He attempts to convert her to veganism by showing her a fake slaughterhouse video and locking her in a cage like an animal. Fearing that Beverly's influence will corrupt the other vegans, he decides to perform an ancient vegan ceremony from "hundreds and hundreds of months ago". He has Beverly tied, blindfolded, to a large V so that they can beat the 'ignorance' out of her with hammers. Beverly is saved when a non-vegan pizza she delivered earlier arrives, which causes the other vegans to give in. Mr. Pickles then arrives and kills the other vegans before they can taste the pizza, and then kills the leader by impaling his skull with a giant carrot. Bobby Bobby (voiced by Dave Stewart) is a criminal with dwarfism who pretends to be a child and whom Tommy befriends in Season 2. He dislikes being called a midget and often corrects people by telling them that the correct term is "Little Person". Though he ends up tricking Tommy into robbing a bank, he himself is later betrayed by his girlfriend Mary and caught by the Sheriff. He has an antagonistic relationship with the Sheriff and harasses him on the Sheriff's mother's birthday in Season 3. Mary Mary (voiced by Pamela Adlon) is the tall girlfriend of Bobby. When Bobby tricked Tommy into robbing a bank, Mary betrayed him, allowing Bobby to be caught by the Sheriff. Mr. Pickles later catches her and imprisons her in his lair. When she asks the Warden and Baby Man to get her out, the Warden asks for a Bloody Mary as Mr. Pickles cuts off her scalp. Principal Garcia Principal Garcia is the principal of Old Town Elementary School. He is massively muscled, heavily tattooed, and maintains a strict "no dogs" rule at the school, at first earning Mr. Pickles' ire. However, while spying on him intent on revenge, Mr. Pickles learns that Garcia is an ex-gang member who is now earnestly focused on providing a good education and a bright future to all his students, making Mr.
Pickles decide to save him when members of his former gang arrive to blackmail Garcia. Mr. Pickles
23696972
https://en.wikipedia.org/wiki/Alexandre%20Oliva
Alexandre Oliva
Alexandre "Alex" Oliva is a Brazilian free software activist, developer, former vice president of the board of directors of the Free Software Foundation (FSF) and founding member of Free Software Foundation Latin America (FSFLA). He has placed his PhD studies at the Institute of Computing of the State University of Campinas, Brazil, on hold while working as a compiler engineer at Red Hat, contributing to the GCC compiler. He is the maintainer of Linux-libre, a fork of the Linux kernel which removes non-free software components, such as binary blobs, from the kernel. The Linux-libre kernels are used in Linux distributions such as Parabola GNU/Linux-libre, gNewSense, and Trisquel, all of which are recommended by the Free Software Foundation and the GNU Project. In 2008, Oliva translated and produced "O Porco e a Caixa", a Brazilian Portuguese translation of "The Pig and the Box", a Creative Commons-licensed book that teaches the perils of DRM to children. Over 10,000 copies were eventually printed by the FISL conference in Porto Alegre, Brazil. He is an official GNU and Free Software Speaker. References External links Personal site Blog at FSFLA Brazilian activists Year of birth missing (living people) GNU people Free software programmers Linux people Living people Red Hat employees
5720321
https://en.wikipedia.org/wiki/Payroll%20service%20bureau
Payroll service bureau
A payroll service bureau is an accounting business whose main focus is the preparation of payroll for other businesses. In the USA such firms are often run by Certified Public Accountants, though a typical payroll processing company will refer to itself as a bureau rather than a CPA firm, to distinguish its payroll services from the general tax and accounting services that are generally not offered by a payroll bureau. The typical client of a bureau is a small business - one just large enough for payroll to be complicated to the point of a hassle, but one still small enough to not merit its own full-time payroll department. The tasks that can generally be expected of just about all payroll bureaus in the USA are as follows: Printing of employee pay checks on time for payday Direct deposit of pay into employee bank accounts, when desired Appropriate calculation and withholding of federal, state, and local taxes Calculation of payroll taxes to be paid by the employer (such as Social Security and Medicare in the US) Filing of quarterly and annual payroll reports Depositing of withheld amounts with tax authorities Printing and filing of year-end employee tax documents such as Form W-2. Additional services may be offered and vary from firm to firm. Management of retirement and savings plans Health benefits or "cafeteria" plans Timekeeping, either online or in the physical form of "time clocks" Producing export files containing payroll/general ledger data to be imported into a client's accounting software Human Resources (HR) tracking/reporting Workers' Compensation Insurance intermediary In the United States, it is usual and customary that any penalties or liabilities incurred by a bureau's mistakes are borne by the bureau. In practice, they are more successful at having penalties and other fees abated than most other businesses, mainly because tax authorities have a stake in the success and longevity of bureaus simply because they make the tax man's job easier. There are several ways a bureau can move money from the client to the people whom the client must pay. The simplest way is when a bureau prints checks on blank check stock, printing the client's account number in MICR digits at the bottom of the check, resulting in the funds being drawn directly from the client when the check is cashed. Other bureaus initiate automated clearing house (ACH) transactions from the client, and remit payment either electronically or in the form of paper checks against the bureau's holding account. Because payroll transactions can be enormous (thousands to hundreds of thousands per pay period per client), bureaus often consider the interest earned ("float") on those amounts in the interim to be a substantial source of revenue. The interim is the period of time between when the funds are collected from the employer (client), and either when the paper checks are cashed, or when electronic payments (in the case of taxes) become due on their due dates. Different types of bureaus In the United States, there are several nationwide chains for payroll processing. Then there are countless local bureaus which vary in scope and size. Because local bureaus tend to serve only a very narrow geographic area and often see themselves as competing together against the nationwide giants rather than one another, they often band together and form alliances and trade associations whereby they share expertise and consolidate their negotiating power with their vendors.
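The withholding and float mechanics described above can be made concrete with a short sketch. The following Python example is an illustration only: the flat tax rates, pay amounts, five-day holding period, and 3% annual rate used for the float estimate are invented for the example and do not reflect any real jurisdiction's tax tables or any actual bureau's terms; real payroll systems work from bracketed, annually updated tables.

from dataclasses import dataclass

# Illustrative flat rates only -- real payroll uses jurisdiction-specific,
# bracketed tables that change every year.
FEDERAL_RATE = 0.12
STATE_RATE = 0.04
SOCIAL_SECURITY_RATE = 0.062
MEDICARE_RATE = 0.0145

@dataclass
class Paycheck:
    gross: float
    withheld: float      # taken out of the employee's pay
    employer_tax: float  # paid by the employer on top of gross

    @property
    def net(self) -> float:
        return self.gross - self.withheld

def run_payroll(gross_pay):
    """Compute one employee's check using the illustrative flat rates."""
    employee_share = gross_pay * (FEDERAL_RATE + STATE_RATE
                                  + SOCIAL_SECURITY_RATE + MEDICARE_RATE)
    employer_share = gross_pay * (SOCIAL_SECURITY_RATE + MEDICARE_RATE)
    return Paycheck(gross_pay, round(employee_share, 2), round(employer_share, 2))

def float_revenue(total_collected, days_held, annual_rate=0.03):
    """Interest earned while collected funds sit before checks clear or taxes fall due."""
    return round(total_collected * annual_rate * days_held / 365, 2)

if __name__ == "__main__":
    checks = [run_payroll(g) for g in (1500.00, 2200.00, 980.00)]
    collected = sum(c.gross + c.employer_tax for c in checks)
    for c in checks:
        print(f"gross {c.gross:8.2f}  withheld {c.withheld:7.2f}  net {c.net:8.2f}")
    print("float on", round(collected, 2), "held 5 days:",
          float_revenue(collected, days_held=5))

Running the sketch prints each employee's net pay and a rough estimate of the interest earned while the collected funds sit in the bureau's holding account.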
There are specific and distinct perceived differences between having payroll processed by a national chain and a local bureau. National chains have experience under their belts, and have a standard set of well documented processes. Local bureaus vary in their ability to accommodate special needs, but they also add a level of service that can be lost in the national arena. Local bureaus have opportunities to excel at meeting specific local and national needs. This is also particularly true in regions whose needs are heavily influenced by a single industry. In addition to the national and local bureau options, there are also payroll providers who can accommodate any industry versus providers who maintain a very niched or specialized focus on a single industry. The existence of single-industry payroll bureaus is dependent on the payroll complexities of the industry itself and the desire or need to handle unique requirements. Companies should always perform their due diligence when selecting a bureau and compare traditional providers along with those who focus on their industry. It is common practice in the US bureau industry to purchase and resell private-labeled products and services from other vendors to enhance the offering. Specifically, products such as time clocks, ATM "pay" cards, savings or cafeteria plans, Workers' Compensation insurance and payroll data-entry web sites are actually the products of companies with expertise in these specific areas, and are generally branded so that the customer is unaware of the third party's influence. This type of arrangement is desirable to bureaus as it allows them to show a competitive portfolio of services that rivals that of the national chains. Businesses may decide to outsource their payroll functions to reduce the costs involved in having payroll-trained employees in-house as well as the costs of systems and software needed to process payroll. In the UK, payroll bureaus will deal with all HM Revenue & Customs enquiries and deal with employees' queries. Payroll bureaus also produce reports for the businesses' accounts department and payslips for the employees and can also make the payments to the employees if required. Some may decide that a smaller or local bureau can offer a more consistent and personal service than a large nationwide bureau that deals with several thousand payrolls can. Another reason many UK businesses outsource is the ever increasing complexity of payroll legislation. Annual changes in tax codes, PAYE and National Insurance bands, as well as more and more statutory payments and deductions having to go through the payroll, often mean there is a lot to keep abreast of in order to maintain compliance with the current legislation. Unlike in the USA, fines and penalties for compliance issues remain the responsibility of the employer, even if they have outsourced their payroll administration. Bureau software At the heart of every bureau is the computer system that performs the payroll calculations. Bureau software is created solely for payroll professionals and has one purpose: to mass-produce accurate payrolls for many companies simultaneously. There are only a handful of bureau software vendors in the United States, and the choice of computer system used by the bureau is often a deciding factor for their clients, as the software choice often determines what services the bureau can regularly render.
The software choice is also significant in that bureau trade associations are often formed based on the choice of software, as bureaus using the same software generally face the same challenges and offer the same services in the same manner. The common wisdom of "newer is better" that applies to most software does not apply in the bureau industry. Due to the complexity and potential for expensive errors that goes with moving clients from one system to another, it is very common for older software to be in just as much use as newer software. Clients generally have a low tolerance for errors, simply because payroll is a high-dollar affair that can become very expensive when mistakes are made - and this results in significant apprehensiveness on the part of the bureau to simply switch from one software package to another. A bureau software package that has been in use for ten years may be based on obsolete operating systems such as DOS and lack Internet functionality, but on the other hand, such software has the advantage of a proven history and a body of real-world experience using it. References Office and administrative support occupations Payroll
23656010
https://en.wikipedia.org/wiki/Leviathan%20%28audio%20drama%29
Leviathan (audio drama)
Leviathan is a Big Finish Productions audio drama based on the long-running British science fiction television series Doctor Who. Plot The Doctor and Peri arrive via TARDIS near a small medieval village and a nearby castle. They find a young man named Gurth being chased down by what appears to be Herne the Hunter and his dogs. To the Doctor's curiosity, the dogs appear to be robots. After evading Herne, Gurth explains that he was being hunted as his "time has come", and that he is no longer welcome in his village. He also tells them that he knows of no lands outside of the area that he can escape to. They encounter a group called the Pariahs, outcasts like Gurth who have evaded Herne and hidden away in a nearby cave system. The Doctor continues to suspect something is amiss when he is captured by the Duke who rules from the castle, finding that the Duke has no master other than a disembodied voice that he answers to, and that the castle is unlike any he has seen. The Doctor eventually escapes, reunites with Gurth and Peri, and explores more of the Pariahs' caves. They come to a metal hatch, and on the other side find a small control room, through which they see the vast expanse of space. The Doctor confirms his theory: they are aboard the Leviathan, a giant generational colony ship built by humans, which explains why those aboard know of no lands beyond their own. The Duke answers to the ship's automated computer system, which is trying to maintain the systems but failing, like much of the rest of the ship. The Doctor surmises that this was a ship belonging to the Sentinels of the New Dawn, a 22nd century force that used the Herne imagery. Outside of the ship, they see a much smaller ship staying in close proximity to the Leviathan but cannot contact it. When they leave the caves, they learn that all the adults of Gurth's village are rounding up the children to take to the castle. The Doctor and Peri help the children to escape before traveling to the castle themselves to help free the captured Pariahs. The Doctor finds numerous cloning tanks under the castle's foundation, and determines that the Sentinels had planned to keep cloning themselves until they arrived at their destination. However, the damage to the computer systems caused this plan to be forgotten, and instituted a process whereby, when a clone child got old enough to start questioning their existence, they were considered expendable and hunted down by the Herne while a new clone replaced them. All the adults of the villages, the Duke, the Herne, and the castle guards are robots doing the program's bidding. The recent change was an order issued from the other ship, whose crew have access to the Leviathan's computer systems. The Doctor is able to coerce the computer system into stopping its actions before any more bloodshed occurs. One of the crew of the smaller ship is able to transport onto the Leviathan, warning that his comrades are more interested in tearing down the Leviathan for scrap and are trying to kill the living humans to do so. The Doctor teleports over to the smaller ship and is able to stop them. With the Doctor's help, they arrange for Earth to send a rescue vehicle to evacuate the humans from the Leviathan. Cast The Doctor — Colin Baker Peri Brown — Nicola Bryant Gurth — Howard Gossington Herne the Hunter — John Banks Althya — Beth Chalmers Wulfric — Jamie Parker Siward — Derek Carlyle Notes This story, by Brian Finch, was adapted for audio by his son, Paul Finch. The script was initially written for season 22 of Doctor Who.
Paul has since written a prequel to Leviathan for the Companion Chronicles range. Featuring the Third Doctor, it is titled The Sentinels of the New Dawn. External links Big Finish Productions 2010 audio plays Sixth Doctor audio plays
290669
https://en.wikipedia.org/wiki/Cinelerra
Cinelerra
Cinelerra is a video editing and composition program (an NLE, Non-Linear Editor) designed for Linux. It is free software distributed under the open source GNU General Public License. In addition to editing, it supports advanced composition operations such as keying and mattes, including a title generator, many effects to edit video and audio, keyframe automation, and many other professional functions depending on the variant. It processes audio in 64-bit floating-point form. Video is processed in RGBA or YUVA color spaces, in 16-bit integer or floating-point form. It is resolution and image refresh rate independent. The GG variant supports up to 8K video, and can also create DVDs and Blu-rays. History In 1996 Adam Williams of Heroine Virtual, lead developer of Cinelerra, released a Unix audio editor called Broadcast 1.0 which could handle 2 GB audio files. In 1997 Broadcast 2.0 was released, still audio-only but with unlimited tracks. 1999 saw Broadcast2000, which included video. Because of UI limitations Williams rewrote significant parts and released that as Cinelerra on August 12, 2002, while Broadcast2000 was withdrawn by Heroine Virtual in September 2001. Cinelerra became the first 64-bit media production application when it was rewritten to work with the AMD Opteron processor in June 2003 and was presented at SIGGRAPH 2004 in San Diego. Since then, many versions have been released. The original version is still being produced by Adam Williams. There have been several spin-offs made by the open source community; Cinelerra-GG and Cinelerra-CVE (a fork of Cinelerra-CV) are presently under active development. For a complete overview of versions, see the Variants section below. Even though the different variants look the same, there are considerable functional differences between them; an overview of the variants that released more than one version is given in the Variants section. Interface Cinelerra's interface is similar to that of other Non-linear editing systems, such as Adobe Premiere Pro, Final Cut Pro, and Avid Media Composer. However, because it includes a compositing engine, it may also be likened to compositing software such as Adobe After Effects, Smoke on Linux or Shake. The user is by default presented with four windows: the timeline, which gives the user a time-based view of all video and audio tracks in the project, as well as keyframe data for e.g. camera movement, effects, or opacity; the viewer, which gives the user a method of "scrubbing" (manually moving the playhead forwards or backwards to locate a specific cue or word) through footage; the compositor, which presents the user with a view of the final project as it would look when rendered, is interactive in that it allows the user to adjust the positions of video objects, and updates in response to user input; and the resource window, which presents the user with a view of all audio and video resources in the project, as well as available audio and video effects and transitions. Cinelerra uses its own widget toolkit Guicast (the Cinelerra GUI library), not conforming to the human interface guidelines of major Linux desktops such as GNOME and KDE. This has the advantage that it looks and works the same no matter which distribution or desktop is used, and removes the dependency on a changing version of the desktop (for instance GNOME 2 / GNOME 3). Usage and awards Cinelerra has gained ground among some Linux enthusiasts looking for a native video editing system.
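The reason an editor keeps intermediates at 16-bit or floating-point precision, as described above, is that chained effects do not accumulate rounding damage the way they would in an 8-bit integer pipeline. The following Python sketch is a generic illustration of that idea rather than Cinelerra's actual rendering code; the gain values and sample pixel levels are arbitrary.

def apply_chain_8bit(value, gains):
    """Quantize to 0-255 after every effect, as a pure 8-bit pipeline would."""
    for g in gains:
        value = max(0, min(255, round(value * g)))
    return value

def apply_chain_float(value, gains):
    """Keep full precision through the chain; clamp only once at the end."""
    for g in gains:
        value *= g
    return max(0.0, min(255.0, value))

if __name__ == "__main__":
    # Darken heavily, then brighten back up -- a common grading round trip.
    chain = [0.1, 10.0]
    for pixel in (7, 23, 200):
        eight_bit = apply_chain_8bit(pixel, chain)
        floating = round(apply_chain_float(float(pixel), chain))
        print(f"input {pixel:3d} -> 8-bit pipeline {eight_bit:3d}, float pipeline {floating:3d}")

For dark pixels the 8-bit pipeline loses detail at the darkening step and cannot recover it when brightened again, while the floating-point pipeline returns the original values; this is the banding and clipping problem that high-precision intermediates avoid.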
Professional use was mostly promoted by Linux Media Arts, which sold an integrated hardware and software package for video production that included Cinelerra. However, the company does not seem to be active in the Cinelerra field anymore. At the National Association of Broadcasters' 2004 Electronic Media Show, Cinelerra received Bob Turner's "Making the Cut" award, given to "the best and most exciting post-production products seen at the convention". In December 2018 Libre Graphics World included Cinelerra in its comparison of the sustainability of video editors for Linux. Cinelerra.org The cinelerra.org website was originally registered by a member of the Cinelerra-CV community, Richard Baverstock, on 10 January 2004. Around January 2014 the Cinelerra-CV community overlooked the renewal of cinelerra.org. The domain was then taken over by a different project managed by Michael Collins, one of the founders of Cinelerra. The project followed commercial interests, aiming to offer professional support to its users. It was organized to merge all existing Cinelerra projects while also providing additional fixes and enhancements. Since early 2015, Cinelerra.org has had an open Git repository on Google Code for analysis and for input; however, that platform has been read-only since 2015-08-24. At the present time, this repo does not contain source code. The project released a studio-centric version 5.0 of Cinelerra. As of 2016, the goal of Cinelerra.org was to add more professional value to the product. In January 2016, William Morrow ("Good Guy"), the main developer of the project behind cinelerra.org, left cinelerra.org, continuing to work on Cinelerra 5.0 and then on Cinelerra-GG 5.1 with help from the Cinelerra-CV community. At the present time, Cinelerra.org supports Cinelerra-HV work. Its website links in the download section to both the HV and GG versions. Variants Cinelerra-HV Heroine Virtual (HV), the producer of the original Cinelerra, generates a new release of Cinelerra annually, available as source code and a binary for Ubuntu. Although it is open source, the source is only made available as a complete download for each release. Intermediate access to source files is not possible. HV has used SourceForge since the beginning (first source 2001-09-09), but does not react to bugs, patches and feature requests on that platform. Bugs and usability issues found and resolved by the community and submitted to Heroine Virtual often receive no immediate response, and it is not until a new release that there is any indication that Heroine Virtual has incorporated these changes. To distinguish between the different variants of the software, the releases made by Heroine Virtual are also called Cinelerra-HV. Cinelerra-CV / Cinelerra-CVE Because of both the latency in development and the distribution-specific nature of the original Cinelerra from Heroine Virtual, a group of software developers created their own version of Cinelerra referred to as Cinelerra-CV (where CV stands for community version). Cinelerra-CV allows the community to contribute to an open repository where changes to the code are accessible to everyone. Mailing lists exist where more experienced users and developers can provide support to less experienced users, and developers can hold technical discussions. Cinelerra-CV is also packaged for a wider range of distributions.
It also has a different compilation system: system libraries are used extensively, and the autoconf/automake tools are used to configure the build. Although Cinelerra-CV may technically be called a fork, the relationship between Heroine Virtual and Cinelerra-CV is rather friendly. Heroine Virtual at times contributes to discussions on the mailing lists, and incorporates many of the changes made in the repository. Heroine Virtual has also posted a message on its website describing the relationship. Up until Cinelerra 2.1 the versioning of Cinelerra-CV followed that of Heroine Virtual. After Heroine Virtual released a new version, Cinelerra-CV merged relevant code from the new HV variant into their variant. CV was appended to the end of the version number to indicate the community version. (For example, after the 2.1 merger the CV version was labeled 2.1CV.) Starting with release 2.2, Cinelerra-CV uses its own versioning scheme, but still merges code from Cinelerra-HV. Following 26 June 2019, the official web pages were taken offline and the URL now redirects to the website for Cinelerra-GG. The new official site of Cinelerra-CV was published on 1 July 2020. The source code of Cinelerra-CV is available from the new official repo on GitHub. A new official Cinelerra-CV mailing list is also available. The Cinelerra-CV mailing list from 2001 is archived, and the complete collection of old Cinelerra-CV mailing list archives is referenced and linked from the new official Cinelerra-CV site. Cinelerra-CVE is a Cinelerra-CV fork created for experimenting with Cinelerra's code. The fork was created by the main Cinelerra-CV developer (2012-2018) Einar Rünkaru in June 2008 and published in the middle of March 2010. The repository of Cinelerra-CVE is available on GitHub and shows frequent updates. More information about Cinelerra-CVE is available on the new official Cinelerra-CV site. Lumiera In the beginning of April 2008, the Cinelerra community announced a complete rewrite of the current community version, named Lumiera. It was born as a rewrite of the Cinelerra codebase called Cinelerra3 but soon was separated into an independent project with its own name. There is no usable application as of March 2019. The project remains in a pre-alpha status of development with a yearly development news update (Lumiera news). It has build instructions, and there is a binary Debian/Ubuntu build of the development preview available. Lumiera's native interface is written in GTK+, although other interfaces will be possible. Cinelerra-GG Infinity Cinelerra-GG, a separate variant of Cinelerra by William Morrow and Phyllis Smith, started with merged code from Cinelerra-HV and Cinelerra-CV but quickly grew. It was first developed within Cinelerra.org (Cinelerra 4.6-mod, Cinelerra 5.0), then within the site of Cinelerra-cv.org (Cinelerra 5.0, Cinelerra 5.1, Cinelerra GG 5.1), and since December 2018 with its own website cinelerra-gg.org as Cinelerra-GG Infinity. William died in November 2020 during a bicycle ride, but Phyllis continues to work on Cinelerra-GG. A new developer is now making frequent updates. An important issue is that Cinelerra-GG reduces reliance on system libraries by including them where practical, like ffmpeg and OpenEXR. This makes it more predictable on different platforms, and also allows it to pick up new versions before the platform does.
Cinelerra-GG aims to get as close as possible to what can be expected from professional-level video editing software (NLE) for the Linux platform. Its software features include support for recent versions of ffmpeg, extensive color correction tools, Ultra HD up to 8K, more than 400 video and audio effects, two interfaces for audio plug-ins (LADSPA, and LV2 such as Calf Studio Gear), multiple denoisers and motion stabilizers, multi-camera editing, proxies, smart-folder media filtering, 8, 10 and 12 bit color spaces, advanced trim, live preview of resources, shared tracks, group edits, horizontal and/or vertical timeline split, rendering pre-configuration options, and the ability to save workspace layouts. It supports over 400 video/picture formats for decoding, and over 140 for encoding, including Apple ProRes, AV1, and WEBP. It has a "Sketcher" plug-in for free-hand drawing, supports creating HD Blu-rays and DVDs, and includes some OpenCV plugins like FindObj. It allows nested clips, and clip sharing between projects ("file-by-reference"). Its hardware support covers the ShuttlePRO V.2 and ShuttleXpress jog-wheels from Contour Design, multiple monitors, HiDPI, and hardware-accelerated decoding/encoding via VAAPI/VDPAU/CUDA. Like the other Cinelerra variants (except Lumiera) Cinelerra-GG uses its own GUI. It has eleven GUI themes to cater to user preferences. The GG variant is under active development, with regular stable releases. It is supplied as a 64- or 32-bit AppImage for Linux. The source code is available as a (manual) monthly download or from its git repository. Before 2021, it was supplied as a multi-user program pre-packaged for eight different Linux distributions (Ubuntu, Debian, Arch, OpenSuse, Slackware, Fedora, Centos, Mint) and FreeBSD. When the applicable repository was added to a distribution's update manager, the monthly updates would appear automatically. In addition, there were single-user builds for the eight Linux distributions plus Gentoo, as tar files. All those builds are available in 64-bit; for Debian 9, Slackware and Ubuntu 14 there are also 32-bit single-user builds. As a proof of concept, the 2020-01 release included a Windows version with limited functionality; for details see the manual's chapter 1. In addition to the GG variant's monthly releases, it is also available in two Linux variants dedicated to multimedia: AVLinux and Bodhi Linux Media. It is also included in DeLinuxCo, and in lightweight Elive, of which the 32-bit version is usable on older computers. Cinelerra-GG communicates with its users and developers through three platforms: its forum (user-oriented), a bug tracker (feature requests, bugs, roadmap), and a mailing list (developer discussions). Each monthly release has a significant number of changes resulting from discussions and exchanges of information on these platforms. Cinelerra-GG has an extensive, actively maintained manual in both PDF and HTML form, which is also accessible from within the Cinelerra-GG program. The manual is helpful for both beginners (e.g. Quickstart section) and professionals. There is also a YouTube channel with tutorials. The differences between the GG and other Cinelerra variants can be found in the "Differences between Cinelerra versions" page listed under External links.
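As a rough illustration of the proxy workflow mentioned above, the following Python sketch drives the stock ffmpeg command-line tool to write a low-resolution proxy next to each source clip. It is a generic example of how editing proxies are commonly produced, not Cinelerra-GG's own implementation; the 960-pixel width, CRF value, and file naming are arbitrary choices for the example, and ffmpeg must be installed and on the PATH.

import subprocess
from pathlib import Path

def make_proxy(source, width=960, crf=28):
    """Create a smaller H.264 proxy file alongside the source clip."""
    src = Path(source)
    proxy = src.with_name(src.stem + ".proxy.mp4")
    cmd = [
        "ffmpeg", "-y",
        "-i", str(src),
        # Scale to the requested width; height -2 keeps the aspect ratio
        # and rounds to an even number, which H.264 requires.
        "-vf", f"scale={width}:-2",
        "-c:v", "libx264", "-crf", str(crf), "-preset", "veryfast",
        "-c:a", "aac",
        str(proxy),
    ]
    subprocess.run(cmd, check=True)
    return proxy

if __name__ == "__main__":
    # Example: build proxies for every .mov clip in the current directory.
    for clip in Path(".").glob("*.mov"):
        print("proxy written:", make_proxy(clip))

The full-resolution originals are left untouched; an editor cuts against the lightweight proxies and switches back to the originals for the final render.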
See also List of video editing software Comparison of video editing software References Further reading External links Differences between Cinelerra versions Cinelerra.org website managed by Michael Collins Cinelerra-HV website Cinelerra-HV source (2019) and bug tracker Cinelerra-HV manual "Secrets of Cinelerra" (2009) Cinelerra-CV website Cinelerra-CV source Cinelerra-CV manual Cinelerra-CV mailing list Cinelerra-GG website Cinelerra-GG source (2020) Cinelerra-GG manual (2020) Free video software Linux-only free software Video software Linux audio video-related software Video editing software Video editing software that uses GTK Software that uses FFmpeg
2389527
https://en.wikipedia.org/wiki/DOS%20XL
DOS XL
DOS XL is a discontinued Disk Operating System (DOS) written by Paul Laughton, Mark Rose, Bill Wilkinson, and Mike Peters and published by Optimized Systems Software (OSS) for the Atari 8-bit family of home computers. It was designed to be compatible with Atari DOS which came shipped with Atari, Inc.'s disk drives, which had also been written by the same team. Description Features A direct descendant of OS/A+, DOS XL provided additional features to Atari's equipped with floppy disk drives. These included single and double density support, a command-line mode (called the command processor or CP), a menu mode (an executable loaded in memory), batch file support, and support for XL extended memory and OSS SuperCartridge banked memory. Later versions included Axlon RamDisk support, Mosaic RamDisk support, BIT-3 support and BUG/65. In addition to supporting auto-booting AUTORUN.SYS files, DOS XL's batch features provided an auto-booting batch feature. Naming a batch file to STARTUP.EXC would have it execute batch commands on startup (same as OS/A+). Unfortunately, this feature wasn't compatible with some programs (e.g. AtariWriter). Distribution DOS XL was distributed on a flippy disk. One side had the single-density version, the other had the double-density version. As more features and add-ons were included, these were placed on the double-density side only due to lack of disk space. The manual for DOS XL was a subset of OS/A+. OSS considered the manual an "addendum" to OS/A+. Over 150+ pages, it was bound at the spine, not in loose-leaf form like the OS/A+ manual. DOS XL came in two versions, 2.20 and 2.30 (2.20 users had to pay $20 to upgrade to 2.30). The last version was 2.30P. DOS XL originally sold for $30, but the price later increased to $39. File writing verify was turned off in DOS XL. This was due to OSS's own experience that resulted in faster writes with virtually no risk of errors. The command file VERIFY.COM was included to reenable writes with verify. Atari DOS 2.0S by default verified file writes to disk. Several disk drive manufacturers shipped DOS XL with their drives: Trak, Percom, Astra, Indus, Amdek, and Rana. Commands Menu Commands C - Copy Files P - Protect Files D - Duplicate Disk Q - Quit to DOS XL E - Erase Files R - Rename File F - Files on Disk S - Save Binary G - Go to Address T - To Cartridge I - Initialize Disk U - Unprotect Files L - Load Binary X - Xtended Command Intrinsic Commands @ - Begins execution of a batch file CAR - Runs Cartridge Dn: - Changes default drive DIR - Directory END - Tells DOS XL to stop batch execution (used in a batch file) ERA - Erase file(s) LOA - Load file(s) in memory NOS - NO Screen. Turns off command echo to screen PRO - Protect. Enables write-protect on file(s) REM - REMark. Used for user remarks in batch files REN - REName. Renames file(s) RUN - Jumps to run address SAV - Saves a portion of memory to disk SCR - Enables commands to be echoed to screen (for batch files) TYP - Displays ASCII or ATASCII text files to screen UNP - Turns off write-protect on file(s) Extrinsic Commands C65 CLRDSK CONFIG COPY DO DUPDBL DUPDSK INIT INITDBL MAC65 MENU NOVERIFY RS232 RS232FIX SDCOPY VERIFY BASIC Extended Commands CLOSE ENTER GET INPUT LIST LOAD NOTE OPEN POINT PRINT PUT SAVE PROTECT (XIO 35) UNPROTECT (XIO 36) RENAME (XIO 32) ERASE (XIO 33) Patch Version 2.30p was a newer version to 2.30 to fix two problems. 
In previous versions of DOS XL, if you initialized a disk from the menu, the disk would not boot unless the file MENU.COM was on the disk. To alleviate this problem, type Q to quit the menu. Then type INIT or INITDBL from the command processor. Note: everything on the menu can be done manually from the command processor. If you have a multidrive system and you initialized a disk in a drive other than one, when booted, the disk will always come up with the number of the drive on which it was initialized. To prevent this problem, use D1: as the destination drive. The patch for DOS XL 2.30 to make it a 2.30p:
OSS Disk Newsletter Fall 1986
Product News & Info
DOS XL - New-found Bugs
DOS XL Bugs and Fixes
BUG: The patch to convert version 2.30 to version 2.30p in our Spring 1984 newsletter didn't work.
FIX: Run the following program and then use INIT with the "Write DOS.SYS Only" option to write out the patched DOS. Make sure that you don't have DOSXL.SYS (either .SUP or .XL version) active when you do this.
100 READ CNT:IF CNT=0 THEN END
110 READ START
120 FOR ADDR=START TO START+CNT-1
130 READ BYTE:POKE ADDR,BYTE
135 NEXT ADDR
140 GOTO 100
150 DATA 3,5481,32,1,21
160 DATA 29,5377,141,217,22,169,16
170 DATA 141,23,22,169,23,141,24,22
180 DATA 169,49,141,30,22,169,64,141
190 DATA 12,0,169,21,141,13,0,96
200 DATA 1,7425,112,0
BUG: INIT does not work if you use drive numbers 4 through 8.
FIX: Run the following program:
10 OPEN #1,12,0,"D:INIT.COM"
20 FOR I=1 TO 116 : GET #1,C : NEXT I
30 PUT #1,ASC("9") : CLOSE #1
Legacy Plans for DOS XL 4 were scrapped due to low demand and competition. OSS reissued OS/A+ 4.1, mainly for double-sided support. DOS XL, along with other OSS products, became part of ICD's catalog of Atari products in January 1988. However, DOS XL was dropped in favor of ICD's SpartaDOS. References Inside Atari DOS Introduction by Bill Wilkinson, 1982 A User's Guide and Reference Manual for DOS XL 2.30, 1983 OSS Newsletter - Spring 1984 OSS Newsletter - October 1984 External links Antic Vol.4 No.3 Everything You Wanted To Know About Every DOS Optimized Systems Software Atari 8-bit family software Atari operating systems Disk operating systems
11525694
https://en.wikipedia.org/wiki/World%20Information%20Technology%20and%20Services%20Alliance
World Information Technology and Services Alliance
The World Information Technology and Services Alliance (WITSA) is a consortium of associations from the information and communications technology (ICT) industry around the world. The group claims that it represents over 90% of the world ICT market through its global membership, and has a goal of advancing the growth and development of the ICT industry. WITSA was founded in 1978 as the World Computing Services Industry Association, and participates in advocacy in international public policy that affects the "global information infrastructure". It voices the concerns of the international IT industry in organisations such as the World Trade Organization, the Organisation for Economic Co-operation and Development, and the G8. WITSA represents IT industry associations in over 84 countries or economies. WITSA's motto is "Fulfilling the Promise of the Digital Age". On 18 November 2020, WITSA elected its board of directors and chairman for the 2020-2022 term; Yannis Sirros, a member of the Greek Federation of Information Technology and Communication Enterprises (SEPE), was elected as the new chairman. World Congress on IT (WCIT) 2021, Dhaka, Bangladesh 2020, Penang, Malaysia 2019, Yerevan, Armenia 2018, Hyderabad, India 2017, Taipei, Taiwan 2016, Brasilia, Brazil 2014, Guadalajara, Mexico 2012, Montreal, Canada 2010, Amsterdam, The Netherlands 2008, Kuala Lumpur, Malaysia 2006, Austin, Texas, USA 2004, Athens, Greece 2002, Adelaide, Australia 2000, Taipei, Taiwan 1998, Fairfax, Virginia, USA 1996, Bilbao, Spain 1994, Yokohama, Japan 1992, London, England 1990, Washington D.C., USA 1988, Paris, France 1986, Toronto, Canada 1984, Tokyo, Japan 1982, Copenhagen, Denmark 1980, San Francisco, California, USA 1978, Barcelona, Spain For additional information regarding the WCIT, see the "WCIT - A Proud History" document [PDF] Other programs and events WITSA Global ICT Excellence Awards The WITSA Global ICT Excellence Awards honor achievements in the application of information technology around the globe. Winners have exhibited excellence in one of four categories: Public Sector Excellence, Private Sector Excellence, Sustainable Growth and Digital Opportunity. A Chairman's Award is presented to a nominee selected from the entire pool of candidates from all awards categories. The award ceremonies are a signature event of the proceedings at the World Congress on Information Technology. Global Policy and Trade Summit (GPATS) WITSA's Global Policy and Trade Summit (GPATS) debates public policy issues including electronic commerce, privacy, education, and ICT in developing countries. The conferences are held every two years, on alternate years from the World Congress on Information Technology. The following is a list of past events: Buenos Aires, Argentina (1999) Kuala Lumpur, Malaysia (2005) Cairo, Egypt (2007) Hamilton, Bermuda (2009) Guadalajara, Mexico (2011) São Paulo, Brazil (November 12–13, 2013) See also Information and communications technology WCIT 2019 Health information technology Information technology References WITSA Elects New Leadership for the 2020-22 Term: Selects Mr.
Yannis Sirros as its First Chairman from Europe https://witsa.org/witsa-elects-new-leadership-for-the-2020-22-term-selects-mr-yannis-sirros-as-its-first-chairman-from-europe/] [Greek Scientist Elected Chairman of Global IT Flagship Organization https://greece.greekreporter.com/2020/11/18/greek-scientist-appointed-chairman-of-global-it-flagship-organization/] External links Official website of WITSA WCIT INDIA 2018 WITSA Global ICT awards Winners of the 2006 Global IT Excellence awards Photos WITSA white papers WITSA presentations WITSA newsletter 2010 World Congress on IT; Amsterdam, The Netherlands, 2010 2008 World Congress on IT; Kuala Lumpur, Malaysia, May 2008 Organizations established in 1978 International professional associations Information technology lobbying organizations International organizations based in the United States 1978 establishments in the United States
2628581
https://en.wikipedia.org/wiki/Open%20Enterprise%20Server
Open Enterprise Server
Open Enterprise Server (OES) is a server operating system published by Novell in March 2005 to succeed their NetWare product. Unlike NetWare, Novell OES is a Linux distribution—specifically, one based on SUSE Linux Enterprise Server. The first major release of Open Enterprise Server (OES 1) could run either with a Linux kernel (with a NetWare compatibility layer) or Novell's NetWare kernel (with a Linux compatibility layer). Novell discontinued the NetWare kernel prior to the release of OES 2, but NetWare 6.5 SP7, and later SP8, can run as a paravirtualized guest inside the Xen hypervisor (officially supported until 7 March 2012, with Novell self-support until 7 March 2015).

OES 1 and OES 2
Novell released OES 1, the first version of OES, on 25 March 2005. Since some users wanted backward compatibility with NetWare, Novell offered two installation options: OES-NetWare and OES-Linux. These are two different operating systems with different kernels and different userlands. OES-NetWare is NetWare v6.5 equipped with NetWare Loadable Modules for various Novell services (such as NetWare Core Protocol, Novell eDirectory, Novell Storage Services, and iPrint) and open-source software (such as OpenSSH, Apache Tomcat, and the Apache HTTP Server). OES-Linux is based on the SUSE Linux Enterprise Server (SLES) with added NetWare services ported to the Linux kernel: e.g. the NetWare Core Protocol, Novell eDirectory, Novell Storage Services, and iPrint.

Novell released OES 2, the second version of OES, on 12 October 2007. It was the first SLES-Linux-kernel-only OES, but it retained the OES-NetWare operating system option, as NetWare 6.5 SP7 can run as a paravirtualized guest inside the Xen hypervisor. The SLES base of OES 2 was updated to SLES 10 SP1. Features introduced in OES 2 include:
32-bit or 64-bit system, supporting both 64-bit and 32-bit applications
Hardware virtualization
Dynamic Storage Technology, which provides Novell Shadow Volumes
Windows domain services (from OES 2 SP1)
Apple Filing Protocol (AFP) with AFP–NCP–Samba cross-protocol locking (CPL) (from OES 2 SP1)

See also (based on different OES editions):
Novell Open Workgroup Suite
Novell Open Workgroup Suite Small Business Edition

OES 11
OES 11 was released on 12 December 2011, based on SLES 11 SP1 64-bit (since OES 2 there has been no NetWare kernel). This is the first version of OES to be 64-bit (x86_64) only. NetWare 6.5 SP8 could still be run as a 32-bit-only paravirtualized guest inside the Xen hypervisor. OES 11:
Introduces the Novell Kanaka for Mac client
Uses the Zypper tool to patch up to 100 times faster than OES 2
Adds automated/unattended upgrades from OES 2
New Novell Linux Volume Manager (NLVM) provides easier storage management

Novell released a service pack, OES 11 SP1, on 28 August 2012. Novell released a service pack, OES 11 SP2, on 28 January 2014. Novell released a service pack, OES 11 SP3, on 26 July 2016.

OES 2015
OES 2015 was released on 31 August 2015; it added new features and improved performance. OES 2015 SP1 was released on 14 June 2016.

OES 2018
OES 2018 was released on 28 November 2017, based on SLES 12 SP2. OES 2018 SP1 was released on 7 February 2019, based on SLES 12 SP3. OES 2018 SP2 was released on 5 May 2020, based on SLES 12 SP5. OES 2018 SP3 was released on 15 July 2021, based on SLES 12 SP5.

Release summary
OES, released on 25 March 2005, based on NetWare 6.5 SP3 and SLES 9 SP1.
OES SP1, released in September 2005, based on NetWare 6.5 SP4 and SLES 9 SP2.
OES SP2, released in January 2006, based on NetWare 6.5 SP5 and SLES 9 SP3.
OES 2, released on 12 October 2007, based on NetWare 6.5 SP7 and SLES 10 SP1.
OES 2 SP1, released on 1 December 2008, based on NetWare 6.5 SP8 and SLES 10 SP2.
OES 2 SP2, released on 11 November 2009, based on SLES 10 SP3.
OES 2 SP3, released on 7 November 2014, based on SLES 10 SP3 and SLES 10 SP4 (as of April 15, 2011).
OES 11, released on 12 December 2011, based on SLES 11 SP1 (64-bit only).
OES 11 SP1, released on 28 August 2012, based on SLES 11 SP2.
OES 11 SP2, released on 28 January 2014, based on SLES 11 SP3.
OES 11 SP3, released on 26 July 2016, based on SLES 11 SP4.
OES 2015, released on 31 August 2015, based on SLES 11 SP3.
OES 2015 SP1, released on 14 June 2016, based on SLES 11 SP4.
OES 2018, released on 28 November 2017, based on SLES 12 SP2.
OES 2018 SP1, released on 7 February 2019, based on SLES 12 SP3.
OES 2018 SP2, released on 5 May 2020, based on SLES 12 SP5.
OES 2018 SP3, released on 15 July 2021, based on SLES 12 SP5.

End-of-support schedule

Components
Automatic Client Upgrade (ACU) - automates the upgrade of Novell client software on existing workstations

See also
SUSE Linux
SUSE Linux Enterprise Server

References

Further reading

External links
Open Enterprise Server product page

2005 software
NetWare
Novell operating systems
Proprietary software
X86-64 Linux distributions
Linux distributions
32097
https://en.wikipedia.org/wiki/University%20of%20Utah
University of Utah
The University of Utah (U of U, UofU, or simply The U) is a public research university in Salt Lake City, Utah. It is the flagship institution of the Utah System of Higher Education. The university was established in 1850 as the University of Deseret () by the General Assembly of the provisional State of Deseret, making it Utah's oldest institution of higher education. It received its current name in 1892, four years before Utah attained statehood, and moved to its current location in 1900. As of Fall 2019, there were 24,485 undergraduate students and 8,333 graduate students, for an enrollment total of 32,818, making it the second largest public university in the state after Utah Valley University. Graduate studies include the S.J. Quinney College of Law and the School of Medicine, Utah's first medical school. It is a member of the Association of American Universities (AAU) and is classified among "R1: Doctoral Universities – Very high research activity". According to the National Science Foundation, the university received $552 million in research and development funding in 2018, ranking it 45th in the nation. In addition, the university's Honors College has been reviewed among 100 leading national Honors Colleges in the U.S. The university's health care system includes four hospitals, including the University of Utah Hospital and Huntsman Cancer Institute, along with twelve community clinics and specialty centers such as the Moran Eye Center. The university's athletic teams, the Utes, participate in NCAA Division I athletics (FBS for football) as a member of the Pac-12 Conference. Twenty-two Rhodes Scholars, four Nobel Prize winners, three Turing Award winners, eight MacArthur Fellows, various Pulitzer Prize winners, two astronauts, Gates Cambridge Scholars, and Churchill Scholars have been affiliated with the university as students, researchers, or faculty members in its history. History Soon after the Mormon pioneers arrived in the Salt Lake valley in 1847, Brigham Young began organizing a Board of Regents to establish a university. The university was established on February 28, 1850, as the University of Deseret by the General Assembly of the provisional State of Deseret, and Orson Spencer was appointed as the first chancellor of the university. Early classes were held in private homes or wherever space could be found. The university closed in 1853 due to lack of funds and lack of feeder schools. Following years of intermittent classes in the Salt Lake City Council House, the university began to be re-established in 1867 under the direction of David O. Calder, who was followed by John R. Park in 1869. The university moved out of the council house into the Union Academy building in 1876 and into Union Square in 1884. In 1892, the school's name was changed to the University of Utah, and John R. Park began arranging to obtain land belonging to the U.S. Army's Fort Douglas on the east bench of the Salt Lake Valley, where the university moved permanently in 1900. Additional Fort Douglas land has been granted to the university over the years, and the fort was officially closed on October 26, 1991. Upon his death in 1900, Dr. John R. Park bequeathed his entire fortune to the university. The university grew rapidly in the early 20th century but was involved in an academic freedom controversy in 1915 when Joseph T. Kingsbury recommended that five faculty members be dismissed after a graduation speaker made a speech critical of Utah governor William Spry. 
One third of the faculty resigned in protest of these dismissals. Some felt that the dismissals were a result of The Church of Jesus Christ of Latter-day Saints' influence on the university, while others felt that they reflected a more general pattern of repressing religious and political expression that might be deemed offensive. The controversy was largely resolved when Kingsbury resigned in 1916, but university operations were again interrupted by World War I, and later The Great Depression and World War II. Student enrollment dropped to a low of 3,418 during the last year of World War II, but A. Ray Olpin made substantial additions to campus following the war, and enrollment reached 12,000 by the time he retired in 1964. Growth continued in the following decades as the university developed into a research center for fields such as computer science and medicine. During the 2002 Winter Olympics, the university hosted the Olympic Village, a housing complex for the Olympic and Paralympic athletes, as well as the opening and closing ceremonies. Multiple large improvements were made to the university prior to the events, including extensive renovations to the Rice-Eccles Stadium, a light rail line leading to downtown Salt Lake City, a new student center known as the Heritage Center, an array of new student housing, and what is now a 180-room campus hotel and conference center. The University of Utah Asia Campus opened as an international branch campus in the Incheon Global Campus in Songdo, Incheon, South Korea in 2014. Three other European and American universities are also participating. The Asia Campus was funded by the South Korean government. In 2015, the university helped open the Ensign College of Public Health in Kpong, Ghana. In 2019, the university was named a member of the Association of American Universities. Campus Campus takes up , including the Health Sciences complex, Research Park, and Fort Douglas. It is located on the east bench of the Salt Lake Valley, close to the Wasatch Range and approximately 2 miles east of downtown Salt Lake City. Most courses take place on the west side of campus, known as lower campus due to its lower elevation. Presidents Circle is a loop of buildings named after past university presidents with a courtyard in the center. Major libraries on lower campus include the J. Willard Marriott Library and the S.J. Quinney Law Library. The primary student activity center is the A. Ray Olpin University Union, and campus fitness centers include the Health, Physical Education, and Recreation Complex (HPER) and the Eccles Student Life Center. Lower campus is also home to most public venues, such as the Rice-Eccles Stadium, the Jon M. Huntsman Center, and the Utah Museum of Fine Arts, a museum with rotating exhibitions and a permanent collection of American, European, African, and Asian art. Venues for performing arts include Kingsbury Hall, used for touring companies and concerts, Pioneer Memorial Theatre, used by the professional Pioneer Theatre Company, David P. Gardner Hall, used by the School of Music and for musical performances, and the Marriott Center for Dance. Red Butte Garden, with formal gardens and natural areas, as well as the new site of the Utah Museum of Natural History, is located on the far east side of campus. The health sciences complex, at the northeast end of campus, includes the University of Utah Medical Center, Primary Children's Medical Center, the Huntsman Cancer Institute, the Moran Eye Center, and the Spencer S. 
Eccles Health Sciences Library. South of the health sciences complex, several university residence halls and apartments are clustered together near Fort Douglas and the Heritage Center, which serves as a student center and cafeteria for this area. In addition, there are 1,115 university apartments for students, staff, and faculty across three apartment complexes on campus. At the southeast end of campus is Research Park, which is home to research companies including ARUP Laboratories, Evans & Sutherland, Sarcos, Biofire Diagnostics, and Myriad Genetics. Courses are also held at off-campus centers located in St. George and Sandy. In July 2017, the Academic Senate bestowed the designation of tobacco-free campus on the university, but rules were not enforced until 2018. The rule prohibits students and faculty from "smoking or using chewing tobacco, electronic cigarettes and all other recreational nicotine-delivery products on any property owned, leased or controlled by the University of Utah." Student residences The University of Utah provides student housing in a 34-building housing complex on campus. The complex consists of nine housing areas: Chapel Glen, Gateway Heights, Sage Point, Officer's Circle, Benchmark Plaza, Shoreline Ridge, the Donna Garff Marriott Honors Residential Scholars Community (MHC for short), the Lassonde Studios, and Kahlert Village. The MHC is a dormitory strictly for honors students and was completed in fall 2012. Built in 2016, the Lassonde Studios is part of the Lassonde Entrepreneur Institute and houses 400 students; the studios also feature a "creative garage" with 3D printers and spaces for startups. Kahlert Village, completed August 2020, houses 990 first-year students. Transportation A number of campus shuttles, running on biodiesel and used vegetable oil, circle the campus on six different routes. The Utah Transit Authority (UTA) runs several buses through the university area as well as the TRAX Red Line (light rail), which runs to South Jordan. Riders can travel downtown, to FrontRunner (commuter rail), to West Valley, to the Salt Lake City International Airport, or to Draper by transferring to the TRAX Green or Blue lines. Students and staff can use their university IDs to ride UTA buses, TRAX, and FrontRunner. In 2012, the university unveiled a new plan for a more conducive campus for bicyclers called the "Bicycle Master Plan" which aims to transform the campus into a safer and more accessible place for bicyclers and to promote the increase of bicycle ridership. The plan emphasizes both campus pathways and on-street facilities that connect the core campus area with surrounding neighborhoods. The Bicycle Master Plan gives guidelines for facilities and programs that are within the University's jurisdiction. It also provides recommendations for the University to work with external entities such as UDOT, UTA, and Salt Lake City to improve bicycling conditions in locations that are important to the campus environment, but which are not under the University's direct control. Sustainability Sustainability efforts include a permanent sustainability office, a campus cogeneration plant, building upgrades and energy efficient building standards, behavior modification programs, purchasing local produce for campus dining, a farmers market, and student groups, as well as a branch of the Salt Lake City Bicycle Collective. Sustainability and transportation are also a large part of the university's campus master plan. 
The Sustainable Endowments Institute gave the university a "B+" in its College Sustainability Report Card 2011, with A's for climate change and energy, food and recycling, student involvement, and transportation. The expanded recycling program launched on July 1, 2007. Since its launch, the program has continued to grow and refine its procedures to better accommodate a growing campus' needs. Currently there are programs in place for paper, cardboard, aluminum, batteries, glass, printer cartridges, wooden pallets and plastics #1 and #2. Renewable energy The university is ranked 8th by the EPA for annual green power usage among universities, with 49% of its power coming from geothermal and solar sources. The university houses 10 solar array systems, including a 330-kilowatt system on the rooftop of the Natural History Museum of Utah and a 262-kilowatt system at the HPER East building. The combined arrays annually produce 1,096,340 kilowatt hours, and are supported by a student fee sustainability program established in 2008. On November 1, 2019, the university entered into a renewable energy partnership with Rocky Mountain Power and Cyrq Energy which allows the purchase of 20 megawatts of geothermal energy for 25 years. The contract offsets half of the electricity produced by the university and reduces the university's carbon emissions by 23%. Organization The University of Utah is governed by a 10-member Board of Trustees, 8 of whom are appointed by the Governor of Utah with the consent of the Utah Senate. The President of the University of Utah Alumni Association serves as the 9th member, and the President of the Associated Students of the University of Utah (ASUU) serves as the 10th member. The 8 appointed members serve for four-year terms, four expiring on June 30 of each odd-numbered year. The two ex officio members serve for the terms of their respective offices. The University of Utah and the other public colleges and universities of the Utah System of Higher Education are governed by the Utah Board of Higher Education (previously the Utah State Board of Regents) whose chief executive officer is the Commissioner of Higher Education. The chief executive officer of the University of Utah is the president, who reports to the Board of Higher Education and, with the approval of the trustees, submits budgets, tuition adjustments, and academic program plans, appoints faculty, and develops policy initiatives. Subject to the Board of Trustees, the university faculty have authority to legislate on matters of educational policy via the Academic Senate. The Senate is composed of 100 faculty members proportionally representing and elected by their respective colleges, 2 elected deans, and 18 students from the ASUU, one from each college and the ASUU president. The Senate also includes the University President, Senior Vice President for Academic Affairs, Senior Vice President for Health Sciences, and all non-elected deans as ex officio members who may debate and present motions but do not vote. Much of the actual Senate work is carried out by 12 Senate-elected committees which work on the central academic issues of the institution. The committees report to the full Senate and the Senate often acts on their proposals as well as on issues brought to its attention by the administration. 
As of 2014, the university's revenues totaled $3.6 billion, of which 44% came from patient care, 19% from sales and services, 10% from grants and contracts, 8.2% from tuition and fees, 7.7% from state appropriations, 3.4% from auxiliary enterprises, and the rest from other revenues. Football income was $56 million in 2017–2018.

Academics and rankings
The University of Utah is a public flagship four-year research university accredited through the Northwest Commission on Colleges and Universities since 1933. The U organizes its 150 academic departments and programs into 17 colleges and schools. The School for Cultural and Social Transformation is the university's newest college, with its first graduating class in 2018.
College of Architecture and Planning
David Eccles School of Business
School for Cultural and Social Transformation
School of Dentistry
College of Education
College of Engineering
College of Fine Arts
College of Health
University of Utah Honors College
College of Humanities
S.J. Quinney College of Law
College of Mines and Earth Sciences
College of Nursing
College of Pharmacy
College of Science
College of Social and Behavioral Science
College of Social Work

The University operates on a semester calendar with the rest of the Utah higher education system. Undergraduate tuition and fees for 2015–2016 were $8,240 for Utah residents (about 325% of the cost of tuition and fees in 2000, which were $2,534 for 13 credit hours per semester over 2 semesters), and $26,180 for non-residents per 12-credit-hour semester.

Admissions and demographics
For the Class of 2023 (enrolling Fall 2019), Utah received 24,404 applications and accepted 15,159 (62%), with 4,249 enrolling. The middle 50% range of SAT scores for enrolling freshmen was 572.5-680 for evidence-based reading and writing, 570-700 for math, and 1150-1370 for the composite. The middle 50% ACT score range was 22-28 for math, 21-31 for English, and 22-29 for the composite. The average high school grade point average (GPA) was 3.66. The university uses a holistic admissions process and weighs ACT/SAT standardized test scores, GPA, grade trend, rigorous AP/IB/Honors classes taken in high school, and academic achievements, along with other "personal achievements and characteristics". In Fall 2015, the undergraduate and graduate student body was 31,551, with 23,794 undergraduate students and 7,757 graduate students; 73% of students were full-time, 56% were male and 44% female, and 82% were Utah residents. The undergraduate student body was 69% white, 11% Hispanic, 6% non-resident alien, 5% Asian, 4% two or more races, 1% Native Hawaiian or Pacific Islander, 1% black, and 1% Native American. Ethnicity or citizenship was unknown for 2%.

Notable programs

Ballet
The Department of Ballet offers the top-ranked ballet and ballroom dance program in the United States and is one of the oldest and most reputable university ballet departments in the country. The department was founded in 1951 by William F. Christensen, who also founded the San Francisco Ballet and Ballet West companies.

Biology
The university has made unique contributions to the study of genetics due in part to long-term genealogy efforts of the LDS Church, which have allowed researchers to trace genetic disorders through several generations. The relative homogeneity of Utah's population also makes it an ideal laboratory for studies of population genetics. The university is home to the Genetic Science Learning Center, a resource which educates the public about genetics through its website.
Computer Science The University of Utah was one of the original four nodes of ARPANET, the world's first packet-switched network and embryo of the current worldwide Internet. The School of Computing produced many of the early pioneers in computer science and graphics, including Turing Award winner Alan Kay, Pixar founder Ed Catmull, Atari founder Nolan Bushnell, and Adobe founder John Warnock. Notable innovations of computer science faculty and alumni include the first method for representing surface textures in graphical images, the Gouraud shading model, magnetic ink printing technology, the Johnson counter logic circuit, the oldest algebraic mathematics package still in use (REDUCE), the Phong reflection model, the Phong shading method, and the rendering equation. Through the movement of Utah graduates and faculty, research at the University spread outward to laboratories like Xerox Parc, JPL, and the New York Institute of Technology. Present graphics research is focused on biomedical applications for visualization, scientific computing, and image analysis at the Scientific Computing and Imaging Institute. Dentistry In March 2012, the university received unanimous approval from the board of trustees to create a new academic college, the School of Dentistry, which is the university's first new college in sixty years. The new school has received funding for a new structure and has started as a debt-free program. The new school enrolled its first students for the fall semester of 2013 and averages the same cost as the university's medical school tuition. Law The S.J. Quinney College of Law, founded in 1913, was the only law school in Utah until the 1970s. Medicine The University of Utah has the only accredited allopathic medical school in the State of Utah. The medical school has made several notable contributions to medicine, such as establishing the first Cerebrovascular Disease Unit west of the Mississippi River in 1970 and administering the world's first permanent artificial heart, the Jarvik-7, to Barney Clark in 1982. Pharmacology The University of Utah College of Pharmacy is 4th in the nation for NIH research grants. The department of Pharmacology and Toxicology within the School of Pharmacy is world-renowned for research in epilepsy treatment with their Anticonvulsant Drug Development (ADD) program. Political Science The university is host to the Neal A. Maxwell Lecture Series in Political Theory and Contemporary Politics, a forum for political theorists to share their newest theoretical work, and is home to the Hinckley Institute of Politics, which places more than 350 students every year in local, state, national, and global internships. Turkish Studies Program The university's Turkish Studies Program, funded by Turkish Coalition of America and headed by M. Hakan Yavuz, has been criticized for promoting Armenian genocide denial. Professor Keith David Watenpaugh charges the program with "promoting the falsification of history through its grants and political advocacy... the University of Utah has provided an institutional home to genocide denial." Athletics The university has 9 men's and 11 women's varsity teams. Athletic teams include men's baseball, basketball, football, golf, hockey, lacrosse, skiing, swimming/diving, and tennis and women's basketball, cross country, gymnastics, skiing, soccer, softball, swimming/diving, tennis, track and field, and volleyball. 
The school's sports teams are called the Utes, though some teams have an additional nickname, such as "Runnin' Utes" for the men's basketball team. The university participates in the NCAA's Division I (FBS for football) as part of the Pac-12 Conference. When they were in the same conference, there was a fierce BYU–Utah rivalry, and the Utah–BYU football game, traditionally the season finale, has been called the "Holy War" by national broadcasting commentators. The university fight song is "Utah Man", commonly played at athletic games and other university events. In 1996, Swoop was introduced as the new mascot of the University of Utah, adopted because of the university's relationship with the local Ute Indians. While still known as the Utes, Utah is now represented by the red-tailed hawk, a bird known for the use of its tail feathers in Ute head-dresses and said to reflect "the soaring spirit of our state and school". In 2002, the university was one of 20 schools to make the U.S. News & World Report College Sports Honor Roll. In 2005, Utah became the first school to produce No. 1 overall draft picks in both the NFL draft and NBA draft for the same year. Alex Smith was picked first overall by the San Francisco 49ers in the 2005 NFL Draft, and Andrew Bogut was picked first overall by the Milwaukee Bucks in the 2005 NBA Draft. The university has won thirteen NCAA Skiing Championships, most recently in 2019, as well as the 1977 AIAW National Women's Skiing Championship.

Men's basketball
The men's basketball team won the NCAA title in 1944 and the NIT crown in 1947. Arnie Ferrin, the only four-time All-American in Utah basketball history, played for both the 1944 and 1947 teams. He also went on to help the Minneapolis Lakers win NBA Championships in 1949 and 1951. Wat Misaka, the first person of Asian descent to play in the NBA, also played for Utah during this era. Utah basketball rose again to national prominence when head coach Rick Majerus took his team, including guard Andre Miller, combo forward Hanno Möttölä, and post player Michael Doleac, to the NCAA Final Four in 1998. After eliminating North Carolina to advance to the final round, Utah lost the championship game to Kentucky, 78–69.

Football
In 2004–2005, the football team, coached by Urban Meyer and quarterbacked by Alex Smith, along with defensive great Eric Weddle, went 11–0 during the regular season and defeated Pittsburgh 35–7 in the 2005 Fiesta Bowl, becoming the first team from a conference without an automatic Bowl Championship Series (BCS) bid to go to a BCS bowl game. The team ended its perfect 12–0 season ranked 4th in AP polling. 2008–2009 was another undefeated year for the football team, coached by Kyle Whittingham, as they finished the season 13–0 and defeated Alabama 31–17 in the 2009 Sugar Bowl. Utah finished the season 2nd in AP polling, their highest rank ever. At the end of the season, the Utes were the only unbeaten team in the country, with the nation's longest active streak of bowl victories (8). The Utah Utes moved to the Pac-12 Conference for the start of the 2011–2012 football season. They are in the South Division with University of Colorado, University of Arizona, Arizona State University, UCLA and University of Southern California. Their first game in the Pac-12 was at USC on September 10, 2011, and resulted in a 23–14 Utah loss.

Gymnastics
The women's gymnastics team, coached by Megan Marsden, has won ten national championships, including the 1981 AIAW championship, and placed 2nd nationally eight times.
As of 2013, it has qualified for the NCAA championship every year since 1976, the only program to do so. The program has averaged over 11,000 fans per meet 1992–2010 and has been the NCAA gymnastics season attendance champions 16 of these 19 years. In 2010, there was an average of 14,213 fans per meet, the largest crowd being 15,030. Marching band The university marching band, known as the "Pride of Utah", perform at all home football games, as well as some away games and bowl games. They performed at the 2005 BCS Tostitos Fiesta Bowl, the 2009 BCS Allstate Sugar Bowl, and the Inaugural Parade of President Barack Obama. The band began as a military band in the 1940s. In 1948, university president A. Ray Olpin recruited Ron Gregory from Ohio State University to form a collegiate marching band. Support for the band dwindled in the 60s, and ASUU (the Associated Students of the University of Utah) discontinued its funding in 1969. The band was revived in 1976 after a fund raising effort under the direction of Gregg I. Hanson. As of 2011, the band is under the direction of Dr. Brian Sproul. Student life Close to 50% of freshmen live on campus, but most students choose to live elsewhere after their first year, with 13% of all undergraduates living on campus. The university is located in a large metropolitan area, but many students live in the neighborhoods immediately surrounding the university. An additional 1,115 family apartments are available to students, staff, and faculty. One of the university's primary four goals for long-term campus growth is to increase student engagement through the addition of on-campus housing, intramural fields, athletic centers, and a new student activity center. The current student activity center, the A. Ray Olpin University Union, is a common gathering place for university-wide events such as Crimson Nights, roughly monthly student activity nights; PlazaFest, a fair for campus groups at the start of the school year; and the Grand Kerfuffle, a concert at the end of the school year. The building includes a cafeteria, computer lab, recreational facilities, and a ballroom for special events. The Union also houses the Lowell Bennion Community Service Center, CESA (Center for Ethnic Student Affairs) which provides an inclusive space for students and houses various advising programs of the Office of Equity and Diversity, the Union Programming Council which is in charge of promoting student life on campus through events like Crimson Nights, and ASUU (the Associated Students of the University of Utah), which is responsible for appropriating funds to student groups and organizations on campus. ASUU holds primary and general elections each year for student representatives, typically with 10–15% of the student population voting. Due to the large number of LDS Church members at the university, there is an LDS Institute of Religion building near main campus, as well as several LDS student groups and 46 campus wards. Approximately 650 students are part of 6 sororities and 8 fraternities at the university, most of which have chapter houses on "Greek Row" just off campus. The University of Utah has a dry campus, meaning that alcohol is banned on campus. In 2004, Utah became the first state with a law expressly permitting concealed weapons on public university campuses. The University of Utah tried to uphold its gun ban but the Utah Supreme Court rejected the ban in 2006. 
Media
The university has several public broadcasting affiliations, many of which utilize the Eccles Broadcast Center. These stations include PBS Utah (formerly branded as KUED channel 7), a PBS member station and producer of local documentaries; KUEN channel 9, an educational station for teachers and students from the Utah Education Network; KUER 90.1 FM, a public radio affiliate of National Public Radio, American Public Media, and Public Radio International; and K-UTE 1620. NewsBreak is the student-run television newscast on campus. During 2011, the program celebrated its 40th anniversary. Broadcasts air every Thursday night at 10 pm during the fall and spring semesters on KUEN. The Daily Utah Chronicle, also referred to as the Chrony, has been the university's independent, student-run paper since 1890. It publishes daily on school days during fall and spring semesters and weekly during summer semester. The paper typically runs between eight and twelve pages, with longer editions for weekend game guides. The paper converted to a broadsheet format in 2003 when the Newspaper Agency Corporation began printing it. The Society of Professional Journalists selected the newspaper as one of three finalists for best all-around daily student newspaper in the nation in both 2007 and 2008. Staff from the Chronicle feed into Utah journalism circles, some of them rising to considerable prominence, such as former editor Matt Canham, whose work with The Salt Lake Tribune earned him the Don Baker Investigative Reporting Award from the Utah Chapter of the Society of Professional Journalists. The University of Utah Press, the oldest press in Utah and now part of the J. Willard Marriott Library, publishes books on topics including the outdoors, anthropology and archaeology, linguistics, creative nonfiction, Mesoamerica, Native American studies, and Utah, Mormon, and Western history. The Wallace Stegner Prize in American Environmental or Western History is presented annually by the press. Its Utah Series in Middle East Studies has been criticized for "specializing" in "methodologically flawed accounts" of the Armenian genocide that seek to reject the term genocide as being applicable to the event, and includes works by Guenter Lewy, Justin McCarthy, and Yücel Güçlü. The university is also home to a national literary journal, Quarterly West.

Notable alumni and faculty
Notable alumni include politicians Rocky Anderson, Bob Bennett, Marsha K. Caddle, Merrill Cook, E. Jake Garn, Jon Huntsman, Jr., Karen Morgan, Frank E. Moss, Joshua Rush, and Karl Rove; recent LDS Church presidents Gordon B. Hinckley and Thomas S. Monson; historian and Pulitzer Prize for History laureate Laurel Thatcher Ulrich; authors Orson Scott Card, Stephen Covey, Shannon Hale, Everett Errol Murdock, Terry Tempest Williams, and Wallace Stegner; R Adams Cowley, William DeVries, Russell M. Nelson, and Robert Jarvik in medicine; historian Richard Foltz; educators Gordon Gee, Jonathan Westover, and Ann Weaver Hart; reporter Martha Raddatz; writer and canoeist Neal Moore; and speed-reading innovator Evelyn Nielsen Wood. Notable science and engineering alumni include Jim Blinn; Mark W. Fuller, CEO of WET Design; Jim Clark, founder of Silicon Graphics, Netscape Communications Corporation, myCFO, and Healtheon; Gretchen W. McClain, former NASA Deputy Associate Administrator of Human Space Exploration and Chief Director of the International Space Station; Henri Gouraud; John C. Cook, who played a crucial role in establishing the field of ground-penetrating radar; Ralph Hartley; rocket scientist Joseph Majdalani; Alan Kay; Simon Ramo; and John Warnock, co-founder of Adobe Systems. Notable entrepreneur and business leader alumni include Alan Ashton, co-founder of WordPerfect and Thanksgiving Point; freestyle skier Tom Wallisch; Nolan Bushnell, founder of Atari and Chuck E. Cheese; Ed Catmull, co-founder of Pixar; J. Willard Marriott, founder of Marriott International; Robert A. "Bob" McDonald, CEO of Procter & Gamble; David Neeleman, founder of JetBlue; and Telle Whitney, CEO and President of the Anita Borg Institute. In athletics, notable alumni include baseball player Chris Shelton; basketball players Andrew Bogut, Kyle Kuzma, Andre Miller, and Keith Van Horn; football players Paul Kruger, Star Lotulelei, Jamal Anderson, Kevin Dyson, Eric Weddle, Alex Smith, and Steve Smith Sr.; hall of fame karate grandmaster Dan Hausel; and football coach LaVell Edwards. Notable members of the athletics faculty include Sharrieff Shah, coach of the University of Utah football team and husband to Jen Shah, cast member of the Real Housewives of Salt Lake City. Notable alumni also include serial killer Ted Bundy, who briefly attended the College of Law before dropping out. Drag queen Denali Foxx graduated from the University of Utah. Notable faculty in science and engineering include David Evans and Ivan Sutherland, founders of Evans and Sutherland; Bui Tuong Phong, pioneer of computer graphics; Henry Eyring, known for studying chemical reaction rates; Stephen Jacobsen, founder of Sarcos; Jindřich Kopeček and Sung Wan Kim, pioneers of polymeric drug delivery and gene delivery; Suhas Patil, founder of Cirrus Logic; Stanley Pons, who claimed to have discovered "cold fusion" in 1989; Venkatraman Ramakrishnan, later co-winner of the 2009 Nobel Prize in Chemistry; Thomas Stockham, founder of Soundstream; and David W. Grainger, Chair of the Department of Biomedical Engineering, winner of the "Excellence in Pharmaceutics" Award, and alumnus. In medicine, notable faculty include Mario Capecchi, the co-winner of the 2007 Nobel Prize in Physiology or Medicine; Willem Johan Kolff; and Russell M. Nelson. Biologist Ralph Vary Chamberlin, founding dean of the Medical School, professor, and later historian of the University, was also an alumnus.

References

External links
The University of Utah Athletics website
University of Utah

Utah, University of
Flagship universities in the United States
Economy of Salt Lake City
Education in Salt Lake City
Universities and colleges accredited by the Northwest Commission on Colleges and Universities
Buildings and structures in Salt Lake City
Universities and colleges in Salt Lake County, Utah
Tourist attractions in Salt Lake City
1850 establishments in Utah Territory
5143530
https://en.wikipedia.org/wiki/IEEE%201471
IEEE 1471
IEEE 1471 is a superseded IEEE standard for describing the architecture of a "software-intensive system", also known as software architecture. In 2011 it was superseded by ISO/IEC/IEEE 42010, Systems and software engineering — Architecture description.

Overview
IEEE 1471 is the short name for a standard formally known as ANSI/IEEE 1471-2000, Recommended Practice for Architecture Description of Software-Intensive Systems. Within Institute of Electrical and Electronics Engineers (IEEE) parlance, this is a "recommended practice", the least normative of its standards. In 2007 this standard was adopted by ISO/IEC JTC1/SC7 as ISO/IEC 42010:2007, Systems and Software Engineering -- Recommended practice for architectural description of software-intensive systems.

It has long been recognized that "architecture" has a strong influence over the life cycle of a system. However, until relatively recently, hardware issues tended to dominate architectural thinking, and software aspects, when considered at all, were often the first to be compromised under the pressures of development. IEEE 1471 was created to provide a basis for thinking about the architecture of software-intensive systems. IEEE 1471's contributions can be summarised as follows (several of the items below use terms defined by and used in the standard):
It provides definitions and a meta-model for the description of architecture.
It states that an architecture should address a system's stakeholders' concerns.
It asserts that architecture descriptions are inherently multi-view: no single view adequately captures all stakeholder concerns.
It specifies the notions of view and viewpoint, where a viewpoint identifies the set of concerns and the representations/modeling techniques, etc. used to describe the architecture to address those concerns, and a view is the result of applying a viewpoint to a particular system.
It establishes content requirements for architecture descriptions and the idea that a conforming architecture description has a 1-to-1 correspondence between its viewpoints and its views.
It provides guidance for capturing architecture rationale and identifying inconsistencies/unresolved issues between the views within an architecture description.
IEEE 1471 provides informative annexes that relate its concepts to architecture concepts in other standards, including RM-ODP and IEEE 12207.

History
In August 1995, the IEEE Software Engineering Standards Committee (SESC) chartered an IEEE Architecture Planning Group (APG) to set direction for incorporating architectural thinking into IEEE standards. In April 1996, the Architecture Working Group (AWG) was created to implement the recommendations made by APG to the SESC. The AWG was chaired by Basil Sherlund, with Ronald Wade and David Emery as vice-chairs; the specification was edited by Rich Hilliard. The AWG had 25 members. Drafts of the specification were balloted and commented on by 130 international reviewers. In September 2000, the IEEE-SA Standards Board approved the specification as IEEE Std 1471-2000. In 2006, ISO/IEC Joint Technical Committee 1 (JTC1), Information technology/Subcommittee SC 7, Software and systems engineering, adopted the specification as ISO/IEC 42010, under a special "fast-track procedure", in parallel with its approval by national bodies of ISO and IEC. A coordinated revision of this standard by ISO/IEC JTC1/SC7/WG42 and IEEE CS commenced in 2006, following the successful ISO/IEC fast-track ballot and in line with the IEEE standard 5-year review of the standard.
In November 2011, IEEE 1471-2000 and ISO/IEC 42010:2007 were superseded by ISO/IEC/IEEE 42010:2011, Systems and software engineering — Architecture description.

Purpose
According to IEEE 1471, an architecture description can be used for the following:
Expression of the system and its evolution
Communication among the system stakeholders
Evaluation and comparison of architectures in a consistent manner
Planning, managing, and executing the activities of system development
Expression of the persistent characteristics and supporting principles of a system to guide acceptable change
Verification of a system implementation's compliance with an architectural description
Recording contributions to the body of knowledge of software-intensive systems architecture

Terminology
According to the IEEE Standard Glossary of Software Engineering Terminology, the following definitions are used:
architect: The person, team, or organization responsible for designing systems architecture.
architectural description (AD): A collection of products to document an architecture.
architecture: The fundamental organization of a system embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution.
designing: The activities of defining, documenting, maintaining, improving, and certifying proper implementation of an architecture.
system: A collection of components organized to accomplish a specific function or set of functions. The term system encompasses individual applications, systems in the traditional sense, subsystems, systems of systems, product lines, product families, whole enterprises, and other aggregations of interest.
system stakeholder: An individual, team, or organization (or classes thereof) with interests in, or concerns relative to, a system.
view: A representation of a whole system from the perspective of a related set of concerns.
viewpoint: A specification of the conventions for constructing and using a view. A pattern or template from which to develop individual views by establishing the purposes and audience for a view and the techniques for its creation and analysis.

Conceptual framework
IEEE 1471 uses the following conceptual framework. A system's environment, or context, can influence that system. The environment can include other systems that interact with the system of interest, either directly via interfaces or indirectly in other ways. The environment determines the boundaries that define the scope of the system of interest relative to other systems. A system has one or more stakeholders. Each stakeholder typically has interests in, or concerns relative to, that system. Concerns are those interests which pertain to the system's development, its operation or any other aspects that are critical or otherwise important to one or more stakeholders. Concerns include system considerations such as performance, reliability, security, distribution, and evolvability. A system exists to fulfill one or more missions in its environment. A mission is a use or operation for which a system is intended by one or more stakeholders to meet some set of objectives. Every system has an architecture, whether understood or not, whether recorded or conceptual. An architecture can be recorded by an architectural description. An architectural description is organized into one or more constituents called (architectural) views. Each view addresses one or more of the concerns of the system stakeholders.
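The relationships introduced so far (stakeholders with concerns, and an architecture description organized into views), together with the view/viewpoint pairing elaborated below, can be illustrated with a small sketch. The standard defines only a conceptual meta-model, not a data format or an API, so the Python classes and names below are purely illustrative assumptions rather than anything prescribed by IEEE 1471:

from dataclasses import dataclass
from typing import List

@dataclass
class Stakeholder:
    name: str
    concerns: List[str]              # e.g. "performance", "security", "evolvability"

@dataclass
class Viewpoint:
    name: str                        # e.g. "deployment" (illustrative, not from the standard)
    concerns_addressed: List[str]    # the stakeholder concerns this viewpoint covers
    notations: List[str]             # languages / model kinds used to express views

@dataclass
class View:
    viewpoint: Viewpoint             # a view conforms to exactly one viewpoint
    models: List[str]                # the architectural models making up the view

@dataclass
class ArchitectureDescription:
    system: str
    stakeholders: List[Stakeholder]
    views: List[View]                # one view per selected viewpoint

# A toy architecture description with one stakeholder and one view
ops = Stakeholder("operations team", ["reliability", "performance"])
deployment_vp = Viewpoint("deployment", ["reliability", "performance"],
                          ["UML deployment diagram"])
ad = ArchitectureDescription("billing system", [ops],
                             [View(deployment_vp, ["production topology model"])])

In this sketch the 1-to-1 correspondence between selected viewpoints and views is left implicit; a conforming architecture description would also record rationale and any known inconsistencies, as described in the conformance requirements below.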
A view is a partial expression of a system’s architecture with respect to a particular viewpoint. A viewpoint establishes the conventions by which a view is created, depicted and analyzed. In this way, a view conforms to a viewpoint. The viewpoint determines the languages (including notations, model, or product types) to be used to describe the view, and any associated modeling methods or analysis techniques to be applied to these representations of the view. These languages and techniques are used to yield results relevant to the concerns addressed by the viewpoint. An architectural description selects one or more viewpoints for use. The selection of viewpoints is typically based on consideration of the stakeholders to whom the AD is addressed and their concerns. A viewpoint definition may originate with an AD, or it may have been defined elsewhere (a library viewpoint). A view may consist of one or more architectural models. Each such architectural model is developed using the methods established by its associated architectural viewpoint. An architectural model may participate in more than one view. Conformance IEEE 1471 defines a set of normative requirements for conforming architecture descriptions, including the following: AD identification, version, and overview information (clause 5.1) Identification of the system stakeholders and their concerns judged to be relevant to the architecture (clause 5.2) Specifications of each viewpoint that has been selected to organize the representation of the architecture and the rationale for those selections (clause 5.3) One or more architectural views (clause 5.4) A record of all known inconsistencies among the architectural description’s required constituents (clause 5.5) A rationale for selection of the architecture (clause 5.6) See also Software architecture views Enterprise Architecture Framework View model References External links IEEE 1471 website MEGAF is an infrastructure for realizing architecture frameworks that conform to the definition of architecture framework provided in the ISO/IEC 42010 standard. IEEE standards Software architecture
27731754
https://en.wikipedia.org/wiki/LanSlide%20Gaming%20PCs
LanSlide Gaming PCs
LanSlide Gaming PCs is a privately owned, internet-based gaming computer company located in Schenectady, New York. Specializing in high-end portable gaming desktops, the company bases all of its computers around portable designs and markets them towards LAN party enthusiasts.

History
LanSlide Gaming PCs, LLC was founded in 2005 by a group of gamers tired of moving large gaming desktops to LAN parties. The company focuses on computers designed to be portable and sells a wide range of desktop gaming computers, all of which are built into cases with carrying handles for ease of transport. Each computer comes with a special backpack to hold everything needed to run a desktop computer, with the exception of the computer case. In addition to portable computers, LanSlide Gaming PCs also sells a line of computers designed for 3D gaming.

Products
Small Form Factor Computers:
Earthquake - Entry-level gaming desktop
Avalanche - Mid-range gaming desktop
Armageddon - High-end gaming desktop
Mid-Tower Computers:
Monsoon - Entry-level gaming desktop
Cyclone - Mid-range gaming desktop
Hurricane - High-end gaming desktop
3D Gaming Computers:
Sandstorm - Mid-range 3D gaming system
Whirlwind - High-end 3D gaming system

Free, Public Computer Support Service
In August 2010, LanSlide Gaming PCs launched the "Absolutely Free Computer Help Page", which provides free computer support to the general public, regardless of their status as a customer of the company. The service allows users to submit gaming-computer-related questions and get answers from live people within approximately 48 hours. While the service is focused on questions about building, buying, and fixing gaming computers, it is completely open-ended, allowing users to ask any computer-related question and get an answer.

PC Pack
LanSlide Gaming PCs holds patent-pending status on a special backpack intended to make it easier to move a gaming set-up in one trip and avoid dropping or damaging desktop components during transit. The backpack is designed to hold and protect up to a 22″ widescreen monitor, an extra-long gaming keyboard, mouse, headphones, surge protector, cables, and other miscellaneous paraphernalia needed to run a desktop computer.

Noteworthy Public Appearances
PAX East 2010: Mitchell Shuster, one of the founders of the company, appeared on a panel called "The Future of PC Gaming" at the first annual PAX East convention, alongside John Abercrombie, lead programmer of Irrational Games, and Joe Kreiner of Terminal Reality, where he spoke on the future prospects of the gaming PC industry.

See also
List of Computer System Manufacturers

References

External links
Official LanSlide Gaming PCs Website

Home computer hardware companies
Computer companies of the United States
Companies established in 2005
30402
https://en.wikipedia.org/wiki/Theory%20of%20computation
Theory of computation
In theoretical computer science and mathematics, the theory of computation is the branch that deals with what problems can be solved on a model of computation, using an algorithm, how efficiently they can be solved or to what degree (e.g., approximate solutions versus precise ones). The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?". In order to perform a rigorous study of computation, computer scientists work with a mathematical abstraction of computers called a model of computation. There are several models in use, but the most commonly examined is the Turing machine. Computer scientists study the Turing machine because it is simple to formulate, can be analyzed and used to prove results, and because it represents what many consider the most powerful possible "reasonable" model of computation (see Church–Turing thesis). It might seem that the potentially infinite memory capacity is an unrealizable attribute, but any decidable problem solved by a Turing machine will always require only a finite amount of memory. So in principle, any problem that can be solved (decided) by a Turing machine can be solved by a computer that has a finite amount of memory. History The theory of computation can be considered the creation of models of all kinds in the field of computer science. Therefore, mathematics and logic are used. In the last century it became an independent academic discipline and was separated from mathematics. Some pioneers of the theory of computation were Ramon Llull, Alonzo Church, Kurt Gödel, Alan Turing, Stephen Kleene, Rózsa Péter, John von Neumann and Claude Shannon. Branches Automata theory Automata theory is the study of abstract machines (or more appropriately, abstract 'mathematical' machines or systems) and the computational problems that can be solved using these machines. These abstract machines are called automata. Automata comes from the Greek word (Αυτόματα) which means that something is doing something by itself. Automata theory is also closely related to formal language theory, as the automata are often classified by the class of formal languages they are able to recognize. An automaton can be a finite representation of a formal language that may be an infinite set. Automata are used as theoretical models for computing machines, and are used for proofs about computability. Formal Language theory Language theory is a branch of mathematics concerned with describing languages as a set of operations over an alphabet. It is closely linked with automata theory, as automata are used to generate and recognize formal languages. There are several classes of formal languages, each allowing more complex language specification than the one before it, i.e. Chomsky hierarchy, and each corresponding to a class of automata which recognizes it. Because automata are used as models for computation, formal languages are the preferred mode of specification for any problem that must be computed. Computability theory Computability theory deals primarily with the question of the extent to which a problem is solvable on a computer. The statement that the halting problem cannot be solved by a Turing machine is one of the most important results in computability theory, as it is an example of a concrete problem that is both easy to formulate and impossible to solve using a Turing machine. 
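To make the undecidability claim concrete, the standard diagonalization argument behind it can be sketched in a few lines. The sketch below is written in Python only for readability; the function names are illustrative, and the point is precisely that no real implementation of halts can exist:

def halts(program, data):
    # Hypothetical decider: would return True exactly when program(data) halts.
    # The construction below shows that no total, always-correct version can exist.
    raise NotImplementedError("no such decider exists")

def diagonal(program):
    # Do the opposite of whatever halts() predicts for program run on itself.
    if halts(program, program):
        while True:       # predicted to halt, so loop forever
            pass
    return                # predicted to loop, so halt immediately

# Considering diagonal(diagonal) yields a contradiction either way:
# if halts(diagonal, diagonal) were True, diagonal(diagonal) would loop forever;
# if it were False, diagonal(diagonal) would halt. Hence halts cannot be implemented.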
Much of computability theory builds on the halting problem result. Another important step in computability theory was Rice's theorem, which states that for all non-trivial properties of partial functions, it is undecidable whether a Turing machine computes a partial function with that property. Computability theory is closely related to the branch of mathematical logic called recursion theory, which removes the restriction of studying only models of computation which are reducible to the Turing model. Many mathematicians and computational theorists who study recursion theory will refer to it as computability theory.

Computational complexity theory
Complexity theory considers not only whether a problem can be solved at all on a computer, but also how efficiently the problem can be solved. Two major aspects are considered: time complexity and space complexity, which are respectively how many steps it takes to perform a computation and how much memory is required to perform that computation. In order to analyze how much time and space a given algorithm requires, computer scientists express the time or space required to solve the problem as a function of the size of the input problem. For example, finding a particular number in a long list of numbers becomes harder as the list of numbers grows larger. If we say there are n numbers in the list, then if the list is not sorted or indexed in any way we may have to look at every number in order to find the number we're seeking. We thus say that in order to solve this problem, the computer needs to perform a number of steps that grows linearly in the size of the problem. To simplify this problem, computer scientists have adopted Big O notation, which allows functions to be compared in a way that ensures that particular aspects of a machine's construction do not need to be considered, but rather only the asymptotic behavior as problems become large. So in our previous example, we might say that the problem requires O(n) steps to solve. Perhaps the most important open problem in all of computer science is the question of whether a certain broad class of problems denoted NP can be solved efficiently. This is discussed further at Complexity classes P and NP, and the P versus NP problem is one of the seven Millennium Prize Problems stated by the Clay Mathematics Institute in 2000. The Official Problem Description was given by Turing Award winner Stephen Cook.

Models of computation
Aside from a Turing machine, other equivalent (see Church–Turing thesis) models of computation are in use.

Lambda calculus
A computation consists of an initial lambda expression (or two if you want to separate the function and its input) plus a finite sequence of lambda terms, each deduced from the preceding term by one application of beta reduction. Combinatory logic is a concept which has many similarities to λ-calculus, but important differences also exist (e.g. the fixed point combinator Y has a normal form in combinatory logic but not in λ-calculus). Combinatory logic was developed with great ambitions: understanding the nature of paradoxes, making the foundations of mathematics more economic (conceptually), and eliminating the notion of variables (thus clarifying their role in mathematics).

μ-recursive functions
A computation consists of a μ-recursive function, i.e. its defining sequence, any input value(s), and a sequence of recursive functions appearing in the defining sequence with inputs and outputs.
Thus, if in the defining sequence of a recursive function f the functions g and h appear, then terms of the form 'g(5)=7' or 'h(3,2)=10' might appear. Each entry in this sequence needs to be an application of a basic function or follow from the entries above by using composition, primitive recursion or μ-recursion. For instance, if f(x) = h(x, g(x)), then for 'f(5)=3' to appear, terms like 'g(5)=6' and 'h(5,6)=3' must occur above. The computation terminates only if the final term gives the value of the recursive function applied to the inputs. Markov algorithm A string rewriting system that uses grammar-like rules to operate on strings of symbols. Register machine A theoretically interesting idealization of a computer. There are several variants. In most of them, each register can hold a natural number (of unlimited size), and the instructions are simple (and few in number), e.g. only decrementation (combined with conditional jump) and incrementation exist (and halting). The lack of the infinite (or dynamically growing) external store (seen at Turing machines) can be understood by replacing its role with Gödel numbering techniques: the fact that each register holds a natural number allows the possibility of representing a complicated thing (e.g. a sequence, or a matrix, etc.) by an appropriately huge natural number; unambiguity of both representation and interpretation can be established by the number-theoretical foundations of these techniques. In addition to the general computational models, some simpler computational models are useful for special, restricted applications. Regular expressions, for example, specify string patterns in many contexts, from office productivity software to programming languages. Finite automata, another formalism mathematically equivalent to regular expressions, are used in circuit design and in some kinds of problem-solving. Context-free grammars specify programming language syntax. Non-deterministic pushdown automata are another formalism equivalent to context-free grammars. Primitive recursive functions are a defined subclass of the recursive functions. Different models of computation have the ability to do different tasks. One way to measure the power of a computational model is to study the class of formal languages that the model can generate; in this way the Chomsky hierarchy of languages is obtained. References Further reading Textbooks aimed at computer scientists (There are many textbooks in this area; this list is by necessity incomplete.) Hopcroft, John E., and Jeffrey D. Ullman (2006). Introduction to Automata Theory, Languages, and Computation. 3rd ed. Reading, MA: Addison-Wesley. One of the standard references in the field. Hein, James L. (1996). Theory of Computation. Sudbury, MA: Jones & Bartlett. A gentle introduction to the field, appropriate for second-year undergraduate computer science students. Taylor, R. Gregory (1998). Models of Computation and Formal Languages. New York: Oxford University Press. An unusually readable textbook, appropriate for upper-level undergraduates or beginning graduate students. Lewis, F. D. (2007). Essentials of Theoretical Computer Science. A textbook covering the topics of formal languages, automata and grammars. The emphasis appears to be on presenting an overview of the results and their applications rather than providing proofs of the results. Martin Davis, Ron Sigal, Elaine J. Weyuker, Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science, 2nd ed., Academic Press, 1994.
Covers a wider range of topics than most other introductory books, including program semantics and quantification theory. Aimed at graduate students. Books on computability theory from the (wider) mathematical perspective Hartley Rogers, Jr (1987). Theory of Recursive Functions and Effective Computability, MIT Press. . Carl H. Smith, A recursive introduction to the theory of computation, Springer, 1994, . A shorter textbook suitable for graduate students in Computer Science. Historical perspective . External links Theory of Computation at MIT Theory of Computation at Harvard Computability Logic - A theory of interactive computation. The main web source on this subject.
https://en.wikipedia.org/wiki/Hydrogen%20isotope%20biogeochemistry
Hydrogen isotope biogeochemistry
Hydrogen isotope biogeochemistry is the scientific study of biological, geological, and chemical processes in the environment using the distribution and relative abundance of hydrogen isotopes. There are two stable isotopes of hydrogen, protium 1H and deuterium 2H, which vary in relative abundance on the order of hundreds of permil. The ratio between these two species can be considered the hydrogen isotopic fingerprint of a substance. Understanding isotopic fingerprints and the sources of fractionation that lead to variation between them can be applied to address a diverse array of questions ranging from ecology and hydrology to geochemistry and paleoclimate reconstructions. Since specialized techniques are required to measure natural hydrogen isotope abundance ratios, the field of hydrogen isotope biogeochemistry provides uniquely specialized tools to more traditional fields like ecology and geochemistry. History of hydrogen isotopes Earliest work The study of hydrogen stable isotopes began with the discovery of deuterium by chemist Harold Urey of the famous Urey and Miller experiment. Even though the neutron was not realized until 1932, Urey began searching for "heavy hydrogen" in 1931. Urey and his colleague George Murphy calculated the redshift of heavy hydrogen from the Balmer series and observed very faint lines on a spectrographic study. To intensify the spectroscopic lines for publishable data, Murphy and Urey paired with Ferdinand Brickwedde and distilled a more concentrated pool of heavy hydrogen, known today as deuterium. This work on hydrogen isotopes won Urey the 1934 Nobel Prize in Chemistry. Also in 1934, scientists Ernest Rutherford, Mark Oliphant, and Paul Harteck, produced the radioactive isotope tritium by hitting deuterium with high energy nuclei. The deuterium used in the experiment was a generous gift of heavy water from the Berkeley physicist Gilbert N Lewis. Bombarding deuterium produced two previously undetected isotopes, helium-3 and hydrogen-3. Rutherford and his colleagues successfully created tritium, but incorrectly assumed that helium-3 was the radioactive component. The work of Luis Walter Alvarez and Robert Cornog first isolated tritium and reversed Rutherford's incorrect notion. Alvarez reasoned that tritium was radioactive, but did not measure the half-life, although calculations at the time suggested over ten years. At the end of World War II, the physical chemist Willard Libby detected the residual radioactivity of a tritium sample with a Geiger counter, providing a more accurate understanding of the half-life, now accepted at 12.3 years. Impact on physical chemistry The discovery of hydrogen isotopes also impacted the field of physics in the 1940s, as Nuclear Magnetic Resonance (NMR) spectroscopy was first invented. Today, organic chemists utilize NMR for mapping protein interactions or identifying small compounds, but NMR was first a passion project of physicists. All three isotopes of hydrogen were found to have magnetic properties suitable for NMR spectroscopy. The first chemist to fully express an application of NMR was George Pake, who measured gypsum (CaSO4.2H2O) as a crystal and powder. The signal observed, called the Pake doublet, was from the magnetically active hydrogens in water. Pake then calculated the proton-proton bond distance. NMR measurements were further revolutionized when commercial machines became available in the 1960s. 
Before this, NMR experiments involved constructing massive projects, locating large magnets, and hand wiring miles of copper coil. Proton NMR remained the most popular technique throughout advancements in following decades, but deuterium and tritium were used in other flavors of NMR spectroscopy. Deuterium has a different magnetic moment and spin than protium, but generally a much smaller signal. Historically, deuterium NMR is a poor alternative to proton NMR, but has been used to study the behavior of lipids on membranes. Recently, a variation of deuterium NMR called 2H-SNIF has shown potential for understating position specific isotope compositions and comprehending biosynthetic pathways. Tritium is also used in NMR, as it is the only nucleus more sensitive than protium, generating very large signals. However, tritium's radioactivity discouraged many studies of T-NMR. While tritium radioactivity discourages use in spectroscopy, the energy from decay is essential for nuclear weapons. Scientists began understanding nuclear energy as early as the 1800s, but large advancements were made in studies of the atomic bomb in the early 1940s. War time research, especially the Manhattan project, greatly accelerated scientific understanding of radioactivity. Tritium is a byproduct in reactors, a result of hitting lithium-6 with neutrons, producing almost 5 MeV of energy. In boosted-fission nuclear weapons a mixture of deuterium and tritium are heated until there is thermonuclear fission to produce helium and release free neutrons. The flurry of fast neutron particles would then excite further fission reactions with uranium, creating a "boosted" nuclear bomb. In 1951, during Operation Greenhouse, a prototype named George, successfully validated the proof of concept for such a weapon. However, the first true boosted fission nuclear device, Greenhouse Item, was successfully tested in 1952, generating 45.5 kilotons of explosive yield, nearly double the value of an un-boosted system. The United States stopped producing tritium in nuclear reactors in 1988, but nuclear weapons testing in the 1950s added large spikes of radioactive elements to the atmosphere, especially radiocarbon and tritium. This complicated measurements for geologists using radiometric dating of carbon. However, some oceanographers benefited from the tritium increase, utilizing the signal in the water to trace physical mixing of water masses. Impact on biogeochemistry In biogeochemistry, scientists focused primarily on the stable isotope of deuterium as a tracer for environmental processes, especially the water cycle. The American geochemist Harmon Craig, once a graduate student of Urey's, discovered the relationship between rainwater's hydrogen and oxygen isotope ratios. The linear correlation between the two heavy isotopes is conserved worldwide and referred to as the Global Meteoric Water Line. By the late 1960s, the focus of hydrogen isotopes shifted away from water and towards organic molecules. Plants use water to form biomass, but a 1967 study by Zebrowski, Ponticorvo, and Rittenberg found that the organic material in plants had less deuterium than the water source. Zebrowski's research measured the deuterium concentration of fatty acids and amino acids derived from sediments in the Mohole drilling project. Further studies by Bruce Smith and Samuel Epstein in 1970 confirmed the depletion of deuterium in organics compared to environmental water. 
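As a worked example of the half-life arithmetic referred to above, the short Python sketch below computes the fraction of a tritium inventory remaining after a given time. The 12.3-year half-life is taken from the text; the elapsed times and the reference to the early-1960s bomb spike are illustrative assumptions only.

# Radioactive decay: N(t) = N0 * (1/2)**(t / t_half).
# Half-life from the text; example durations are illustrative.

TRITIUM_HALF_LIFE_YEARS = 12.3

def fraction_remaining(elapsed_years: float) -> float:
    """Fraction of an initial tritium inventory left after elapsed_years."""
    return 0.5 ** (elapsed_years / TRITIUM_HALF_LIFE_YEARS)

if __name__ == "__main__":
    for years in (12.3, 25.0, 50.0, 60.0):
        print(f"after {years:5.1f} yr: {fraction_remaining(years):.3f} remaining")
    # After roughly 60 years (e.g. from the early-1960s bomb spike to the
    # present, used here only as an illustration) about 3% of the original
    # tritium remains, which is why the bomb-tritium signal slowly fades as
    # an ocean circulation tracer.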
Another duo in 1970, Schiegl and Vogel, analyzed the composition of hydrogen isotopes as water became biomass, as biomass became coal and oil, and as oil became natural gas. In each step they found deuterium further depleted. A landmark paper in 1980 by Marilyn Epstep, now M. Fogel, and Thomas Hoering titled "Biogeochemistry of the stable hydrogen isotopes" refined the links between organic materials and sources. In this early stage of hydrogen stable isotope study, most isotope compositions or fractionations were reported as bulk measurements of all organic material or all inorganic material. Some exceptions include cellulose and methane, as these compounds are easily separated. Another advantage of methane for compound specific measurements is the lack of hydrogen exchange. Cellulose has exchangeable hydrogen, but chemical derivatization can prevent swapping of cellulose hydrogen with water or mineral hydrogen sources. Cellulose and methane studies in the 1970s and 1980s set the standard for modern hydrogen isotope geochemistry. Measurements of individual compounds was made possible in the late 1990s and early 2000s with advancements in mass spectrometry. The Thermo Delta+XL transformed measurements as the first instrument capable of compound specific isotope analysis. It was then possible to look at smaller samples with more precision. Hydrogen isotope applications quickly emerged in petroleum geochemistry by measuring oil, paleoclimatology by observing lipid biomarkers, and ecology by constructing trophic dynamics. Modern advances are currently underway in the clumped isotope composition of methane after development of the carbonate thermometer. Precise measurements are also enabling focus on microbial biosynthetic pathways involving hydrogen. Ecologists studying trophic levels are especially interested in compound specific measurements for construction of past diets and tracing predator-prey relationships. Highly advanced machines are now promising position specific hydrogen isotope analysis of biomolecules and natural gases. Important concepts Stable vs radioactive isotopes All isotopes of a chemical element contain the same number of protons with varying numbers of neutrons. The element hydrogen has three naturally occurring isotopes, H, H and H, which are sometimes referred to as protium (H), deuterium (D) and tritium (T), respectively. Both H and H are stable indefinitely, while H is unstable and undergoes beta decay to form He. While there are some important applications of H in geochemistry (such as its use as an ocean circulation tracer) these will not be discussed further here. Isotope notation The study of stable isotope biogeochemistry involves the description of the relative abundances of various isotopes in a certain chemical pool, as well as the way in which physicochemical processes change the fraction of those isotopes in one pool vs. another. Various type of notation have been developed to describe the abundance and change in the abundance of isotopes in these processes, and these are summarized below. In most cases only the relative amounts of an isotope are of interest, the absolute concentration of any one isotope is of little importance. Isotope ratio and fractional abundance The most fundamental description of hydrogen isotopes in a system is the relative abundance of deuterium and protium. 
This value can be reported as the isotope ratio R or the fractional abundance F, defined as: ^2R = \frac{^2H}{^1H} and ^2F = \frac{^2H}{^1H + ^2H} where ^2H and ^1H are the amounts of deuterium and protium, respectively. Fractional abundance is equivalent to mole fraction, and yields atom percent when multiplied by 100. In some instances atom percent excess is used, which reports the atom percent of a sample minus the atom percent of a standard. Delta (δ) notation Isotope ratios for a given substance are often reported compared to a standard with known isotopic composition, and measurements of relative masses are always made in conjunction with measuring a standard. In the case of hydrogen, the Vienna Standard Mean Ocean Water (VSMOW) standard is used, which has an isotope ratio of 155.76 ± 0.1 ppm. The delta value as compared to this standard is defined as: \delta^2H_{VSMOW} = \frac{^2R_{sample}}{^2R_{VSMOW}} - 1 These delta values are often quite small, and are usually reported as per mil values (‰), which come from multiplying the above equation by a factor of 1000. Measures of fractionation The study of hydrogen isotope biogeochemistry relies on the fact that various physicochemical processes will preferentially enrich or deplete deuterium relative to protium (see kinetic isotope effect, etc.). There are various measures that have been developed to describe the fractionation of an isotope between two pools, often the product and reactant of a physicochemical process. α notation describes the difference between two hydrogen pools A and B with the following equation: \alpha_{A/B} = \frac{^2R_A}{^2R_B} = \frac{\delta^2H_A + 1}{\delta^2H_B + 1} where \delta^2H_A is the delta value of pool A relative to VSMOW. As many delta values do not vary greatly from one another, the α value is often very close to unity. A related measure called epsilon (ε) is often used, which is given simply by: \varepsilon_{A/B} = \alpha_{A/B} - 1 These values are often very close to zero, and are reported as per mil values by multiplying α − 1 by 1000. One final measure is Δ, pronounced "cap delta", which is simply: \Delta_{A/B} = \delta^2H_A - \delta^2H_B Conservation of mass in mixing calculations As discussed above, deuterium and protium are stable isotopes which never undergo radioactive decay. Therefore, the D/H ratio of a pool containing hydrogen will remain constant as long as no hydrogen is added or removed from the system, a property known as conservation of mass. When two pools of hydrogen A and B mix with molar amounts of hydrogen m_A and m_B, each with their own starting fractional abundance of deuterium (F_A and F_B), then the fractional abundance of the resulting mixture is given by the following exact equation: F_\Sigma = \frac{m_A F_A + m_B F_B}{m_A + m_B} where the terms with Σ represent the values for the combined pool. It is common to find the following approximation used for calculations regarding the mixing of two pools with a known isotopic composition: \delta^2H_\Sigma \approx x_A \delta^2H_A + x_B \delta^2H_B where x_A and x_B are the fractions of the total molar hydrogen contributed by pools A and B. This approximation is convenient and applicable with little error in most applications having to deal with pools of hydrogen from natural processes. The maximum difference between the delta value calculated with the approximate equation and with the exact equation is quite small for nearly all mixing of naturally occurring isotope values, even for hydrogen, which can have quite large natural variations in delta values. The approximation is usually avoided when unnaturally large isotope delta values are encountered, which is particularly common in isotopic labeling experiments.
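To make the notation above concrete, the following Python sketch converts between R, F and δD, computes α and ε for two pools, and compares the exact and approximate two-pool mixing calculations. The VSMOW ratio is the value quoted in the text; all sample delta values and pool sizes are illustrative assumptions.

# Hydrogen isotope bookkeeping based on the definitions above.
# VSMOW 2H/1H ratio from the text; all sample values are illustrative.

R_VSMOW = 155.76e-6  # 2H/1H ratio of the VSMOW standard

def delta_to_ratio(delta_permil: float) -> float:
    """Convert a per-mil delta value to an absolute 2H/1H ratio."""
    return (delta_permil / 1000.0 + 1.0) * R_VSMOW

def ratio_to_delta(ratio: float) -> float:
    """Convert an absolute 2H/1H ratio to a per-mil delta value."""
    return (ratio / R_VSMOW - 1.0) * 1000.0

def ratio_to_fraction(ratio: float) -> float:
    """Fractional abundance F = 2H / (1H + 2H) from the ratio R."""
    return ratio / (1.0 + ratio)

def alpha(delta_a: float, delta_b: float) -> float:
    """Fractionation factor alpha_A/B = R_A / R_B."""
    return delta_to_ratio(delta_a) / delta_to_ratio(delta_b)

def mix_exact(m_a, delta_a, m_b, delta_b):
    """Exact mixing by conservation of mass on fractional abundances."""
    f_a = ratio_to_fraction(delta_to_ratio(delta_a))
    f_b = ratio_to_fraction(delta_to_ratio(delta_b))
    f_mix = (m_a * f_a + m_b * f_b) / (m_a + m_b)
    return ratio_to_delta(f_mix / (1.0 - f_mix))

def mix_approx(m_a, delta_a, m_b, delta_b):
    """Common approximation: hydrogen-weighted mean of the delta values."""
    return (m_a * delta_a + m_b * delta_b) / (m_a + m_b)

if __name__ == "__main__":
    lipid, water = -170.0, -50.0  # illustrative delta values, permil
    print("alpha:", round(alpha(lipid, water), 4))
    print("epsilon (permil):", round((alpha(lipid, water) - 1) * 1000, 1))
    print("exact mix:", round(mix_exact(1.0, 0.0, 1.0, -400.0), 3))
    print("approx mix:", round(mix_approx(1.0, 0.0, 1.0, -400.0), 3))
    # For natural-abundance values the two mixing estimates differ only
    # slightly, as noted in the text; the gap grows for labeled samples.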
Naturally occurring isotope variation Natural processes result in broad variations in the D/H ratio found in different pools of hydrogen. Kinetic isotope effects and physical changes such as precipitation and evaporation lead to these observed variations. Ocean water varies slightly, between 0 and −10 per mil, while atmospheric water can vary between approximately −200 and +100 per mil. Biomolecules synthesized by organisms retain some of the D/H signature of the water in which they were grown, modified by fractionations that can be as great as several hundred per mil. Large D/H differences amounting to thousands of per mil can be found between Earth and other planetary bodies such as Mars, likely due to variations in isotope fractionation during planet formation and the physical loss of hydrogen to space. List of well known fractionation effects A number of common processes fractionate hydrogen isotopes to produce the isotope variations found in nature. Common physical processes include precipitation and evaporation. Chemical reactions also have the potential to heavily influence the partitioning of heavy and light isotopes between pools. The rate of a chemical reaction depends in part on the energies of the chemical bonds being formed and broken in the reaction. Since different isotopes have different masses, the bond energies differ between the isotopologues of a chemical species. This results in a difference in the rate of reaction for the different isotopologues, and hence in a fractionation of the isotopes between the reactant and product of a chemical reaction. This is known as the kinetic isotope effect. A classic example of such an isotope effect is the D/H ratio difference in the equilibrium between H2O and H2, which can have an alpha value of as much as 3–4. Isotope ratios as tracers and fingerprints In many areas of study the origin of a chemical or group of chemicals is of central importance. Questions such as the source of environmental pollutants, the origin of hormones in an athlete's body, or the authenticity of foods and flavorings are all examples where chemical compounds need to be identified and sourced. Hydrogen isotopes have found uses in these and many other diverse areas of study. Since many processes can affect the D/H ratio of a given chemical compound, this ratio can be a diagnostic signature for compounds produced in a specific location or via a certain process. Once the D/H ratios of a number of sources are known, the measurement of this ratio for a sample of unknown origin can often be used to link it back to a certain source or production method. Physical chemistry Hydrogen isotope formation Protium, or hydrogen-1, with one proton and no neutrons, is the most abundant element in the Solar System, formed in the earliest era after the Big Bang. As the universe took shape, the hot and dense cloud of particles began to cool, first forming subatomic particles like quarks and electrons, which then condensed to form protons and neutrons. Elements larger than hydrogen and helium were produced by successive generations of stars, forming from the energy released during supernovae. Deuterium, or hydrogen-2, with one proton and one neutron, is also known to have a cosmic origin. Like protium, deuterium was produced very early in the universe's history, during Big Bang nucleosynthesis. As protons and neutrons combined, helium-4 was produced with a deuterium intermediate.
Alpha reactions with helium-4 produce many of the larger elements that dominate today's solar system. However, before the universe cooled, high-energy photons destroyed any deuterium, preventing the formation of larger elements. This is referred to as the deuterium bottleneck, a restriction on the timeline for nucleosynthesis. All of today's deuterium originated from this fusion of protons and neutrons after sufficient cooling. Tritium, or hydrogen-3, with one proton and two neutrons, was produced by proton and neutron collisions in the early universe as well, but it has since radioactively decayed to helium-3. Modern tritium cannot be explained by Big Bang nucleosynthesis because of tritium's short half-life of 12.3 years. Today's tritium concentration is instead governed by nuclear reactions and cosmic rays. The radioactive beta decay of tritium to helium releases an electron and an antineutrino, with a total decay energy of 18.6 keV. It is important to note that this is a relatively weak beta decay, so the emitted electrons cannot penetrate skin. Tritium is thus only hazardous if directly ingested or inhaled. Quantum properties Protium is a spin-½ subatomic particle and is therefore a fermion. Other fermions include neutrons, electrons, and the radioactive isotope tritium. Fermions are governed by the Pauli exclusion principle, under which no two identical fermions can occupy the same quantum state. However, bosons, such as deuterium and photons, are not bound by exclusion, and multiple particles can occupy the same energy state. This fundamental difference between 1H and 2H manifests in many physical properties. Integer-spin particles like deuterium follow Bose-Einstein statistics, while fermions with half-integer spins follow Fermi-Dirac statistics. Wave functions that describe multiple fermions must be antisymmetric with respect to swapping particles, while boson wave functions are symmetric. Because bosons are indistinguishable and can occupy the same state, collections of bosons behave very differently from fermions at colder temperatures. As bosons are cooled and relax to the lowest energy state, phenomena like superfluidity and superconductivity occur. Kinetic and equilibrium isotope effects Isotopes differ according to their number of neutrons, which directly impacts physical properties based on mass and size. Typical hydrogen is called hydrogen-1 or protium and has no neutrons. Deuterium or hydrogen-2 has one neutron, and tritium or hydrogen-3 has two neutrons. These additional neutrons significantly impact the mass of the element, leading to different chemical and physical properties. This effect is especially prevalent in hydrogen isotopes, since the addition of a neutron doubles the mass from protium to deuterium. For heavier elements like carbon, oxygen, nitrogen, or sulfur, the relative mass difference is diluted. Physical chemists often model bonding with the quantum harmonic oscillator, simplifying a hydrogen-hydrogen bond as two balls connected by a spring. The quantum harmonic oscillator is based on Hooke's law and acts as a good approximation of the Morse potential that accurately describes bonding. Modeling hydrogen and deuterium in a chemical reaction demonstrates the energy distributions of isotopes in products and reactants. Lower energy levels for the heavier isotope deuterium can be explained mathematically by the harmonic oscillator's dependence on the inverse of the reduced mass, denoted μ.
Thus, a larger reduced mass gives a larger denominator and therefore a smaller zero point energy and a lower energy state in the quantum well. Calculating the reduced mass, \mu = \frac{m_1 m_2}{m_1 + m_2}, of a hydrogen-hydrogen bond versus a deuterium-deuterium bond gives (in atomic mass units): \mu_{HH} = \frac{1 \times 1}{1 + 1} = 0.5 and \mu_{DD} = \frac{2 \times 2}{2 + 2} = 1 The quantum harmonic oscillator has energy levels of the following form, where n is the quantum number, k is the spring constant, and h is Planck's constant: E_n = \left(n + \tfrac{1}{2}\right) \frac{h}{2\pi} \sqrt{\frac{k}{\mu}} The effects of this energy distribution manifest in the kinetic isotope effect and the equilibrium isotope effect. In a reversible reaction, under equilibrium conditions, the reaction proceeds forwards and backwards, distributing the isotopes so as to minimize thermodynamic free energy. At equilibrium, more of the heavy isotope sits on the product side: the stability of the lower energy levels drives the products to be enriched in deuterium relative to the reactants. Conversely, under kinetic conditions, reactions are generally irreversible. The limiting step in the reaction is overcoming the activation energy barrier to reach an intermediate state. The lighter isotope has a higher energy state in the quantum well and is therefore preferentially converted to products, so under kinetic conditions the product will be relatively depleted in deuterium. Kinetic isotope effects are common in biological systems and are especially important for hydrogen isotope biogeochemistry. Kinetic effects usually result in larger fractionations than equilibrium reactions. In any isotope system, kinetic effects are stronger for larger mass differences. Light isotopes in most systems also tend to move faster but form weaker bonds. At high temperatures, entropy dominates and isotope effects are muted; as temperature decreases, isotope effects are more strongly expressed and randomness plays less of a role. These general trends appear in bond breaking, diffusion or effusion, and condensation or evaporation reactions. Chemistry of hydrogen exchange One of the major complications in studying hydrogen isotopes is the issue of exchangeability. At many time scales, ranging from hours to geological epochs, scientists have to consider whether the hydrogen moieties in studied molecules are the original species or whether they represent exchange with water or mineral hydrogen nearby. Research in this area is still inconclusive with regard to rates of exchange, but it is generally understood that hydrogen exchange complicates the preservation of information in isotope studies. Rapid exchange Hydrogen atoms easily separate from electronegative bonds such as hydroxyl bonds (O-H), nitrogen bonds (N-H), and thiol/mercapto bonds (S-H) on hour-to-day timescales. This rapid exchange is particularly problematic for measurements of bulk organic material with these functional groups, because isotope compositions are then more likely to reflect the source water and not the isotope effect. For this reason, paleoclimate records that are not based on ancient waters rely on other isotopic markers. Advances in the 1990s held promising potential to resolve this problem: samples were equilibrated with two variations of heavy water and compared, and their ratios represent an exchange factor that can calibrate measurements to correct for the swapping of hydrogen and deuterium. Carbon bound hydrogen exchange For some time, researchers believed that large hydrocarbon molecules were impervious to hydrogen exchange, but recent work has identified many reactions that allow isotope reordering.
The isotopic exchange becomes relevant at geological time scales and has impacted the work of biologists studying lipid biomarkers as well as geologists studying ancient oil. Reactions responsible for exchange include:
- radical reactions that cleave C-H bonds;
- ionic exchange of tertiary and aromatic hydrogen;
- enolizations that activate hydrogens on ketone alpha carbons;
- stereochemical exchange that causes stereochemical inversion;
- constitutional exchange such as methyl shifts, double bond migrations, and carbon backbone rearrangements.
Detailed kinetics of these reactions have not been determined. However, it is known that clay minerals catalyze ionic hydrogen exchange faster than other minerals. Thus hydrocarbons formed in clastic environments exchange more than those in carbonate settings. Aromatic and tertiary hydrogen also have greater exchange rates than primary hydrogen. This is due to the increasing stability of the associated carbocations. Primary carbocations are considered too unstable to physically exist and have never been isolated in an FT-ICR spectrometer. On the other hand, tertiary carbocations are relatively stable and are often intermediates in organic chemistry reactions. This stability, which increases the likelihood of proton loss, is due to the electron donation of nearby carbon atoms. Resonance and nearby lone pairs can also stabilize carbocations via electron donation. Aromatic carbons are thus relatively easy to exchange. Many of these reactions have a strong temperature dependence, with higher temperatures typically accelerating exchange. However, different mechanisms may prevail in each temperature window. Ionic exchange, for example, is most significant at low temperatures. In such low temperature environments, there is potential for preserving the original hydrogen isotope signal over hundreds of millions of years. However, many rocks in geologic time have reached significant thermal maturity. Even by the onset of the oil window, it appears that much of the hydrogen has exchanged. Recently, scientists have explored a silver lining: hydrogen exchange is a zero order kinetic reaction (for carbon bound hydrogen at 80–100 °C, the half-times are likely 10⁴–10⁵ years). Applying the mathematics of rate constants would allow extrapolation to original isotopic compositions. While this solution holds promise, there is too much disagreement in the literature for robust calibrations. Vapor isotope effects Vapor isotope effects occur for protium, deuterium, and tritium, because each isotope has different thermodynamic properties in the liquid and gaseous phases. For water molecules, the condensed phase is more enriched while the vapor is more depleted. For example, rain condensing from a cloud will be heavier than the vapor it started from. Generally, the large variations in the deuterium concentrations of water come from fractionations between the liquid, vapor, and solid reservoirs. In contrast to the fractionation pattern of water, non-polar molecules like oils and lipids have gaseous counterparts enriched in deuterium relative to the liquid. This is thought to be associated with the polarity from hydrogen bonding in water, which does not play a role in long-chain hydrocarbons.
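Returning to the rate-constant extrapolation mentioned above for carbon-bound hydrogen exchange: if exchange toward equilibrium with pore water is treated as a simple exponential approach with a known half-time, a measured δD can in principle be corrected back toward its original value. The sketch below is only a schematic of that idea; the exponential (pseudo-first-order) form, the linear mixing of delta values, and all sample numbers are assumptions for illustration, not a calibrated model.

import math

# Schematic back-correction of a measured delta-D for partial exchange with
# water. The half-time range (1e4 to 1e5 yr at 80-100 degC) is quoted in the
# text; the model form and the sample numbers below are assumptions.

def exchanged_fraction(age_years: float, half_time_years: float) -> float:
    """Fraction of C-bound H assumed to have exchanged after age_years."""
    return 1.0 - math.exp(-math.log(2.0) * age_years / half_time_years)

def original_delta(measured: float, water: float, age_years: float,
                   half_time_years: float) -> float:
    """Invert delta_measured = f*delta_water + (1-f)*delta_original."""
    f = exchanged_fraction(age_years, half_time_years)
    return (measured - f * water) / (1.0 - f)

if __name__ == "__main__":
    # A 50-Myr-old sample has exchanged essentially completely for half-times
    # of 1e4-1e5 yr, so no original signal survives; a 50-kyr-old sample has
    # only partially exchanged and can be corrected.
    print(exchanged_fraction(5e7, 1e5))             # ~1.0: fully reset
    print(original_delta(-110.0, -40.0, 5e4, 1e5))  # partial correction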
Observed variations in isotope abundance Due to physical and chemical fractionation processes, the variations in the isotopic compositions of elements are reported, and the standard atomic weights of hydrogen isotopes have been published by the Commission on Atomic Weights and Isotopic Abundances of the IUPAC. The ratios of stable H isotopes are reported relative to the International Atomic Energy Agency (IAEA) reference water. In the equilibrium isotope reactions of Hydrogen and Deuterium in general, enrichment of the heavy isotope is observed in the compound with the higher oxidation state. However, in our natural environment, the isotopic composition of hydrogen isotopes greatly vary depending on the sources and organisms due to complexities of interacting elements in disequilibrium states. In this section, the observed variations in hydrogen isotope abundances of water sources (hydrosphere), living organisms (biosphere), organic substances (geosphere), and extraterrestrial materials in the Solar system are described. Hydrosphere Oceans Variations in δD value of different water sources and ice caps are observed due to evaporation and condensation processes. [See section 6 for more details] When the ocean water is well-mixed, the δD at equilibrium is close to 0‰ (‰ SMOW) with a D/H ratio of 0.00015576. However, continuous variations in δD values are caused by evaporation or precipitation processes which lead to disequilibrium in fractionation processes. A large H isotopic gradient (variations in δD values) is observed in surface waters of the oceans, and the fluctuation value in the Northwest Atlantic surface water is around 20‰. According to the data examining the southern supersegment of the Pacific Ocean, as the latitude (˚S) decreases from −65˚S to −40˚S, the δD value fluctuates between around −50‰ and −70‰. The isotope composition of seawater (not just the surface water) is mostly in the range of 0-(-10) ‰. The estimates of the δD values for different parts of the oceans across the world are shown on the map. Ice caps The typical δD values for the ice sheets in the polar regions range from around −400‰ to −300‰ (‰SMOW). The δD values for ice caps are affected by the distance from the open ocean, latitude, atmospheric circulation as well as the amount of insolation and temperature. The temperature change affects the deuterium content of ice caps, so the H/D isotopic composition of ice can give estimates for the historical climate cycles such as the timelines for interglacial and glacial periods. [See section 7.2. Paleo-reconstruction for more details] The δD values of ice caps from 70 km south of Vostok Station and in East Antarctica are −453.7‰ and −448.4‰ respectively, and are shown on the map. Atmosphere The analysis done based on satellite measurement data estimates the δD values for the atmosphere in various regions of the world. The general trend is that the δD values are more negative at higher-latitude regions, so the atmosphere above the Antarctica and the arctic regions is observed to be highly D-depleted to around −230‰ to −260‰ or even lower. The estimates of the atmospheric δD values are shown on the map. A vast portion of the global atmospheric water vapor comes from the western Pacific Ocean near the tropical zone, (mean 2009) and the H/D isotopic composition of atmosphere varies depending on the temperature and humidity. In general, higher δD values are observed in humid regions with a high temperature. 
Water vapor in the atmosphere is in general more depleted than the terrestrial water sources, since the rate of evaporation for 1HO is faster than 1HD16O due to a higher vapor pressure. On the other hand, the rain water (precipitation) is in general more enriched than the atmospheric water vapor. Precipitation The δD values of the annual precipitation in different regions of the world are shown on the map. The precipitation is more D-enriched near the equator in the Tropical regions. The values of δD generally fall in the range of around −30~-150‰ in the northern hemisphere and −30~+30‰ over the land areas of the southern hemisphere. In North America, the δD values of average monthly precipitation across regions are more negative in January (ranging up to around −300‰ in Canada) compared to July (up to around −190‰). The overall mean precipitation is determined by balance between the evaporation of water from the oceans and surface water sources and the condensation of the atmospheric water vapor in the form of rain. The net evaporation should equal the net precipitation, and the δD value for the mean isotopic composition of global precipitation is around −22‰ (global average). The Global Network of Isotopes in Precipitation (GNIP) investigates and monitors the isotopic composition of precipitation at various sites all over the world. The mean precipitation can be estimated by the equation, δ2H = 8.17(±0.07) δ18O + 11.27(±0.65)‰ VSMOW. (Rozanski et al., 1993) This equation is the slightly modified version from the general 'Global Meteoric Water Line (GMWL)' equation, δ2H = 8.13δ18O + 10.8, which provides the average relationship between δ2H and δ18O of natural terrestrial waters. Lakes and rivers The δD values vs. VSMOW of lakes in different regions are shown on the map. The general pattern observed indicates that the δD values of the surface waters including lakes and rivers are similar to that of local precipitation. Soil water The isotopic composition of soil is controlled by the input of precipitation. Therefore, the δD values of soil across regions are similar to that of local precipitation. However, due to evaporation, soil tends to be more D-enriched than precipitation. The degree of enrichment varies greatly depending on the atmospheric humidity, local temperature as well as the depth of the soil beneath the surface. According to the study done by Meinzer et al. (1999), as the depth in the soil increases, the δD of soil water decreases. Biosphere Marine algae The factors affecting δD values of algal lipids are the following: δD of water, algal species (up to 160%), lipid type (up to 170%), salinity (+0.9±0.2% per PSU), growth rate (0 ~ -30% per day) and temperature (−2 ~ -8% per °C). In the study done by Zhang et al. (2009), the δD values of fatty acids in Thakassiosira pseudonana chemostat cultures were −197.3‰, −211.2‰ and −208.0‰ for C14, C16 and C18 fatty acids respectively. Moreover, the δD value of C16 fatty acid in an algal species named A. E. unicocca at 25 °C was determined using the empirical equation y = 0.890x – 91.730 where x is the δD of water at harvest. For another algal species named B. V. aureus, the equation was y = 0.869x −74.651. The degree of D/H fractionation in most algal lipids increases with increasing temperature and decreases with increasing salinity. The growth rates have different impacts on the D/H fractionation depending on the species types. 
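The empirical regressions quoted above can be applied directly; the short sketch below evaluates both equations for a range of water δD values. The slope and intercept coefficients are those given in the text, while the water values used as inputs are illustrative assumptions.

# Empirical regressions from the text relating the delta-D of a C16 fatty
# acid (y, permil) to the delta-D of the culture water at harvest (x, permil).
# The input water values below are illustrative only.

REGRESSIONS = {
    "A. E. unicocca (25 degC)": lambda x: 0.890 * x - 91.730,
    "B. V. aureus":             lambda x: 0.869 * x - 74.651,
}

if __name__ == "__main__":
    for water_delta_d in (-120.0, -60.0, 0.0):
        for species, model in REGRESSIONS.items():
            print(f"water {water_delta_d:6.1f} permil -> "
                  f"{species}: fatty acid {model(water_delta_d):7.1f} permil")
    # Both regressions have slopes below 1 and negative intercepts, so the
    # fatty acids track the water isotopically but remain D-depleted relative
    # to it, consistent with the fractionations described above.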
Phytoplankton and bacteria The δD values of lipids from phytoplankton are largely affected by the δD of water, and there seems to be a linear correlation between those two values. The δD values of most other biosynthetic products found in phytoplankton or cyanobacteria are more negative than that of the surrounding water. The δD values of fatty acids in methanotrophs living in seawater lie between −50 and −170‰, and those of sterols and hopanols range between −150 and −270‰. The H isotopic composition of photoautotrophs can be estimated using the equation below: R_l = X_w \alpha_{l/w} R_w + (1 - X_w) \alpha_{l/s} R_s where R_l, R_w, and R_s are the D/H ratios of lipids, water, and substrates, respectively. X_w is the mole fraction of lipid H derived from external water, whereas α_{l/w} and α_{l/s} denote the net isotopic fractionations associated with the uptake and utilization of water and substrate hydrogen, respectively. For phototrophs, R_l is calculated assuming that X_w equals 1. The isotopic fractionation between lipids and methane (α_{l/m}) is 0.94 for fatty acids and 0.79 for isoprenoid lipids. The isotopic fractionation between lipids and water (α_{l/w}) is 0.95 for fatty acids and 0.85 for isoprenoid lipids. For plants and algae, the isotopic fractionation between lipids and methane (α_{l/m}) is 0.94 for fatty acids and 0.79 for isoprenoid lipids. The δD values for lipids in bacterial species are as follows.
Lipids in organisms growing on heterotrophic substrates:
- growing on sugar: depletion of 200‰ to 300‰ relative to water
- growing on direct precursors of the TCA cycle (e.g. acetate (δDs = −76‰) or succinate): enrichment of −50‰ to +200‰ relative to water
- −150‰ to +200‰
Lipids in organisms growing photoautotrophically:
- depletion of 50‰ to 190‰ relative to water
- −150‰ to −250‰
Lipids in organisms growing chemoautotrophically:
- −200‰ to −400‰
Plants Regression equations relating δD values of the n-C29 alkane (‰ vs. VSMOW) to precipitation have been determined for different plant groups. In these equations, y represents the δD value of the n-C29 alkane (‰ vs. VSMOW), and x represents the δD value of mean annual precipitation (‰ vs. VSMOW). For plant leaf wax, the relative humidity, the timing of leaf wax formation, and the growth conditions, including light levels, affect the D/H fractionation of plant wax. From the Craig–Gordon model, it can be understood that leaf water in growth chamber gasses is significantly D-enriched due to transpiration. Sugars The relative global abundance of D in plants is in the following order: phenylpropanoids > carbohydrates > bulk material > hydrolysable lipids > steroids. In plants, δD values of carbohydrates, which typically range from around −70‰ to −140‰, are good indicators of the photosynthetic metabolism. Photosynthetically produced hydrogens bound to carbon backbones are around 100–170‰ more D-depleted than the water found in plant tissues. The heterotrophic processing of carbohydrates involves isomerization of triose phosphates and interconversion between fructose-6-phosphate and glucose-6-phosphate. These cellular processes promote exchange between organic H and H2O within the plant tissues, leading to around 158‰ of D-enrichment at those exchanged sites. The δD of C3 plants such as sugar beet, orange and grape ranges from −132‰ to −117‰, and that of C4 plants such as sugar cane and maize ranges from −91‰ to −75‰. The δD of CAM plants such as pineapple is estimated to be around −75‰. Sugar beet and sugar cane contain sucrose, and maize contains glucose. Orange and pineapple are sources of glucose and fructose. The deuterium content of the sugars from the above plant species is not distinctive.
In C3 plants, hydrogens attached to carbons in the 4 and 5 positions of glucose typically come from NADPH in the photosynthetic pathway, and are found to be more D-enriched. In C4 plants, by contrast, hydrogens attached to carbons in the 1 and 6 positions are more D-enriched. D-enrichment patterns in CAM species tend to be closer to those in C3 species. Bulk organic matter The H/D isotopic composition of the leaf water is variable during biosynthesis, and the enrichment in the whole leaf can be described by the equation: \Delta D_{leaf} = \Delta D_e \left( \frac{1 - e^{-p}}{p} \right) The typical δD value of bulk plant material is around −160‰, while the δD values for cellulose and lignin are −110‰ and −70‰, respectively. Animals The hydrogen isotopic composition of animal tissues is difficult to estimate due to complexities in diet intake and in the isotopic composition of the surrounding water sources. When fish species were investigated, the average hydrogen isotopic composition of proteins spanned a large range of −128‰ to +203‰. In the bulk tissue of organisms, all lipids were found to be D-depleted, and the δD values for lipids tend to be lower than those for proteins. The average δD for chironomid and fish protein was estimated to be in the range of −128‰ to +203‰. Most hydrogens in heterotrophic tissues come from water rather than from diet sources, but the proportion coming from water varies. In general, hydrogen from water is transferred to NADPH and then taken up into the tissues. An apparent trophic effect (compounding effect) can be observed for δD in heterotrophs, so significant D-enrichments result from the intake of surrounding water in aquatic food webs. The δD of proteins in animal tissues is in some cases affected more by diet sources than by the surrounding water. Although different δD values for the same class of compounds may arise in different organisms growing in water with the same δD value, those compounds generally have the same δD value within each organism itself. [See Section 7.5. Ecology for more details] Lipids The δD values of fatty acids found in living organisms typically range from −73‰ to −237‰. The δD values of individual fatty acids vary widely between cultures (−362‰ to +331‰), but typically by less than around 30‰ between different fatty acids from the same species. The differences in δD for compounds within the same lipid class are generally smaller than 50‰, whereas the difference falls in the range of 50–150‰ for compounds in different lipid classes. The net or apparent fractionation between a lipid and its source water is defined as: \epsilon_{l/w} = \frac{\delta D_l + 1}{\delta D_w + 1} - 1 where l denotes the lipid product and w the source water. The δD values for common lipid classes found in living organisms are the following:
- n-alkyl: −170 ± 50‰ (113–262‰ more D-depleted than growth water)
- isoprenoid: −270 ± 75‰ (142–376‰ more D-depleted than growth water)
- phytol: −360 ± 50‰ (more depleted than the other two categories)
Polyisoprenoid lipids are more depleted than acetogenic (n-alkyl) lipids, with more negative δD values. Geosphere Oil Oil samples from northeast Japan range from −130‰ to around −110‰ with increasing maturity. Oil samples from the Potiguar Basin are around −90‰ (lacustrine environment) and −120‰ to −135‰ (marine-evaporitic environment). Alkenones The isotopic composition of alkenones often reflects the isotopic enrichment or depletion of the surrounding environment, and the δD values of alkenones in different regions are shown on the map. Coals According to the studies done by Reddings et al., δD for coals from various sources ranges from around −90‰ to −170‰.
The δD values of coals in different regions are shown on the map. Natural gas Methane Methane produced from marine methanogens is typically more D-enriched than methane produced from methanogens grown in freshwater. The δD values for thermogenic methane range from −275‰ to −100‰, and from −400‰ to −150‰ for microbial methane. H2 gas The δD value observed for atmospheric H2 is around +180‰, which is the biggest delta value observed for natural terrestrials. (The mole fraction of 2H: 0.0001838) The δD value for natural gas from a Kansas well is around −836‰ (The mole fraction of Deuterium is 0.0000255) During the process of electrolysis of water, hydrogen gas is produced at the cathode, but an incomplete electrolysis of water may cause isotopic fractionation leading to enrichment of D in the sample water and the production of hydrogen gas with deuterium components. Mineral H The δD values of hydroxyl-bearing minerals of mantle were estimated to be −80‰ ~ -40‰ through the analysis of the isotopic composition for juvenile water. Hydrogen Minerals generally have large isotope effects, and the isotopic composition often follows the pattern observed for precipitation. Clay minerals The D/H fractionations in clays such as kaolinite, illite, smectite are in most cases consistent when no significant external forces are applied under constant temperature and pressure. The following is an empirically determined equation for estimating the D/H fractionation factor: 1000 In αkaolinite-water = -2.2 × 106 × T−2 – 7.7. The δD values vs. ‰SMOW for Hydrogen minerals found in mantle, Metamorphic rock, shales, marine clays, marine carbonates and sedimentary rocks are shown in the table. Extraterrestrial objects Variations of D/H ratio in the Solar System Earth The H isotope composition of mantle rocks on earth is highly variable, and that of mantle water is around −80‰ ~ −50‰ depending on its states such as fluid, hydrous phase, hydroxyl point defect, Juvenile water (from degassing of the mantle), magmatic water (water equilibrated with a magma). Sun The D/H ratio of the Sun is around 21 ± 5 × 10−6. Mars The current Hydrogen isotope composition is enriched by a factor of 5 relative to terrestrial ocean water due to continual losses of H in Martian atmosphere. Therefore, the δD value is estimated to be around +4000‰. The D/H ratios for Jupiter and Saturn is nearly in the order of 10−5, and the D/H ratios of Uranus and Neptune is closer to the order of 10−4. Hydrogen is the most abundant element in the universe. Variations in isotopic composition of extraterrestrial materials stem from planetary accretion or other planetary processes such as atmospheric escape, and are larger for H and N than for C and O. The preservation of D-enrichment is observed in chondritic meteorites, interplanetary dust particles and cometary volatiles. From the Helium isotope abundance data, the cosmic D/H value is estimated to be around 20 ppm which is much lower than the terrestrial D/H ratio of 150 ppm. The enrichment of D/H from the proto-solar reservoir occurs for most of the planets except for Jupiter and Saturn, the massive gaseous planets. The D/H ratios of the atmospheres of Venus and Mars are ~2 × 10−2 and ~8 × 10−4 respectively. The D/H ratios of Uranus and Neptune is larger than that of protosolar reservoir by a factor of around 3 due to their Deuterium-rich icy cores. The D/H ratios for comets are much larger than the values for the planets in the Solar System with δD value of around 1000‰. 
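The planetary D/H ratios quoted above can be converted to δD values against VSMOW using the same delta definition introduced earlier. The sketch below does this for a few of the quoted ratios; the VSMOW ratio of 155.76 ppm is taken from the text, and the listed bodies simply repeat values given above.

# Convert absolute D/H ratios to delta-D versus VSMOW (ratio 155.76e-6).
# The planetary ratios below repeat values quoted in the text.

R_VSMOW = 155.76e-6

def delta_d(ratio: float) -> float:
    """delta-D in permil relative to VSMOW."""
    return (ratio / R_VSMOW - 1.0) * 1000.0

if __name__ == "__main__":
    bodies = {
        "Sun (protosolar)":                    21e-6,
        "Venus atmosphere":                    2e-2,
        "Mars atmosphere":                     8e-4,
        "Mars (5 x terrestrial ocean water)":  5 * R_VSMOW,
    }
    for name, ratio in bodies.items():
        print(f"{name:38s} delta-D = {delta_d(ratio):12.0f} permil")
    # A five-fold enrichment over ocean water corresponds to about +4000
    # permil, matching the Mars estimate in the text (the ~8e-4 atmospheric
    # ratio gives essentially the same figure); the protosolar value of
    # roughly 21 ppm corresponds to about -870 permil.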
The Hydrogen isotope compositions in the galaxy and the Solar System are shown in the table. Measurement techniques Determination of D/H ratio can be performed with a combination of different preparation techniques and instruments for different purposes. There are several basic categories of hydrogen isotope measurements: (i) organic hydrogen or water are converted to H2 first, followed by high precision IRMS (Isotope-ratio mass spectrometry) measurement with high precisions; (ii) D/H and 18O/16O are directly measured as H2O by laser spectroscopy also with high precisions; (iii) the intact molecules are directly measured by NMR or mass spectrometry with relatively lower precision than IRMS. Offline combustion and reduction The conversion to simple molecules (i.e. H2 for hydrogen) is required prior to IRMS measurement for stable isotopes. This is due to several reasons with regard to hydrogen: The classical offline preparation for the conversion is combustion over CuO at > 800 °C in sealed quartz tubes, followed by the isolation of resulting water and the reduction to H2 over hot metal at 400 ~1000 °C on a vacuum line. The produced gas is then directly injected into the dual-inlet mass spectrometer for measurement. The metals used for the reduction to H2 includes U, Zn, Cr, Mg and Mn, etc. U and Zn had been widely used since the 1950s until Cr was successfully employed in the late 1990s. The offline combustion/reduction has the highest accuracy and precision for hydrogen isotope measurement without limits for sample types. The analytical uncertainty is typically 1~2‰ in δD. Thus it is still being used today when highest levels of precision are required. However, the offline preparation procedure is very time-consuming and complicated. It also requires large sample size (several 102 mg). Thus the online preparation based on combustion/reduction coupled with the subsequent continuous flow-IRMS (CF-IRMS) system has been more commonly used nowadays. Chromium reduction or high temperature conversion are the dominant online preparation methods for the detection of hydrogen isotope by IRMS. High temperature conversion/elemental analyzer (TC/EA) TC/EA (or HTC, high temperature conversion; HTP, high temperature pyrolysis; HTCR, high temperature carbon reduction) is an 'online' or 'continuous flow' preparation method typically followed by IRMS detection. This is a "bulk" technique that measures all of the hydrogen in a given sample and provides the average isotope signal. The weighed sample is placed in a tin or silver capsule and dropped into a pyrolysis tube of TC/EA. The tube is made of glassy carbon with glassy carbon filling in which way oxygen isotope can be measured simultaneously without the oxygen exchange with ceramic (Al2O3) surface. The molecules are then reduced into CO and H2 at high temperature (> 1400 °C) in the reactor. The gaseous products are separated through gas chromatography (GC) using helium as the carrier gas, followed by a split-flow interface, and finally detected by IRMS. TC/EA method can be problematic for organic compounds with halogen or nitrogen due to the competition between the pyrolysis byproducts (e.g. HCl and HCN) and H2 formation. In addition, it is susceptible to contamination with water, so samples must be scrupulously dried. An adaption of this method is to determine the non-exchangeable (C-H) and exchangeable hydrogen (bounds to other elements, e.g. O, S and N) in organic matter. 
The samples are equilibrated with water in sealed autosampler carousels at 115 °C and then transferred into pyrolysis EA followed by IRMS measurement. TC/EA method is quick with a relatively high precision (~ 1‰). It was limited to solid samples, however, liquid sample recently can also be measured in TC/EA-IRMS system by adapting an autosampler for liquids. The drawback of TC/EA is the relatively big sample size (~ mg), which is smaller than offline combustion/reduction but larger than GC/pyrolysis. It cannot separate different compounds as GC/pyrolysis does and thus only the average for the whole sample can be provided, which is also a drawback for some research. Gas chromatography/pyrolysis (GC/pyrolysis) GC-interface (combustion or pyrolysis) is also an online preparation method followed by IRMS detection. This is a 'compound-specific' method, allowing separation of analytes prior to measurement and thus providing information about the isotopic composition of each individual compound. Following GC separation, samples are converted to smaller gaseous molecules for isotope measurements. GC/pyrolysis uses the pyrolysis interface between GC and IRMS for the conversion of H and O in the molecules into H2 and CO. GC-IRMS was first introduced by Matthews and Hayes in the late 1970s, and was later used for δ13C, δ15N, δ18O and δ34S. Helium is used as the carrier gas in the GC systems. However, the separation of DH (m/z=3) signal from the tail of 4He+ beam was problematic due to the intense signal of 4He+. During the early 1990s, intense efforts were made in solving the difficulties to measure δD by GC/pyrolysis-IRMS. In 1999, Hilkert et al. developed a robust method by integrating the high temperature conversion (TC) into GC-IRMS and adding a pre-cup electrostatic sector and a retardation lens in front of the m/z=3 cup collector. Several different groups were working on this at the same time. This GC/pyrolysis-IRMS based on TC has been widely used for δD measurement nowadays. The commercial products of GC-IRMS include both combustion and pyrolysis interfaces so that δ13C and δD can be measured simultaneously. The significant advantage of GC/pyrolysis method for hydrogen isotope measurement is that it can separate different compounds in the samples. It requires the smallest sample size (a typical size of ~ 200 ng) relative to other methods and also has a high precision of 1~5 ‰. But this method is relatively slow and limited to the samples which can be applied in GC system. Laser spectroscopy Laser Spectroscopy (or Cavity ring-down spectroscopy, CRDS) is able to directly measure D/H, 17O/16O and 18O/16O isotope compositions in water or methane. The application of laser spectroscopy to hydrogen isotopes was first reported by Bergamaschi et al. in 1994. They directly measured 12CH3D/12CH4 in atmospheric methane using a lead salt tunable diode laser spectroscopy. The development of CRDS was first reported by O'Keefe et al. in 1988. In 1999, Kerstel et al. successfully applied this technique to determine D/H in water sample. The system consists of a laser and a cavity equipped with high finesse reflectivity mirrors. Laser light is injected into the cavity, at which the resonance takes place due to the constructive interference. The laser then is turned off. The decay of light intensity is measured. In the presence of a water sample, the photo-absorption by water isotopologues follows the kinetic law. 
The optical spectrum is obtained by recording ring-down time of the H2O spectral features of interest at certain laser wavelength. The concentration of each isotopologue is proportional to the area under each measured isotopologue spectral feature. Laser Spectroscopy is a quick and simple procedure, relatively lower cost and the equipment is portable. So it can be used in the field for measuring water samples. D/H and 18O/16O can be determined simultaneously from a single injection. It requires a small sample size of < 1 μL for water. The typical precision is ~ 1‰. However, this is the compound-specific instrument, i.e. only one specific compound can be measured. And coexisting organic compounds (i.e. ethanol) could interfere with the optical light absorption features of water, resulting in cross-contamination. SNIF-NMR 2H-Site-specific Natural Isotope Fractionation-Nuclear Magnetic Resonance(2H-SNIF-NMR) is a type of NMR specialized in measuring the deuterium concentration of organic molecules at natural abundances. The NMR spectra distinguishes hydrogen atoms in different chemical environments (e.g. The order of carbon that hydrogen binds to, adjacent functional groups, and even geminal positions of methylene groups), making it a powerful tool for position-specific isotope analysis. The chemical shift (in frequency units) of 2H is 6.5 times lower than that of 1H. Thus, it is difficult to resolve 2H peaks. To provide high-enough resolution to separate 2H peaks, high strength magnetic field instruments (~11.4T) are applied. Application of NMR to study hydrogen isotopes of natural products was pioneered by G'erard Martin and his co-workers in the 1980s. For several decades it has been developed and expanded. The D/H NMR measurement is sometimes coupled with IR-MS measurement to create a referential standard. The sensitivity of SNIF-NMR is relatively low, typically requiring ~1 mmol of samples for each measurement. The precision with respect to isotope ratio is also relatively poor compared with mass spectrometry. Even the state-of-art instruments can only measure D/H ratios with around 50~200‰ error depending on the compound. Therefore, so far technique can only distinguish the large D/H variations in preserved materials. In 2007, Philippe Lesot and his collezzes advanced this technique with a 2-Dimensional NMR using chiral liquid crystals (CLCs) instead of isotropic solvents to dissolve organic molecules. This enables the measurements of quadrupolar doublets for each nonequivalent deuterium atom. Thus reduces peak overlaps and provides more detailed information of hydrogen chemical environment. The mainstream applications of 2H-SNIF-NMR have been in source attribution, forensics and biosynthetic pathway studies. (See also Gray's section "Source attribution and Forensics") When measuring sugar compounds, a timesaving strategy is to convert them into ethanol through fermentation because 2H-SNIF NMR for ethanol is well established. Several studies have proved that hydrogen isotopes on the methyl and methylene position of the resulting ethanol is not affected by either fermentation rate or media. Another example is the study of monoterpenes. since the 1980s SNIF-NMR study of α-pinene has found large variations in D/H ratios among its sites. Particularly ex-C2 position has a strong depletion (~-750‰), which was in disagreement with accepted biosynthetic mechanism (mevalonate mechanism) at that time, and lead to new development in pathways. 
More recently, Ina Ehlers and co-workers published work on the D6S/D6R ratios of glucose molecules, in which the stereochemical deuterium distribution was found to correlate with photorespiration/photosynthesis ratios. Because photorespiration/photosynthesis ratios are driven by CO2 fertilization, this might lead to new proxies for reconstructing paleo-CO2 concentrations. Work has also been done on long-chain fatty acids, finding that even-numbered sites, which are thought to derive from the C2 position of the acetyl group, are more enriched in deuterium than odd-numbered sites, whose hydrogen comes from the C1 position of the acetyl group. Duan et al. reported a strong kinetic isotope effect (KIE) during the desaturation from oleic acid to linoleic acid. In summary, the underlying physics of SNIF-NMR makes it capable of measuring isotopomers, and a further advantage over mass spectrometry is that it analyzes samples non-destructively. 2H-SNIF-NMR has been well industrialized for source identification and forensics and has contributed much to biochemical pathway studies. Its application to geological records is sporadic and still needs exploration.
Intact molecular isotope ratio mass spectrometry
Conventionally, mass spectrometry, such as gas chromatography-mass spectrometry (GC-MS) and gas chromatography-time of flight (GC-TOF), is a common technique for analyzing isotopically labeled molecules. In this approach, isotopologues of an intact organic molecule of interest are ionized and analyzed rather than its products of pyrolysis or conversion. However, it does not work for natural-abundance hydrogen isotopes, because conventional mass spectrometers do not have enough mass-resolving power to measure the 13C/D isotopologues of intact organic molecules or molecular fragments at natural abundance. For example, to resolve the singly D-substituted isotopologue peak of any hydrocarbon, one must at least exclude the singly 13C-substituted isotopologue peak, which sits at the same cardinal mass yet is 0.0029 amu lighter and orders of magnitude more abundant. Recent advances in analytical instruments enable direct measurement of natural-abundance D/H ratios in organic molecules. The new instruments have the same framework as a conventional gas source IRMS, but incorporate new features such as a larger magnetic sector, double-focusing sectors, a quadrupole mass filter and multi-collectors. Two commercial examples are the Nu Panorama and the Thermo Scientific 253 Ultra. These instruments generally have good sensitivity and precision: using only tens of nanomoles of methane, the Ultra can achieve a stable high precision of around 0.1‰ in δD. One of the first examples of this type of measurement was the clumped isotopes of methane (see the "Natural gas" section under Fossil fuels). Another strength of this kind of instrument is the capability of making site-specific isotope ratio measurements. This technique is based on measuring the D/H ratios of fragments produced in the ion source (e.g., fragments of the propane molecule) that sample hydrogen atoms from different parts of the molecule. In summary, direct molecular mass spectrometry has commonly been used to measure laboratory-spiked isotope tracers, and recently developed high-resolution gas source isotope ratio mass spectrometers can now measure the hydrogen isotopes of organic molecules directly with high precision and high sensitivity.
The drawbacks of this type of instrument include high cost and difficulty of standardization. Also, studying site-specific isotopes by mass spectrometry is less straightforward and needs more constraints than the SNIF-NMR method, and it can only distinguish isotopologues, not isotopomers.
Hydrologic cycle
Isotope fractionation in the hydrological cycle
Water is the primary source of hydrogen to all living organisms, so the isotopic composition of environmental water is a first-order control on that of the biosphere. The hydrological cycle moves water among different reservoirs on the surface of the earth, during which hydrogen isotopes in water are significantly fractionated. As the primary moisture source to the atmosphere, the ocean has a relatively uniform hydrogen isotope composition across the globe of around 0‰ (VSMOW). Variations of δD larger than 10‰ in the ocean are generally confined to surface waters, due to evaporation, sea ice formation, and the addition of meteoric waters by precipitation, rivers or icebergs. In the hydrological cycle, the two major processes that fractionate hydrogen isotopes from ocean water are evaporation and condensation. The oxygen isotopic composition (18O/16O) of water is also an important tracer in the hydrological cycle and cannot be separated from hydrogen isotopes when discussing the isotope fractionation processes associated with water. During evaporation of water from the ocean to the atmosphere, both equilibrium and kinetic isotope effects determine the hydrogen and oxygen isotopic composition of the resulting water vapor. At the water-air interface, a stagnant boundary layer is saturated with water vapor (100% relative humidity), and the isotopic composition of water vapor in the boundary layer reflects equilibrium fractionation with the liquid water. The liquid-vapor equilibrium fractionations for hydrogen and oxygen isotopes are temperature-dependent. The magnitude of the liquid-vapor equilibrium fractionation for hydrogen isotopes is approximately 8 times that for oxygen isotopes at earth surface temperatures, which reflects the relative mass differences of the two isotope systems (2H is 100% heavier than 1H, 18O is 12.5% heavier than 16O). Above the boundary layer is a transition zone with relative humidity less than 100%, and there is a kinetic isotope fractionation associated with water vapor diffusion from the boundary layer to the transition zone, whose magnitude is empirically related to the relative humidity (h). The kinetic isotope effect associated with diffusion reflects the mass difference of the heavy-isotope-substituted water molecules (HD16O and H218O) relative to the normal isotopologue (H216O). After water is evaporated to the atmosphere, it is transported and returned to the surface through condensation and precipitation. Condensation of water vapor occurs in ascending air masses that develop a lower temperature and saturation vapor pressure. Since the cooling and condensation happen at relatively slow rates, condensation is a process with equilibrium isotope effects. However, as water vapor is progressively condensed and lost from the air during moisture transport, the isotopic composition of the remaining vapor, as well as that of the resulting precipitation, can become strongly depleted through Rayleigh distillation.
The equation for Rayleigh distillation is:
Rr = R0 f^(α − 1)
where R0 is the isotope ratio of the initial water vapor, Rr is the isotope ratio of the remaining water vapor after some condensation, f is the fraction of water vapor remaining in the air, and α is the liquid-vapor equilibrium fractionation factor (α = 1 + ε). The isotopic composition of the resulting precipitation (Rp) is derived from the composition of the remaining vapor:
Rp = α Rr
As f decreases progressively during condensation, the remaining vapor becomes more and more depleted in the heavy isotopes, and the magnitude of depletion becomes larger as f approaches zero. The Rayleigh distillation process can explain some first-order spatial patterns observed in the isotopic composition of precipitation across the globe, including isotopic depletion from the tropics to the poles, isotopic depletion from coastal to inland regions, and isotopic depletion with elevation over a mountain range, all of which are associated with progressive moisture loss during transport. The Rayleigh distillation model can also be used to explain the strong correlation between δD and δ18O observed in global precipitation, expressed as the global meteoric water line (GMWL): δD = 8 δ18O + 10 (later updated to δD = 8.17 ± 0.07 δ18O + 11.27 ± 0.65). The slope of the GMWL reflects the relative magnitude of hydrogen and oxygen isotope fractionation during condensation. The intercept of the GMWL is non-zero (called the deuterium excess, or d-excess), which means that ocean water does not fall on the GMWL. This intercept is associated with the kinetic isotope effect during evaporation, when water vapor diffuses from the saturated boundary layer to the unsaturated transition zone, and cannot be explained by the Rayleigh model. Nevertheless, the robust pattern of the GMWL strongly suggests a single dominant moisture source to the global atmosphere, which is the tropical western Pacific. It should also be pointed out that a local meteoric water line can have a different slope and intercept from the GMWL, owing to differences in humidity and evaporation intensity from place to place. Hydrogen and oxygen isotopes in water thus serve as an excellent tracer of the hydrological cycle both globally and locally.
Water isotopes and climate
Based on the processes that fractionate isotopes in the hydrological cycle, the isotopic composition of meteoric water can be used to infer related environmental variables such as air temperature, precipitation amount, past elevations and lake levels, as well as to trace moisture sources. These studies form the field of isotope hydrology. Examples of isotope hydrology applications include the following:
Temperature reconstruction
The isotopic composition of precipitation can be used to infer changes in air temperature based on the Rayleigh process. Lower temperature corresponds to lower saturation vapor pressure, which leads to more condensation and drives the residual vapor toward isotopic depletion. The resulting precipitation thus has more negative δD and δ18O values at lower temperature. This precipitation isotope thermometer is more sensitive at lower temperatures and is widely applied at high latitudes. For example, δD and δ18O were found to have temperature sensitivities of 8‰/°C and 0.9‰/°C in Antarctic snow, and sensitivities of 5.6‰/°C and 0.69‰/°C across Arctic sites. δD and δ18O of ice cores in Greenland, Antarctica and alpine glaciers are important archives of temperature change in the geological past.
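As a concrete illustration of the Rayleigh relations above, the short sketch below computes the δD of the remaining vapor and of the instantaneous precipitation as the fraction of vapor f decreases. The starting vapor δD and the fractionation factor α are illustrative assumptions (α roughly appropriate for liquid-vapor hydrogen isotope equilibrium near surface temperatures), not values taken from any particular study.

```python
# Minimal sketch of Rayleigh distillation for hydrogen isotopes in water vapor.
# Assumed inputs: initial vapor δD of -80‰ and α(liquid-vapor) = 1.08, an
# illustrative value for near-surface temperatures.

ALPHA = 1.08           # liquid-vapor equilibrium fractionation factor (assumed)
DELTA_D_INITIAL = -80  # δD of the initial vapor in ‰ (assumed)

def rayleigh_vapor(delta0_permil, f, alpha=ALPHA):
    """δD of the remaining vapor: R_r = R_0 * f**(alpha - 1)."""
    r0 = delta0_permil / 1000.0 + 1.0   # ratio relative to the standard
    return (r0 * f ** (alpha - 1.0) - 1.0) * 1000.0

def rayleigh_precip(delta0_permil, f, alpha=ALPHA):
    """δD of the instantaneous precipitation: R_p = alpha * R_r."""
    r_vapor = rayleigh_vapor(delta0_permil, f, alpha) / 1000.0 + 1.0
    return (alpha * r_vapor - 1.0) * 1000.0

for f in (1.0, 0.75, 0.5, 0.25, 0.1):
    print(f"f={f:4.2f}  vapor δD={rayleigh_vapor(DELTA_D_INITIAL, f):7.1f}‰"
          f"  precip δD={rayleigh_precip(DELTA_D_INITIAL, f):7.1f}‰")
```

Both the vapor and the precipitation become progressively more depleted as f shrinks, reproducing the poleward, inland and altitudinal depletion patterns described above.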
Precipitation amount effect
In contrast to temperature control at high latitudes, the isotopic composition of precipitation in the tropics is mainly influenced by rainfall amount (a negative correlation). This "amount effect" is also observed for summer precipitation in the subtropics. Willi Dansgaard, who first proposed the term "amount effect", suggested several possible reasons for the correlation: (1) as cooling and condensation progress, the rainfall isotopic composition reflects an integrated isotopic depletion by the Rayleigh process; (2) a small amount of rainfall is more likely to be influenced by evaporation and exchange with surrounding moisture, which tend to make it more isotopically enriched. At low latitudes, the amount effect for δ18O is around −1.6‰ per 100 mm increase in precipitation at island stations and −2.0‰ per 100 mm at continental stations. It was also noted that the amount effect is most pronounced when comparing the isotopic composition of monthly precipitation at different places in the tropics. The amount effect is also expected for hydrogen isotopes, but there are not as many calibration studies. Across southeast Asia, the δD sensitivity to monthly precipitation amount varies between −15 and −25‰/100 mm, depending on location. In temperate regions, the isotopic composition of precipitation is dominated by rainfall amount in summer but is more strongly controlled by temperature in winter. The amount effect may also be complicated by changes in regional moisture sources. Reconstructions of rainfall amount in the tropics in the geological past are mostly based on the δ18O of speleothems or the δD of biogenic lipids, both of which are regarded as proxies for the isotopic composition of precipitation.
Applications
Isotope hydrology
Hydrogen and oxygen isotopes also work as tracers for the water budget in terrestrial reservoirs, including lakes, rivers, groundwater and soil water. For a lake, both the amount of water in the lake and its isotopic composition are determined by a balance between inputs (precipitation, stream and groundwater inflow) and outputs (evaporation, stream and groundwater outflow). The isotopic composition of lake water can often be used to track evaporation, which causes isotope enrichment in the lake water, as well as a δD-δ18O slope that is shallower than the meteoric water line. The isotopic composition of river water is highly variable and has complicated sources over different timescales, but it can generally be treated as a two-endmember mixing problem between a base-flow endmember (mainly groundwater recharge) and an overland-flow endmember (mainly storm events); a minimal sketch of this mixing calculation follows at the end of this paragraph. The isotope data suggest that the long-term integrated base-flow endmember is more important in most rivers, even during peak flows in summer. Systematic river isotope data have been collected across the world by the Global Network of Isotopes in Rivers (GNIR). The isotopic composition of groundwater can also be used to trace its sources and flow paths. An example is a groundwater isotope mapping study in Sacramento, California, which showed lateral flow of river water with a distinct isotope composition into groundwater that had developed a significant water table depression due to pumping for human use. The same study also showed an isotopic signal of agricultural water being recharged into the giant alluvial aquifer in California's Central Valley. Finally, the isotopic composition of soil water is important for the study of plants.
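The two-endmember mixing mentioned above reduces to a simple mass balance: the stream δD is a weighted average of the base-flow and overland-flow endmembers. The sketch below solves for the base-flow fraction; all δD values are hypothetical and only illustrate the arithmetic.

```python
# Minimal sketch: two-endmember hydrograph separation using δD.
# δ_stream = f_base * δ_base + (1 - f_base) * δ_overland, solved for f_base.
# All δD values below are hypothetical illustrations, not field data.

def baseflow_fraction(delta_stream, delta_baseflow, delta_overland):
    """Fraction of streamflow supplied by the base-flow endmember."""
    if delta_baseflow == delta_overland:
        raise ValueError("Endmembers must be isotopically distinct.")
    return (delta_stream - delta_overland) / (delta_baseflow - delta_overland)

# Hypothetical example: storm water at -30‰, groundwater baseflow at -70‰,
# stream sampled during an event at -58‰.
f_base = baseflow_fraction(delta_stream=-58.0, delta_baseflow=-70.0,
                           delta_overland=-30.0)
print(f"Base-flow fraction: {f_base:.2f}")   # -> 0.70
```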
Below the water table, the soil has a relatively constant source of water with a certain isotopic composition. Above the water table, the isotopic composition of soil water is enriched by evaporation, up to a maximum at the surface. The vertical profile of the isotopic composition of soil water is maintained by the diffusion of both liquid water and vapor. A comparison of soil water and plant xylem water δD can be used to infer the depth at which plant roots get water from the soil.
Paleo-reconstruction
Ice core records
The isotopic composition of ice cores from continental ice sheets and alpine glaciers has been developed as a temperature proxy since the 1950s. Samuel Epstein was one of the first to demonstrate the applicability of this proxy by measuring oxygen isotopes in Antarctic snow, and he also pointed out complications in the stable isotope-temperature correlation caused by the history of the air masses from which the snow formed. Ice cores in Greenland and Antarctica can be thousands of meters thick and record the snow isotopic composition of the past several glacial-interglacial cycles. Ice cores can be dated by layer counting near the top and by ice flow modeling at depth, with additional age constraints from volcanic ash. Cores from Greenland and Antarctica can be aligned in age at high resolution by comparing concentrations of globally well-mixed trace gases (e.g. CH4) in the air bubbles trapped in the cores. Some of the first ice core records from Greenland and Antarctica with age estimates cover the last 100,000 years and showed a depletion in δD and δ18O during the last ice age. The ice core record has since been extended to the last 800,000 years in Antarctica and at least 250,000 years in Greenland. One of the best δD-based ice core temperature records is from the Vostok ice core in Antarctica, which goes back 420,000 years. The δD-temperature conversion (for the temperature of the inversion layer where snow forms) in east Antarctica, based on the modern spatial gradient of δD (9‰/°C), is ΔTI = (ΔδDice − 8 Δδ18Osw)/9, which takes into account variations in seawater isotopic composition caused by global ice volume changes (a worked example of this conversion follows at the end of this paragraph). Many local effects can influence ice δD in addition to temperature, including moisture origin and transport pathways, evaporation conditions and precipitation seasonality, all of which can be accounted for in more complicated models. Nevertheless, the Vostok ice core record shows some very important results: (1) a consistent δD depletion of ~70‰ during the last four glacial periods compared to interglacial times, corresponding to a cooling of 8 °C in Antarctica; (2) a consistent drop of atmospheric CO2 concentration by 100 ppmv and of CH4 by ~300 ppbv during glacial times relative to interglacials, suggesting a role for greenhouse gases in regulating global climate; (3) Antarctic air temperature and greenhouse gas concentration changes precede global ice volume and Greenland air temperature changes during glacial terminations, and greenhouse gases may be an amplifier of insolation forcing during glacial-interglacial cycles. Greenland ice core isotope records, in addition to showing glacial-interglacial cycles, also show millennial-scale climate oscillations that may reflect reorganizations in ocean circulation caused by ice melt discharges. Ice core records have also been generated from alpine glaciers on different continents. A record from the Andes Mountains in Peru shows a temperature decrease of 5-6 °C in the tropics during the last ice age.
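The Vostok conversion above is straightforward to apply. The sketch below plugs illustrative glacial-interglacial values into ΔTI = (ΔδDice − 8 Δδ18Osw)/9; the ΔδDice value echoes the ~70‰ glacial depletion cited above, while the seawater Δδ18Osw term is an assumed ice-volume correction, not a quoted measurement.

```python
# Minimal sketch: converting an ice-core δD change to a temperature change
# using the east Antarctic relation ΔT_I = (ΔδD_ice − 8 Δδ18O_sw) / 9.

def vostok_delta_t(d_deltaD_ice, d_delta18O_sw):
    """Inversion-layer temperature change (°C) from ice δD and seawater δ18O changes."""
    return (d_deltaD_ice - 8.0 * d_delta18O_sw) / 9.0

# Illustrative glacial-to-interglacial values: ~ -70‰ in ice δD and an assumed
# +1‰ seawater δ18O shift from ice-volume change (ice sheets preferentially
# store the light isotopes, enriching glacial seawater).
print(round(vostok_delta_t(d_deltaD_ice=-70.0, d_delta18O_sw=1.0), 1))  # ≈ -8.7 °C
```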
A record from the Tibetan plateau shows a similar isotope shift and cooling during the last ice age. Other existing alpine glacial isotope records include Mount Kilimanjaro in Tanzania, Mount Altai and the West Belukha Plateau in Russia, Mount Logan in Canada, the Fremont Glacier in Wyoming, USA, and the Illimani ice core in Bolivia, most of which cover an interval within the Holocene epoch.
Biomolecules
The isotopic compositions of biomolecules preserved in the sedimentary record can be used as proxies for paleoenvironmental reconstruction. Since water is the primary hydrogen source for photoautotrophs, the hydrogen isotope composition of their biomass can be related to the composition of their growth water and thereby used to gain insight into some properties of ancient environments. Studying hydrogen isotopes can be very valuable, as hydrogen is more directly related to climate than other relevant stable isotope systems. However, hydrogen atoms bonded to oxygen, nitrogen or sulfur atoms are exchangeable with environmental hydrogen, which makes this system less straightforward (see the earlier discussion of hydrogen exchange). To study the hydrogen isotope composition of biomolecules, it is preferable to use compounds in which the hydrogen is largely bound to carbon and therefore not exchangeable on experimental timescales. By this criterion, lipids are a much better subject for hydrogen isotope studies than sugars or amino acids. The net fractionation between source water and lipids is denoted εl/w and can be represented as
εl/w = ((δDl + 1000)/(δDw + 1000) − 1) × 1000
where w refers to the water and l refers to the lipids, with the δD values in ‰ (a short worked example follows at the end of this paragraph). While the δD of source water is the biggest influence on the δD of lipids, discrepancies between fractionation factor values obtained from the slope and from the intercept of the regression suggest that the relationship is more complex than a two-pool fractionation. In other words, multiple fractionation steps must be taken into account to understand the isotopic composition of lipids.
Cellulose
The carbon-bonded hydrogen isotopic composition of cellulose, as inherited from leaf water, has the potential to preserve the original meteoric water signal. This was first demonstrated in the 1970s. In a systematic survey across North America, tree cellulose δD was found to have a temperature sensitivity of 5.8‰/°C, similar to the precipitation δD sensitivity of 5.6‰/°C. This spatial correlation may be complicated by local effects of soil evaporation and leaf transpiration, and the spatial gradient may not be representative of temporal changes in tree ring cellulose at a single place. The mechanism that transfers the δD signal of meteoric water into cellulose is not completely understood, but it at least involves leaf water transpiration, synthesis of carbohydrates, synthesis of cellulose from photosynthetic sugars, and exchange of sugars with xylem water. Modeling studies show that the observed tree ring cellulose δD can be produced when 36% of the hydrogen in sugars exchanges with xylem water, and effects such as humidity and rainfall seasonality may further complicate the cellulose δD proxy. Despite these complications, tree ring δD has been used for paleoclimate reconstructions of the past few millennia. For example, a tree ring cellulose δD record from pine trees in the White Mountains, California shows a 50‰ depletion from 6800 years ago to the present.
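To make the εl/w definition above concrete, the sketch below computes the net fractionation for a hypothetical lipid-water pair; the δD values are purely illustrative.

```python
# Minimal sketch: net lipid/water fractionation ε_l/w in per mil,
# ε_l/w = ((δD_lipid + 1000) / (δD_water + 1000) - 1) * 1000.
# The δD values below are hypothetical, chosen only to illustrate the arithmetic.

def epsilon_lw(delta_lipid_permil, delta_water_permil):
    """Net fractionation between a lipid and its source water, in ‰."""
    return ((delta_lipid_permil + 1000.0) /
            (delta_water_permil + 1000.0) - 1.0) * 1000.0

# Hypothetical leaf wax at -160‰ grown from meteoric water at -40‰.
print(round(epsilon_lw(-160.0, -40.0), 1))  # ≈ -125.0‰
```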
The cooling trend since the mid-Holocene thermal maximum is consistent with ice core and pollen records, but the corresponding magnitude of cooling is elusive because of complicating local effects such as humidity and soil water composition. The meaning of isotopes in cellulose and its applications remain an area of active study.
Plant leaf waxes
Terrestrial plants make leaf waxes to coat the surfaces of their leaves as an adaptation to minimize water loss. These waxes are largely straight-chain n-alkyl lipids. They are insoluble, non-volatile, chemically inert and resistant to degradation, which makes them easily preserved in the sedimentary record and therefore good targets as biomarkers. The main water source for terrestrial plants is soil water, which largely resembles the hydrogen isotope composition of rain water but varies between environments, with enrichment by precipitation, depletion by evaporation, and exchange with atmospheric water vapor. There can be a significant offset between the δD value of source water and the δD value of leaf water at the site of lipid biosynthesis. No fractionation is associated with water uptake by roots, a process usually driven by capillary tension, with the one exception of xerophytes that burn ATP to pump water in extremely arid environments (with a roughly 10‰ depletion). However, leaf water can be substantially enriched relative to soil water due to transpiration, an evaporative process that is influenced by temperature, humidity and the composition of the surrounding water vapor. The leaf water hydrogen isotope composition can be described with a modified Craig-Gordon model,
ΔDe = εeq + εk + (ΔDv − εk)(ea/ei)
where ΔDe is the steady-state enrichment of leaf water, εeq is the temperature-dependent equilibrium fractionation between liquid water and vapor, εk is the kinetic isotope effect from diffusion between the leaf internal air space and the atmosphere, ΔDv is the leaf/air disequilibrium, ea is the atmospheric vapor pressure, and ei is the internal leaf vapor pressure. The Péclet effect, which describes the opposing forces of advection and diffusion, can be added to the model as
℘ = EL/(CD)
where E is the transpiration rate, L is the length scale of transport, C is the concentration of water, and D is the diffusion coefficient (a numerical sketch of these relations follows at the end of this paragraph). While the role of rain water δD as the fundamental control on the final δD of lipids is well documented, the importance of fractionation effects from rain water to soil water and leaf water on εl/w is appreciated but remains poorly understood. Organic biomolecules are generally depleted relative to the δD of leaf water. However, differences between organisms, biosynthetic pathways, and the biological roles of different molecules can lead to huge variability in fractionation; the diversity of lipid biomarkers spans a 600‰ range of δD values. Lipid biosynthesis is biochemically complex, involving multiple enzyme-dependent steps that can lead to isotope fractionation. There are three major pathways of lipid biosynthesis, known as the mevalonate pathway, the acetogenic pathway, and the 1-deoxy-D-xylulose-5-phosphate/2-methylerythritol-4-phosphate pathway. The acetogenic pathway is responsible for the production of n-alkyl lipids such as leaf waxes and is associated with a smaller δD depletion relative to source water than the other two lipid biosynthesis pathways.
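The sketch below evaluates the Craig-Gordon expression and the Péclet number given above with illustrative numbers; every input value is an assumption chosen for demonstration, and the per mil bookkeeping is kept linear for simplicity.

```python
# Minimal sketch: steady-state leaf water enrichment (modified Craig-Gordon)
# and the Péclet number. All numerical inputs are illustrative assumptions.

def leaf_water_enrichment(eps_eq, eps_k, delta_v_diseq, e_a, e_i):
    """ΔD_e = ε_eq + ε_k + (ΔD_v − ε_k) · (e_a / e_i), all ε and Δ in ‰."""
    return eps_eq + eps_k + (delta_v_diseq - eps_k) * (e_a / e_i)

def peclet_number(E, L, C, D):
    """℘ = E·L / (C·D): transpirational advection versus back-diffusion."""
    return E * L / (C * D)

# Illustrative values: ε_eq ≈ 80‰, ε_k ≈ 25‰, leaf/air disequilibrium -60‰,
# vapor pressures e_a = 1.2 kPa and e_i = 2.0 kPa.
print(round(leaf_water_enrichment(80.0, 25.0, -60.0, 1.2, 2.0), 1))  # ΔD_e in ‰

# Illustrative Péclet number: E in mol m-2 s-1, L in m, C in mol m-3, D in m2 s-1.
print(round(peclet_number(E=2e-3, L=0.03, C=5.5e4, D=2.3e-9), 2))
```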
While leaf water is the main source of hydrogen in leaf biomolecules, relatively depleted hydrogen from acetate or NADPH is often added during biosynthesis and contributes to the hydrogen composition of the final molecule. Secondary hydrogen exchange reactions, meaning hydrogenation and dehydrogenation reactions outside of the primary biosynthetic pathway, also contribute substantially to the variability of lipid hydrogen isotope composition. It is important to note that biological differences in fractionation stem not only from biochemical differences between molecules, but also from physiological differences between organisms. For example, the δD values of multiple leaf wax molecules are enriched in shrubs (median ~ −90‰) relative to trees (median ~ −135‰), which themselves are enriched relative to both C3 grasses (median ~ −160‰) and C4 grasses (median ~ −140‰). Substantial variation in δD values has also been documented between individual species. Other physiological factors that contribute to variable leaf wax δD values include the seasonal timing of leaf development, response to external stress or environmental variability, and the presence or absence of stomata. It can be difficult to distinguish between physiological factors and environmental factors when many physiological adaptations are directly related to environment. Several environmental factors have been shown to contribute to leaf wax δD variability, in addition to environmental effects on the δD of source water. Humidity is known to impact lipid δD values at moderate humidity levels, but not at particularly high (>80%) or low (<40%) humidity levels, and a broad trend of enriched δD values, meaning smaller εl/w, is seen in arid regions. Temperature and sunlight intensity, both correlated with geographic latitude, have strong effects on the rates of metabolism and transpiration, and by extension on εl/w. Additionally, the average chain length of leaf wax molecules varies with geographic latitude, and εl/w has been shown to increase with increasing chain length. When using biomarkers as a proxy for reconstructing ancient environments, it is important to be aware of the biases inherent in the sedimentary record. Leaf matter incorporated into sediment is largely deposited during the autumn, so seasonal variations in leaf waxes must be considered accordingly. Furthermore, sediments average leaf waxes over many different plants in both space and time, making it difficult to calibrate the biological constraints on εl/w. Finally, the preservation of biomolecules in the geologic record does not faithfully represent whole ecosystems, and there is always the threat of hydrogen exchange, particularly if the sediments are subjected to high temperatures. The hydrogen isotope composition of leaf waxes can be summarized as the δD of rain water modified by three main fractionation steps (evaporation from soil water, transpiration from leaf water, and lipid biosynthesis), which can be combined and measured as the net fractionation εl/w. With the application of ever-improving measurement techniques for single molecules, and correlation with other independent proxies in the geological record that can help constrain some variables, investigating the hydrogen isotope composition of leaf waxes can be extremely productive.
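When the individual fractionation steps just listed are known or assumed, they combine multiplicatively rather than by simple addition. The sketch below shows this standard isotope bookkeeping with hypothetical values for the soil-evaporation, transpiration and biosynthetic steps; none of the numbers are from a specific study.

```python
# Minimal sketch: combining sequential fractionation steps into a net ε_l/w.
# Fractionations combine multiplicatively: (1 + ε_net) = Π (1 + ε_i).
# The three step values below are hypothetical illustrations.

def combine_fractionations(epsilons_permil):
    """Net fractionation (‰) of several sequential steps given in ‰."""
    product = 1.0
    for eps in epsilons_permil:
        product *= 1.0 + eps / 1000.0
    return (product - 1.0) * 1000.0

steps = {
    "soil evaporation": 10.0,      # enrichment of soil water (assumed)
    "leaf transpiration": 40.0,    # enrichment of leaf water (assumed)
    "lipid biosynthesis": -160.0,  # depletion during biosynthesis (assumed)
}
net = combine_fractionations(steps.values())
print(round(net, 1))  # net ε_l/w in ‰, ≈ -118
```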
Leaf wax δD data have been successfully applied to improve our understanding of climate-driven changes in terrestrial hydrology, by demonstrating that ocean circulation and surface temperature have a significant effect on continental precipitation. Leaf wax δD values have also been used as records for paleoaltimetry, to reconstruct elevation gradients in ancient mountain ranges based on the effect of altitude on rain water δD.
Alkenones
Another group of molecules frequently used in paleoreconstruction is the alkenones, long-chain, highly unsaturated lipids produced exclusively by coccolithophores. Coccolithophores are marine haptophyte algae and include the globally iconic species Emiliania huxleyi, one of the main CaCO3 producers in the ocean. The δD values of alkenones are highly correlated with the δD values of sea water and can therefore be used to reconstruct paleoenvironmental properties that constrain the isotopic composition of sea water. The most notable reconstruction to which alkenone δD values are applied is the salinity of ancient oceans. Both the δD values of sea water and the fractionations associated with haptophyte biochemistry (εbio) are fairly well understood, so alkenones can readily be used to observe the secondary effect of salinity on δD. There is a well-established positive linear correlation between salinity and εl/w, on the order of a ~3‰ change in fractionation per salinity unit. Hypothesized mechanisms for this effect include enrichment of D in intracellular water due to reduced exchange with extracellular water at higher salinity, removal of H from intracellular water due to increased production of solutes to maintain osmotic pressure at higher salinity, and lower haptophyte growth rates at higher salinity. Alkenone δD values have been used successfully to reconstruct past salinity changes in the Mediterranean Sea, Black Sea, Panama Basin and Mozambique Channel. As an extension of salinity, these data have also been used to draw further conclusions about ancient environments, such as ancient freshwater flooding events and the evolution of plankton in response to environmental changes.
Stable isotope paleoaltimetry
The possibility of using water isotope depletion with elevation to reconstruct paleoaltimetry was demonstrated as early as the late 1960s, when Caltech geochemist Samuel Epstein tried to collect rainwater at different elevations in a single storm. The δ18O and δD lapse rates vary within −1 to −5‰/km and −10 to −40‰/km respectively, but they vary with location and season and are not exactly linear with altitude. One of the first studies in stable isotope paleoaltimetry demonstrated a meteoric water δD signature of −90 to −139‰ in fluid inclusions in quartz and adularia in an epithermal gold-silver deposit in Nevada, and suggested the applicability of stable isotopes to the reconstruction of ancient topography in the Great Basin. The hydrogen and oxygen isotopes of hydrous silicate minerals have since been used to reconstruct topographic histories in mountain ranges across the world, including the North American Cordillera, the Rocky Mountains, the Himalayas, the European Alps, and the Southern Alps in New Zealand. Lab experiments with clay minerals have shown that their hydrogen and oxygen isotope compositions are relatively resistant to alteration at moderate temperature (<100 °C) and can preserve the original meteoric water signal.
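Given a δD lapse rate, a first-order paleoelevation estimate reduces to dividing an observed isotopic shift by that rate. The sketch below illustrates this, using a mid-range lapse rate from the interval quoted above and a hypothetical δD difference between a proxy record and a low-elevation reference; real studies must also consider climate change, moisture source shifts and proxy-specific fractionations.

```python
# Minimal sketch: first-order paleoelevation estimate from a δD shift.
# elevation difference ≈ ΔδD / lapse_rate. Inputs below are hypothetical.

def paleoelevation_km(delta_high_permil, delta_low_permil, lapse_rate_permil_per_km):
    """Elevation difference (km) implied by two meteoric water δD values."""
    return (delta_high_permil - delta_low_permil) / lapse_rate_permil_per_km

# Hypothetical proxy-derived meteoric water δD of -140‰ at the paleo-high site,
# -80‰ at a nearby low-elevation reference, and a -20‰/km lapse rate
# (within the -10 to -40‰/km range quoted above).
print(round(paleoelevation_km(-140.0, -80.0, -20.0), 1))  # ≈ 3.0 km
```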
One important effect of mountain ranges on rainfall stable isotopes is the rain shadow effect, in which precipitation on the leeward side is isotopically depleted compared to the windward side. A change in the difference in isotopic composition of precipitation on the two sides of a mountain range can be used to infer changes in the magnitude of the rain shadow effect. In one such study, an isotope enrichment was observed in smectite on the east side of the Sierra Nevada in California from the mid-Miocene to the late Pliocene, suggesting a decrease in elevation during this period. Another study found δD values around −140‰ in muscovite in the North American Cordillera during the early Eocene, which would suggest an elevation 1000 m higher than today at that time. In addition to hydrous minerals, hydrogen isotopes in biomarkers such as leaf waxes have also been developed for paleoaltimetry studies. The δD lapse rate in leaf waxes (−21‰/km) falls in the range of meteoric water observations. As an example, leaf wax δD data have been used to confirm hydrous mineral paleoaltimetry indicating high elevation of the Sierra Nevada during the Eocene.
Fossil fuels
The hydrogen isotope composition of oil, gas and coal is an important geochemical tool for studying their formation, storage, migration and many other processes. The hydrogen isotopic signal of fossil fuels results from inheritance from source material and water as well as from fractionations during hydrocarbon generation and subsequent alteration by processes such as isotopic exchange or biodegradation. When interpreting hydrogen isotopic data of sedimentary organic matter, one must take into consideration all the processes that might have an isotope effect. Almost all organic hydrogen is exchangeable to some extent. Isotopic exchange of organic hydrogen will reorder the distribution of deuterium and often incorporate external hydrogen; generally, more mature materials are more substantially exchanged. With effective exchange, aliphatic hydrogen can ultimately reach isotopic equilibrium. The equilibrium fractionation factor varies among different hydrogen sites: for example, aliphatic hydrogen isotope fractionation depends on the carbon atom to which the hydrogen atom is bonded. To first order, alkyl hydrogen isotope composition follows the trend δD(primary carbon) < δD(secondary carbon) < δD(tertiary carbon). The fractionation factors between carbon sites also decrease with increasing temperature, which can potentially be used as a thermal-history indicator. The fractionation between a whole molecule and water can be estimated by averaging over all hydrogen positions, and this leads to a relatively small variation in the equilibrium fractionation between different groups of hydrocarbons and water: a theoretical prediction estimated it to be −80‰ to −95‰ for steranes, −90‰ to −95‰ for hopanes, and −70‰ to −95‰ for typical cycloparaffins between 0 °C and 100 °C. At the temperatures of the oil window and gas window, the equilibrium fractionation between different groups of organic molecules is relatively small compared with the large primary signals. The study of hydrogen isotopes of fossil fuels has been applied as proxies and tools in the following aspects:
Reconstruction of the paleoenvironments of the sources. Because the D content of terrestrial water is highly sensitive to the hydrological cycle, organic δD can reflect the environment of source formation.
To first order, D/H ratios of coals and of n-alkanes from oils have been demonstrated to correlate with paleolatitude.
Source correlation. Marine and lacustrine environments are characterized by distinctly different δD values, and many studies have tried to relate measured δD to source types. For methane, the D concentration and clumped isotopes are particularly diagnostic of sources.
Possible maturity indicators. For example, isoprenoids synthesized by plants are strongly depleted in D (see the "Observed variations in isotopic abundance" section), typically by ~100‰ relative to n-alkyl lipids. This gap tends to decrease as a rock matures because of the higher D/H exchange rates of isoprenoids. The correlation between the δD difference of pristane, phytane and n-alkanes and other maturity indicators has been established across a wide maturity range. Another possible maturity indicator, based on the "isotope slope" of δD versus n-alkane chain length, was proposed by Tang et al.
Quantitative apportionment. Since alkanes are the main components of oil and gas, the isotopic data of n-alkanes have been used to study their migration and mixing. The advantage of hydrogen isotopes over carbon is higher resolution because of the larger fractionation. Studying the clumped isotopes of methane provides a new dimension of mixing constraints: the mixing line in clumped isotope notation space is a curve rather than a straight line.
Fingerprinting pollutants and oil spills.
Kerogens and coals
The first stage that sedimentary organic matter (SOM) experiences after deposition is diagenesis. During diagenesis, biological decomposition can alter the D/H ratio of organics: several experimental studies have shown that some biodegraded materials become slightly enriched in D (by less than 50‰). Most organic matter becomes kerogen by the end of diagenesis. Generally, the δD of kerogen spans a wide range, and many factors contribute to the kerogen values we observe in geological records, including:
Source water hydrogen isotope patterns: for example, lake systems are more sensitive to hydrological cycles than marine environments.
Differential fractionation among organisms and metabolic pathways: differences in organic composition can also be reflected in the primary signal.
Isotopic exchange, H loss and H addition: these can mix water-derived D into the primary signal.
Generation of bitumen, oil and gas: there is a fractionation between the products and the kerogen.
Research on Australian basins showed that the δD of lacustrine algal-sourced kerogen with terrestrial contributions varies from −105‰ to −200‰, while the δD of kerogen from near-coastal depositional environments has a narrower range of −75‰ to −120‰. The smaller span in D/H ratios of coastal kerogens is thought to reflect the relatively stable regional climate. Pedentchouk and his colleagues reported δD values of −70‰ to −120‰ in immature to low-maturity kerogen from early Cretaceous lacustrine sediments in West Africa. Coals are from type III kerogen, mostly derived from terrestrial plants, and so should carry a primary D/H signal sensitive to local meteoric water. Reddings et al. analyzed coals from various origins and found them scattered across the range of −90‰ to −170‰. Rigby et al. found that D contents decrease from −70‰ to −100‰ with increasing maturity in coal from the Bass Basin and attributed this to later exchange with low-D water. Smith et al. studied the H isotopes of coal samples from Antarctica and Australia and found a strong negative correlation between δD and inferred paleolatitude.
For coal samples originating from near-equatorial regions, δD is around −50‰, while for those originating from polar regions, δD is around −150‰. This δD trend along latitude is consistent with the meteoric water trend and is thus evidence that coals can preserve much of the original signal. There are two types of approach for studying the alteration of D/H ratios of kerogen during catagenesis: 1) laboratory incubation of organic matter, which enables mechanistic study with controlled experiments; 2) measurement of natural samples, which provides information on combined effects over geological timescales. The complex composition and chemistry of kerogen complicate the results. Nevertheless, most research on the hydrogen isotopes of kerogen shows D enrichment with increasing maturity. Type II kerogen (marine derived) from the New Albany Shale is reported to have δD rise from −120‰ to −70‰ as vitrinite reflectance increases from 0.3% to 1.5%. Two main mechanisms have been proposed for this enrichment: kinetic fractionation during hydrocarbon generation and isotopic exchange with surrounding water. Anhydrous incubation experiments have shown that the products are generally more depleted in D than their precursors, causing enrichment in the residual kerogen. Schimmelmann et al. studied the relationship between terrestrially derived oil and its source rock kerogens from four Australian basins and found that, on average, the oil is depleted relative to the corresponding kerogen by 23‰. Hydrous incubation experiments suggest that 36–79% of bulk organic hydrogen may come from water at moderate maturity. While still under debate, it appears likely that incorporation of water hydrogen is the more dominant process for kerogen D-enrichment during catagenesis. In summary, the D content of kerogen and coals is complicated and hard to resolve due to the complex chemistry. Nevertheless, studies have found a possible correlation between coal δD and paleolatitude.
Natural gas
Commonly, the hydrogen isotope composition of natural gas from the same well shows the trend δDmethane < δDethane < δDpropane < δDC4+. This is because most natural gas is thought to be generated by stepwise thermal cracking, which is mostly irreversible and thus governed by normal kinetic isotope effects that favor the light isotopes. The same trend, known as "the normal order", holds for the carbon isotopes of natural gas. For example, Angola gas is reported to have a methane δD range of −190‰ to −140‰, an ethane δD of −146‰ to −107‰, a propane δD of −116‰ to −90‰, and a butane δD of −118‰ to −85‰. However, some recent studies show that opposite patterns can also exist, i.e. δDmethane > δDethane > δDpropane, a phenomenon often referred to as 'isotopic reversal' or 'isotopic rollover'. The isotopic order can also be partially reversed, as in δDmethane > δDethane < δDpropane or δDmethane < δDethane > δDpropane (a minimal check of these orderings is sketched below). Burruss et al. found that in the deepest samples of the northern Appalachian basin the hydrogen isotopic order of methane and ethane is reversed. Liu et al. also found partial reversal in oil-related gas from the Tarim Basin. The mechanism causing this reversal is still unknown; possible explanations include mixing between gases of different maturities and sources, oxidation of methane, and others. Jon Telling et al. synthesized isotopically reversed (in both C and H) low-molecular-weight alkanes using gas-phase radical recombination reactions in electrical discharge experiments, providing another possible mechanism. Methane is a major component of natural gas.
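The normal-order versus reversal terminology above is just a statement about how δD changes along the alkane series, which is easy to check programmatically. The sketch below classifies sets of hypothetical wellhead δD values; the numbers are illustrative and not from any published dataset.

```python
# Minimal sketch: classify the δD ordering of a natural gas series
# (methane, ethane, propane, ...) as "normal order", "fully reversed" or
# "partially reversed". Input δD values are hypothetical.

def classify_order(deltas):
    """deltas: δD values ordered methane, ethane, propane, ... (in ‰)."""
    steps = [b - a for a, b in zip(deltas, deltas[1:])]
    if all(s > 0 for s in steps):
        return "normal order (δD increases with carbon number)"
    if all(s < 0 for s in steps):
        return "fully reversed"
    return "partially reversed"

# Hypothetical examples: a conventional gas, a reversed deep gas, a mixed case.
print(classify_order([-180.0, -140.0, -110.0, -95.0]))  # normal order
print(classify_order([-120.0, -140.0, -155.0]))         # fully reversed
print(classify_order([-130.0, -150.0, -125.0]))         # partially reversed
```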
Geosphere methane is intriguing because of the large input from microbial methanogenesis. This process exhibits a strong isotope effect, resulting in greater D-depletion in methane relative to other hydrocarbons: δD ranges from −275‰ to −100‰ in thermogenic methane and from −400‰ to −150‰ in microbial methane. Also, methane formed by marine methanogens is generally enriched in D relative to methane from freshwater methanogens. The δD of methane is often plotted together with other geochemical tools (such as δ13C and gas wetness) to categorize and identify natural gas. A δD-δ13C diagram (sometimes referred to as a CD diagram, Whiticar diagram, or Schoell diagram) is widely used to place methane in one of three distinct groups: thermogenic methane, which is higher in both δ13C and δD; marine microbial methane, which is more depleted in 13C; and freshwater microbial methane, which is more depleted in D. Hydrogenotrophic methanogenesis produces less D-depleted methane than acetoclastic methanogenesis. The location where the organism lives and the substrate concentration also affect the isotopic composition: rumen methanogenesis, which occurs in a more closed system and at higher partial pressures of hydrogen, exhibits a greater fractionation (−300 to −400‰) than wetland methanogenesis (−250 to −170‰). Recent advances in analytical chemistry have enabled high-precision measurements of multiply substituted ('clumped') isotopologues such as 13CH3D, which has emerged as a novel tool for studying methane formation. This proxy is based on the abundance of clumped isotopologues of methane, which at thermodynamic equilibrium should be enriched relative to the stochastic distribution, because the zero-point energy reduction for heavy-heavy isotope bonding is more than twice that for heavy-light isotope bonding. The extent of enrichment decreases with increasing temperature, as higher entropy tends to randomize the isotope distribution. Stolper et al. established this temperature calibration using laboratory-equilibrated methane and field methane of known formation temperature, and applied it to several gas reservoirs to study natural gas formation and mixing. Wang et al. also reported strong non-equilibrium isotope effects in methane clumped isotopes from lab-cultured methanogens and field samples. These methane samples have relatively low abundances of clumped isotopologues, sometimes even lower than the stochastic distribution, indicating that there are irreversible steps in the enzymatic reactions of methanogenesis that fractionate against clumped isotopologues and create the depleted signal. Isotope clumping in methane has proven to be a robust proxy, and scientists are now moving towards higher-order alkane molecules such as ethane for further work.
Oil
Oil is generally a product of the thermal breakdown of type I and type II kerogen during catagenesis. The deuterium content should reflect the source kerogen signal, generation fractionation, isotopic exchange and other maturation effects. Thermal maturation at the oil window can erase much of the primary hydrogen isotope signal. The formation of oil involves breaking C-C and C-H bonds, resulting in depletion of 13C and D in the products and enrichment in the residual reactants due to kinetic isotope effects. Yongchun Tang and his colleagues modeled this process based on laboratory-calibrated kinetics data and found that the frequency factor ratio for D/H is 1.07. Moreover, oil is also affected by isotope fractionation during phase changes.
However, the gas-liquid fractionation behavior of oil differs from that of water, as the vapor phase of oil is enriched in D; this depletes the residual oil as it evaporates. Biological degradation of oil is also expected to fractionate hydrogen isotopes, as enzymatic breaking of C-H bonds has a normal kinetic isotope effect. Several degradation experiments show that this fractionation is generally mild, ranging from −11‰ to −79‰, and the process should also enrich partially degraded oil. Finally, oil stored in a reservoir has often migrated through the subsurface from another source region (a process also known as "geochromatography"), interacting with water along the way. No data have been published to confirm the fractionation associated with migration, but theoretical prediction suggests that it is likely to be very small. Many studies of natural samples have shown slight increases in δD with thermal maturity. Amane Waseda reported that the δD of oil samples in northeast Japan increases from around −130‰ to around −110‰ with higher maturity. At low thermal maturity, dos Santos Neto and Hayes reported that the δD of the saturate fraction of oils in the Potiguar Basin is −90‰ for oil derived from a lacustrine environment and −120‰ to −135‰ for oil from a marine-evaporitic environment. Bulk analysis of oil, which is a complicated mixture of organic compounds, obscures much of the valuable information, and the switch to compound-specific study has greatly expanded our understanding of the hydrogen isotopes of oil. Analyzing deuterium content at the compound level avoids problems from differences in exchange rates, simplifies source-product relationships, and draws a much more detailed picture. The δD of n-alkanes is generally thought to be representative of oil, as they are its major components. Schimmelmann et al. confirmed that alkane fractions have almost the same D/H ratios as whole oils. Depending on source material type and maturity, the δD of n-alkanes can vary from −100‰ to −180‰. A common phenomenon in n-alkanes derived from various oils and mature rocks is a trend of increasing δD with chain length: for example, Li et al. analyzed oils from the Western Canada Sedimentary Basin and found that δD increased by 20‰ to 40‰ from C13 to C27. This "isotope slope" is an artifact of the kinetic fractionation associated with thermal cracking of carbon chains; the trend has been experimentally reproduced and theoretically modeled by Tang et al. N-alkanes are also known to preserve detailed information about source material. Li et al. studied oils from the marine-derived Upper Cretaceous Second White Speckled Shale and found a strongly depleted signal of around −180‰ in C12-C18; the low δD of these marine samples was explained by the discharge of a large high-latitude river. Schimmelmann et al. found that the δD of oil sampled from coaly facies of the Crayfish group reaches down to −230‰, whereas oils sampled from algal facies of the same group are around −100‰; such a huge variation is hard to explain by any cause other than the splitting of Australia from the Antarctic continent during the late Cretaceous. Another special case, reported by Xiong et al., examined Ordovician carbonates from the Bohai Bay Basin. They found that large differences in the δD of n-alkanes persist, indicating that the original signal is preserved rather than homogenized. This result is notable because the samples are very mature (inferred vitrinite reflectance Ro up to 2.3), and it is thus strong evidence that carbonate systems have a much lower catalytic efficiency for hydrogen exchange on hydrocarbons.
Strong enrichment (~40‰) of odd-carbon-numbered alkanes relative to even-carbon-numbered alkanes is also found in some subsets of samples, and the reason is unclear at this point. This odd-even effect is also observed in immature clastic sediments.
Ecohydrology
The field of ecohydrology is concerned with the interaction between ecosystems and water cycling, from measuring the small-scale drainage of water into soil to tracking the broad movements of water evaporating from trees. Because deuterium acts as a conservative tracer, it works well for tracking water movement through plants and ecosystems. Although water movement in single-process phenomena such as evaporation is relatively simple to track, many systems in the environment (e.g. cloud forests) have multiple sources, and tracking water movement becomes more complicated. Isotope spiking can also be done to determine water transport through soil and into plants via injection of deuterated water directly into the ground. Stable isotope analysis of xylem water can be used to follow the movement of water from soil into plants and therefore provides a record of the depth of water acquisition. An advantage of using xylem water is that, in theory, its deuterium content should directly reflect the input water without being affected by leaf transpiration. For example, Dawson and Ehleringer used this approach to determine whether trees that grow next to streams use the surface waters from those streams: water from the surface would have the same isotopic composition as the stream, while water from deeper in the ground would derive from past precipitation inputs. In their study, younger trees had a xylem water isotopic composition very close to that of the adjacent stream and likely used surface waters to get established, whereas older trees had xylem water that was depleted relative to the stream, reflecting that they source their water from deeper underground. Other stable isotope studies have also determined that plants in redwood forests do not just take up water through their roots but acquire a significant proportion of water via stomatal uptake on leaves. Plant water can be used to characterize other plant physiological processes that affect the hydrologic cycle; for example, leaf waters are widely used for modeling transpiration and water-use efficiency. In transpiration, the Craig-Gordon model for lake water enrichment through evaporation has been found experimentally to fit well for modeling leaf water enrichment. Transpiration can be measured by direct injection of deuterated water into the base of a tree, trapping all water vapor transpired from the leaves and measuring the subsequent condensate. Water use can also be measured, and is calculated from a heavy water injection as follows:
WU = M / Σ(Ci × Δti), summed over the time intervals i = 1 to T,
where WU is the water use in kilograms/day, M is the mass of deuterated water injected in grams, T is the final day of the experiment, Ci is the concentration of deuterium in time interval i in grams/kilogram, and Δti is the length of time interval i in days (a worked example of this calculation follows at the end of this paragraph). Although the water use of some tropical plants such as bamboos calculated via thermal dissipation probing correlates strongly with the water use measured by tracking D2O movement, the exact values are not the same. In fact, for the leguminous tree species Gliricidia sepium, which produces a heartwood, transpired water did not even correlate strongly with injected D2O concentrations, which would further complicate water use measurements from direct injections.
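Under the tracer-dilution formula above, the total injected tracer mass divided by the time-integrated deuterium concentration yields a daily water use. The sketch below applies it to a hypothetical injection and sampling series; all masses, concentrations and interval lengths are invented for illustration.

```python
# Minimal sketch: daily water use (kg/day) from a heavy-water injection,
# WU = M / Σ(C_i · Δt_i). All numbers below are hypothetical.

def water_use_kg_per_day(mass_injected_g, concentrations_g_per_kg, intervals_days):
    """Tracer-dilution estimate of water use from deuterium concentrations."""
    integral = sum(c * dt for c, dt in zip(concentrations_g_per_kg, intervals_days))
    return mass_injected_g / integral  # g / (g/kg · day) = kg/day

# Hypothetical experiment: 50 g of D2O injected, deuterium concentration in
# transpired water sampled over four intervals of 2 days each.
concentrations = [0.8, 0.5, 0.3, 0.1]  # g/kg in each interval
intervals = [2.0, 2.0, 2.0, 2.0]       # days
print(round(water_use_kg_per_day(50.0, concentrations, intervals), 1))  # ≈ 14.7
```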
This possibly occurred because heartwoods can accumulate heavy water rather than moving the water directly through the xylem to the leaves. Water use efficiency (WUE), the ratio of carbon fixation to transpiration, has previously been associated with 13C/12C ratios through an equation involving the carbon isotope discrimination Δ13C = (δ13Catm − δ13Cplant)/(1 + δ13Catm), the fraction φ of fixed carbon that is respired, the partial pressure p of CO2 in the atmosphere, the fractionation of carboxylation εcarb, and the fractionation of diffusion in air εdiff. The relation of δD in plant leaf waxes to WUE has been empirically measured and shows a negative correlation between δD and water use efficiency. This can be explained in part by lower water use efficiency being associated with higher transpiration rates: transpiration exhibits a normal isotope effect, causing enrichment of deuterium in plant leaf water and therefore enrichment of leaf waxes.
Ecology
Migration patterns
Deuterium abundance can be useful in tracking the migration of various animals. Animals with metabolically inert tissue (e.g. feathers or hair) will have synthesized that tissue using hydrogen from source water and food, but ideally do not incorporate water encountered subsequently over the course of migration. Because δD tends to vary geographically, the difference between animal tissue δD and post-migration water δD, after accounting for the biological fractionation of assimilation, can provide information on animal movement. In monarch butterflies, for example, wing chitin is metabolically inert after it has been built, so it can reflect the isotopic composition of the environmental water at the time and location of wing growth. This creates a record of butterfly origin and can be used to determine migration distance. The approach can also be used in bats and birds, using hair and feathers, respectively. Since rainwater becomes isotopically depleted with increasing elevation, this method can in principle also track altitudinal migration; however, this is technically difficult, and the resolution appears to be too poor to track small altitudinal changes. Deuterium is most useful for tracking movement of species between areas with large continental variation in water isotopes, because interpretation can be complicated when local water δD values are similar between different geographic regions; for example, source water from Baja California may have the same δD as water from Maine. Furthermore, a proportion of the hydrogen within the tissue can exchange with water and complicate the interpretation of measurements. To determine this percentage of isotopic exchange, which varies with local humidity levels, standards of metabolically inert tissue from the species of interest can be constructed and equilibrated to local conditions, allowing measured δD from different geographic regions to be compared with one another.
Trophic interactions
Assimilation of diet into tissue has a tissue-specific fractionation known as the trophic discrimination factor. Diet sources can be tracked through a food web via deuterium isotope profiles, although this is complicated by deuterium having two potential sources: water and food. Food impacts tissue δD more strongly than does exchange with surrounding water, and that signal is seen across trophic levels.
However, different organisms derive organic hydrogen in varying ratios of water to food: in quail, for example, 20-30% of organic hydrogen came from water and the remainder from food, with the precise percentage of hydrogen from water depending on tissue type and metabolic activity. In chironomids, 31-47% of biomass hydrogen derives from water, and in microbes as much as 100% of fatty acid hydrogen can be derived from water, depending on substrate. In caterpillars, diet δD from organic matter correlates linearly with tissue δD. The same relationship does not appear to hold consistently for diet δD from water, however: water derived from either the caterpillar or its prey plant is more deuterium-enriched than their organic material. Going up trophic levels from prey (plant) to predator (caterpillar) results in an isotopic enrichment. This same trend of enrichment is seen in many other animals, including carnivores, omnivores and herbivores, and appears to follow 15N relative abundances. Carnivores at the same trophic level tend to exhibit the same level of 2H enrichment. Because, as mentioned earlier, the amount of organic hydrogen produced from water varies between species, a model of trophic level based on absolute fractionation is difficult to construct if the participating species are not known. Consistency in measuring the same tissues is also important, as different tissues fractionate deuterium differently. In aquatic systems, tracking trophic interactions is valuable not only for understanding the ecology of the system, but also for determining the degree of terrestrial input. The pattern of consistent deuterium enrichment within trophic levels is a useful tool for assessing the nature of these interactions in the environment.
Microbial metabolism
Biological deuterium fractionation through metabolism is highly organism- and pathway-dependent, resulting in wide variability in fractionations. Despite this, some trends hold. Hydrogen isotopes tend to fractionate very strongly in autotrophs relative to heterotrophs during lipid biosynthesis: chemoautotrophs produce extremely depleted lipids, with fractionations ranging from roughly −200 to −400‰. This has been observed both in laboratory-grown cultures fed a known quantity of deuterated water and in the environment. Proteins, however, do not follow as significant a trend, with both heterotrophs and autotrophs capable of generating large and variable fractionations. In part, kinetic fractionation favoring the lighter isotope during formation of the reducing equivalents NADH and NADPH results in lipids and proteins that are isotopically lighter. Salinity appears to play a role in the degree of deuterium fractionation as well: more saline waters affect growth rate, the rate of hydrogen exchange, and evaporation rate, all of which influence lipid δD when hydrogen is incorporated into biomass. In the coccolithophores Emiliania huxleyi and Gephyrocapsa oceanica, alkenone δD has been found to correlate strongly with organism growth rate divided by salinity. The relationship between deuterium fractionation and salinity could potentially be used in paleoenvironmental reconstruction with preserved lipids in the rock record to determine, for example, ocean salinity at the time of organismal growth. However, the degree of fractionation is not necessarily consistent between organisms, complicating the determination of paleosalinity with this method.
There also appears to be a negative correlation between growth rate and fractionation in these coccolithophores. Further experiments on the unicellular algae Eudorina unicocca and Volvox aureus show no effect of growth rate (controlled by nitrogen limitation) on fatty acid δD. However, sterols become more D-depleted as growth rate increases, in agreement with alkenone isotopic composition in coccolithophores. Overall, although there are some strong trends with lipid δD, the specific fractionations are compound-specific. As a result, any attempt to create a salinometer through δD measurements will necessarily be specific to a single compound type. Environmental chemistry An important goal of environmental chemistry is tracing the source and degradation of pollutants. Various methods have been employed for fingerprinting pools of environmental pollutants such as the bulk chemical composition of a spill, isotope ratios of the bulk chemical mixture, or isotope ratios of individual constituent compounds. Stable isotopes of carbon and hydrogen can be used as complementary fingerprinting techniques for natural gas. Recently, the D/H ratio of hydrocarbons from the Deepwater Horizon oil spill was used to verify that their origin was likely from the Macondo well. Hydrogen isotope ratios have also been used as a measure of the relative amount of biodegradation that has occurred in oil reservoirs in China, and studies on pure cultures of n-alkane degrading organisms have shown a chain-length dependence of the amount of hydrogen isotope fractionation during degradation. Additional studies have also shown hydrogen isotope effects in the degradation of methyl tert-butyl ether and toluene that have been suggested to be useful in the evaluation of the level of degradation of these polluting compounds in the environment. In both cases the residual unreacted compounds became enriched in deuterium by a few tens of per mil, with variations exhibited between different organisms and degree of reaction completeness. These observations of heavy residual compounds have been applied to field observations of biodegradation reactions such as the removal of benzene and ethylbenzene, which imparted a D/H fractionation of 27 and 50 per mil, respectively. Additionally, analysis of o-xylene in a polluted site showed high residual D/H ratios after biodegradation, consistent with activation of C-H bonds being a rate-limiting step in this process. Source attribution and forensics Stable isotope ratios have found uses in various instances where the authenticity or origin of a chemical compound is called into question. Such situations include assessing the authenticity of food, wine and natural flavors; drug screening in sports (see doping); pharmaceuticals; illicit drugs; and even helping identify human remains. In these cases it is often not sufficient to detect or quantify a certain compound, since the question is the origin of the compound. The strength of hydrogen isotope analysis in answering these questions is that in many cases the D/H ratio of a natural product is related to the natural water D/H values in the area where the product was formed (see: Hydrologic cycle). Since D/H ratios vary significantly between different geographic areas, this can serve as a powerful tool in locating the original source of many different substances. Food and flavor authentication Foods, flavorings and scents are often sold with the guarantee that chemical additives come from natural sources. 
This claim becomes difficult to evaluate when the chemical compound has a known structure and is readily synthesized in the lab. Authentication of claims regarding the origins of these chemicals has made good use of various stable isotopes, including those of hydrogen. Combined carbon and hydrogen isotope analysis has been used to test the authenticity of (E)-methyl cinnamate, γ-decalactone and δ-decalactone. Hydrogen and nitrogen isotope ratios have been used for the authentication of alkylpyrazines used as "natural" coffee flavorings. Doping The isotope ratio of carbon in the steroids of athletes has been used to determine whether these steroids originated from the body of the athlete or an exogenous source. This test has been used in a number of high-profile anti-doping cases and has various benefits over simply characterizing the concentration of various compounds. Attempts are being made to create similar tests based on stable hydrogen isotopes which could be used to complement the existing testing methods. One concern with this method was that the natural steroids produced by the human body may vary significantly based on the deuterium content of drinking water, leading to false detection of doping based on hydrogen isotope differences. This concern has been addressed in a recent study which concluded that the effect of D/H ratio of drinking water did not pose an insurmountable source of error for this anti-doping testing strategy. Pharmaceutical copies The pharmaceutical industry has revenues in the hundreds of billions of dollars a year globally. With such a large industry, counterfeiting and copyright infringement are serious issues, and hydrogen isotope fingerprinting has become a useful tool in verifying the authenticity of various drugs. As described in the preceding sections, the utility of D/H ratios is highest when they are combined with measurements of other isotope ratios. In an early study on the stable isotope compositions of tropicamide, hydrocortisone, quinine and tryptophan, carbon, nitrogen, oxygen and hydrogen stable isotopes were analyzed by EA-IRMS and clear distinctions could be made between manufacturers and even batches of the drugs based on their isotope signatures. In this study it was determined that the hydrogen and oxygen isotope ratios were the two best fingerprints for distinguishing between different drug sources. A follow-up study analyzing naproxen from various lots and manufacturers also showed similar ability to distinguish between sources of the drugs. These isotope signatures could be used not only to distinguish between different manufacturers, but also between different synthetic pathways for the same compound. These studies relied on the natural variations that occurred in the synthesis of these drugs, but other studies have used starting ingredients that are intentionally labelled with D and 13C, and showed that these labels could be traced into the final pharmaceutical product. D/H ratios can also be determined for specific sites in a drug by 2H NMR, and this approach has been used to distinguish between different synthetic methods for ibuprofen and naproxen in one study, and Prozac and fluoxetine in another. These studies show that bulk D/H ratio information from EA-IRMS and site-specific D/H ratios from 2H NMR both have great utility for pharmaceutical drug authenticity testing. Illicit drugs The sources and production mechanisms of illegal drugs have been another area that has seen successful application of hydrogen isotope characterization. 
Usually, as with other applications of stable isotope techniques, results are best when combinations of multiple stable isotopes are compared with one another. δ2H, δ13C and δ15N have been used together to analyze tablets of MDA and MDMA and have successfully identified differences which could be used to differentiate between different sources or production mechanisms. The same combination of stable isotopes with the addition of δ18O was applied to heroin and associated packaging and could successfully distinguish between different samples. Analysis using deuterium NMR was also able to shed light on the origin and processing of both cocaine and heroin. In the case of heroin, this site-specific natural isotope fractionation measured by deuterium NMR (SNIF-NMR) method could be used for determining the geographic origin of the molecule by analyzing so-called natural sites (which were present in the morphine from which heroin is made), as well as gaining information on the synthesis process by analyzing the artificial sites (added during drug processing). Provenance of human remains The geographical variation in D/H ratio in human drinking water is recorded in hair. Studies have shown a very strong relation between an individual's hair and drinking water D/H ratios. Since the tap water D/H ratio has a strong dependence on geography, a person's hair D/H ratio can be used to determine regions in which they were most likely living during hair growth. This idea has been used in criminal investigations to try to constrain the movements of a person prior to their death, in much the same way D/H ratios have been used to track animal migrations. By analyzing sections of hair of varying ages it is possible to determine in what D/H regions a person was living at a specific time prior to their death. See also Abundance of chemical elements Hydrogen isotope Natural abundance Stable isotopes References External links Biochemistry methods Isotopes of hydrogen Biogeochemistry Limnology Chemical oceanography
31953061
https://en.wikipedia.org/wiki/Devfsd
Devfsd
devfsd is a device manager for the Linux kernel. Primarily, it creates device nodes in the /dev directory when kernel drivers make the underlying hardware accessible. The nodes exist in a virtual device file system named devfs. In systems that support many different types of hardware, each of which has its own device nodes, this is more convenient than creating all possible device nodes beforehand and in a real filesystem. While devfs was a step forward, it had several disadvantages of its own. Since version 2.5 of the Linux kernel, devfs has been succeeded by udev and devtmpfs. See also sysfs References External links Computer configuration Linux file system-related software Linux kernel-related software Unix file system-related software
650560
https://en.wikipedia.org/wiki/Dido%20and%20Aeneas
Dido and Aeneas
Dido and Aeneas (Z. 626) is an opera in a prologue and three acts, written by the English Baroque composer Henry Purcell with a libretto by Nahum Tate. The dates of the composition and first performance of the opera are uncertain. It was composed no later than July 1688, and had been performed at Josias Priest's girls' school in London by the end of 1689. Some scholars argue for a date of composition as early as 1683. The story is based on Book IV of Virgil's Aeneid. It recounts the love of Dido, Queen of Carthage, for the Trojan hero Aeneas, and her despair when he abandons her. A monumental work in Baroque opera, Dido and Aeneas is remembered as one of Purcell's foremost theatrical works. It was also Purcell's only true opera, as well as his only all-sung dramatic work. One of the earliest known English operas, it owes much to John Blow's Venus and Adonis, both in structure and in overall effect. The influence of Cavalli's opera Didone is also apparent. Both works use the prologue/three acts format and there are similarities between, for instance, Mercury's solo in Didone and the solo "Come away fellow sailors" in Purcell's work. Background and context Before Dido and Aeneas, Purcell had composed music for several stage works, including nine pieces for Nathaniel Lee's Theodosius, or The Force of Love (1680) and eight songs for Thomas d'Urfey's A Fool's Preferment (1688). He also composed songs for two plays by Nahum Tate (later the librettist of Dido and Aeneas), The Sicilian Usurper (1680) and Cuckold's Haven (1685). Dido and Aeneas was Purcell's first (and only) all-sung opera and derives from the English masque tradition. Libretto Originally based on Nahum Tate's play Brutus of Alba, or The Enchanted Lovers (1678), the opera is likely, at least to some extent, to be allegorical. The prologue refers to the joy of a marriage between two monarchs, which could refer to the marriage between William and Mary. In a poem of about 1686, Tate alluded to James II as Aeneas, who is misled by the evil machinations of the Sorceress and her witches (representing Roman Catholicism, a common metaphor at the time) into abandoning Dido, who symbolises the British people. The same symbolism may apply to the opera. This explains the addition of the characters of the Sorceress and the witches, which do not appear in the original Aeneid. It would be noble, or at least acceptable, for Aeneas to follow the decree of the Gods, but not so acceptable for him to be tricked by ill-meaning spirits. Although the opera is a tragedy, there are numerous seemingly lighter scenes, such as the First Sailor's song, "Take a boozy short leave of your nymphs on the shore, and silence their mourning with vows of returning, though never intending to visit them more." Musicologist Ellen T. Harris considers the callousness and cynicism of the song to underline the "moral" of the story, that young women should not succumb to the advances and promises of ardent young men. Score No score in Purcell's hand is extant, and the only seventeenth-century source is a libretto, possibly from the original performance. The earliest extant score, held in the Bodleian Library, was copied no earlier than 1750, well over sixty years after the opera was composed. No later sources follow the act divisions of the libretto, and the music to the prologue is lost. 
The prologue, the end of the act 2 'Grove' scene, and several dances, were almost certainly lost when the opera was divided into parts to be performed as interludes between the acts of spoken plays in the first decade of the eighteenth century. The first of the arias to be published separately was "Ah, Belinda" in Orpheus Britannicus. The most famous aria of the work is "When I am laid in earth", popularly known as "Dido's Lament". Both arias are formed on a lamento ground bass. "Dido's Lament" has been performed or recorded by artists far from the typical operatic school, such as Klaus Nomi (as "Death"), Ane Brun and Jeff Buckley. It has also been transcribed or used in many scores, including the soundtrack to the HBO miniseries Band of Brothers (renamed "Nixon's Walk"). It is played annually by a military band at the Cenotaph remembrance ceremony, which takes place on the Sunday nearest to 11 November (Armistice Day) in London's Whitehall. The music is sometimes thought to be too simple for Purcell in 1689, but this may simply reflect that the intended performers were schoolchildren. The work is scored for four-part strings and continuo. The fact that the libretto from the Chelsea School performance indicates two dances for guitar, the "Dance Gittars Chacony" in act 1, and the "Gittar Ground a Dance" in the 'Grove' scene of act 2, has led one scholar to suggest that Purcell envisaged a guitar as a primary member of the continuo group for the opera. Music for neither of these dances is extant, and it seems likely that Purcell did not compose them, but rather left them to be improvised by the guitarist. Several editions of the opera have been made and have been provided with a continuo realisation; a notable, if rather idiosyncratic edition being that made by Imogen Holst and Benjamin Britten. There are a number of editions with realisations, and the opera's accessibility to amateur performers is a feature that has greatly abetted the growth of its popularity in the latter half of the twentieth century. While the Prologue's music has been lost and has not been reconstructed, several realisations of the opera include a solution to the missing ritornello at the end of the second act. Known to have been part of the score, it is now performed as a dance taken from other, similar works by Purcell, or invented outright in the same vein, to keep the integrity and continuity of the performance. Performance history Premiere and early revivals A letter from the Levant merchant Rowland Sherman associates Dido and Aeneas with Josias Priest's girls' school in Chelsea, London no later than the summer of 1688. The first performance may have taken place as early as 1 December 1687, and evidence suggests that the opera was performed at the school again in 1689. Several scholars have argued that the work was composed for the English court, either for Charles II (and perhaps as early as 1684) or for James II. Following the Chelsea performances, the opera was not staged again in Purcell's lifetime. Its next performance was in 1700 as a masque incorporated into Beauty the Best Advocate, an adapted version of Shakespeare's Measure for Measure at Thomas Betterton's theatre in London. After 1705 it disappeared as a staged work, with only sporadic concert performances, until 1895 when the first staged version in modern times was performed by students of the Royal College of Music at London's Lyceum Theatre to mark the bicentenary of Purcell's death. 
Dido and Aeneas received its first performance outside England on 14 December 1895 in a concert version at the University Society in Dublin. 20th- and 21st-century performances Dido and Aeneas premiered in the United States at the Plaza Hotel in New York City on 10 February 1923 performed by the girls of the Rosemary School, although The New York Times noted that "considerable liberties" had been taken with the score. A concert version with professional musicians organised by the Society of Friends of Music took place on 13 January 1924 at the New York City Town Hall, using a score edited by Artur Bodanzky, who also conducted the performance. As new critical editions of the score appeared, and with the revival of interest in Baroque music, the number of productions steadily increased. After Jonathan Miller's visit to Bornholm, Denmark, Dido was performed in 2007 at the Rønne Theatre, which had been built in 1823. Kevin Duggan conducted. Amongst the new productions of the opera in 2009, the 350th anniversary of Purcell's birth, were those staged by the De Nederlandse Opera, the Royal Opera, London, the Divertimento Baroque Opera Company, and Glimmerglass Opera in Cooperstown, New York. The Royal Opera production, which featured contemporary dance by Wayne McGregor Random Dance and animated effects by Mark Hatchard, formed part of a double bill with Handel's Acis and Galatea. In 2011 the opera was revived by City Wall Productions and set during World War II. A new Opera North production of the opera opened at Leeds Grand Theatre in February 2013. Opera Up Close performed a truncated version in 2011, setting it in an American high school in the 1950s. Adaptations A version of the opera adapted to modern dance was choreographed by the American Mark Morris, who originally danced both the roles of Dido and the Sorceress. It premiered on 11 March 1989 at the Théâtre Varia in Brussels. It has since been performed many times and was filmed in 1995 by Canadian director Barbara Willis Sweete, with Morris in the roles of Dido and the Sorceress. The production was subsequently seen at the Grand Théâtre in Luxembourg, Opéra national de Montpellier, and Sadler's Wells Theatre in London. In both the Morris and the Waltz adaptations, the characters are each portrayed by both a singer and a dancer, with the dancers onstage and the singers performing from the side of the stage or the orchestra pit. Roles Synopsis Act 1 Dido's court The opera opens with Dido in her court with her attendants. Belinda is trying to cheer up Dido, but Dido is full of sorrow, saying 'Peace and I are strangers grown'. Belinda believes the source of this grief to be the Trojan Aeneas, and suggests that Carthage's troubles could be resolved by a marriage between the two. Dido and Belinda talk for a time: Dido fears that her love will make her a weak monarch, but Belinda and the Second Woman reassure her that "The hero loves as well." Aeneas enters the court, and is at first received coldly by Dido, but she eventually accepts his proposal of marriage. Act 2 Scene 1: The cave of the Sorceress The Sorceress/Sorcerer is plotting the destruction of Carthage and its queen, and summons companions to help with evil plans. The plan is to send her "trusted elf" disguised as Mercury, someone to whom Aeneas will surely listen, to tempt him to leave Dido and sail to Italy. This would leave Dido heartbroken, and she would surely die. 
The chorus join in with terrible laughter, and the Enchantresses decide to conjure up a storm to make Dido and her train leave the grove and return to the palace. When the spell is prepared, the witches vanish in a thunderclap. Scene 2: A grove during the middle of a hunt Dido and Aeneas are accompanied by their train. They stop at the grove to take in its beauty. A lot of action is going on, with attendants carrying goods from the hunt and a picnic possibly taking place, and Dido and Aeneas are together within the activity. This is all stopped when Dido hears distant thunder, prompting Belinda to tell the servants to prepare for a return to shelter as soon as possible. As every other character leaves the stage, Aeneas is stopped by the Sorceress's elf, who is disguised as Mercury. This pretend Mercury brings the command of Jove that Aeneas is to wait no longer in beginning his task of creating a new Troy on Latin soil. Aeneas consents to the wishes of what he believes are the gods, but is heart-broken that he will have to leave Dido. He then goes off-stage to prepare for his departure from Carthage. Act 3 The harbour at Carthage Preparations are being made for the departure of the Trojan fleet. The sailors sing a song, which is followed shortly by the Sorceress and her companions' sudden appearance. The group is pleased at how well their plan has worked, and the Sorceress sings a solo describing her further plans for the destruction of Aeneas "on the ocean". All the characters begin to clear the stage after a dance in three sections, and then disperse. The palace Dido and Belinda enter, shocked at Aeneas’ disappearance. Dido is distraught and Belinda comforts her. Suddenly Aeneas returns, but Dido is full of fear before Aeneas speaks, and his words only serve to confirm her suspicions. She derides his reasons for leaving, and even when Aeneas says he will defy the gods and not leave Carthage, Dido rejects him for having once thought of leaving her. After Dido forces Aeneas to leave, she states that "Death must come when he is gone." The opera and Dido's life both slowly come to a conclusion, as the Queen of Carthage sings her last aria, "When I am laid in Earth", also known as "Dido's Lament." The chorus and orchestra then conclude the opera once Dido is dead by ordering the "cupids to scatter roses on her tomb, soft and gentle as her heart. Keep here your watch, and never, never part." Recordings The first complete recording of the opera was made by Decca Records in 1935 with Nancy Evans as Dido and Roy Henderson as Aeneas, followed in 1945 by HMV's release with Joan Hammond and Dennis Noble. Kirsten Flagstad, who had sung the role at the Mermaid Theatre in London, recorded it in 1951 for EMI with Thomas Hemsley as Aeneas. Dido and Aeneas has been recorded many times since the 1960s with Dido sung by mezzo-sopranos such as Janet Baker (1961), Tatiana Troyanos (1968), Teresa Berganza (1986), Anne Sofie von Otter (1989) and Susan Graham (2003). In addition to Joan Hammond and Kirsten Flagstad, sopranos who have recorded the role include Victoria de los Ángeles (1965), Emma Kirkby (1981), Jessye Norman (1986), Catherine Bott (1992), Lynne Dawson (1998), and Evelyn Tubb (2004). Beginning with two pioneering recordings of the work with original instruments: Joel Cohen's 1979 recording with the Boston Camerata, on Harmonia Mundi, and Andrew Parrott's 1981 recording for Chandos with the Taverner Consort and Players, there was an increasing preference for a more genuine period sound. 
Further recordings by conductors and ensembles using this approach include those by Christopher Hogwood and the Academy of Ancient Music, William Christie and Les Arts Florissants (1986); Trevor Pinnock and The English Concert (1989); René Jacobs and the Orchestra of the Age of Enlightenment (1998); Emmanuelle Haïm and Le Concert d'Astrée (2003); and Predrag Gosta and New Trinity Baroque (2004). The Haïm recording with Susan Graham as Dido and Ian Bostridge as Aeneas was nominated for the Best Opera Recording in the 2005 Grammy Awards. Another notable recording dates from 1994, with The Scholars Baroque Ensemble and Kym Amps as Dido. Several performances of the opera have been filmed and are available on DVD, most recently the 2008 performance at the Opéra-Comique in Paris conducted by William Christie and directed by Deborah Warner (FRA Musica FRA001) and the 2009 performance at London's Royal Opera House conducted by Christopher Hogwood and directed by Wayne McGregor (OpusArte OA1018D). The Mark Morris dance version of the opera is also preserved on DVD (recorded 1995, Image Entertainment 8741) as is the dance version by Sasha Waltz (recorded 2005, Arthaus Musik 101311). Leopold Stokowski made a string orchestra arrangement of "Dido's Lament". Others who have recorded it include Matthias Bamert, José Serebrier, Richard Egarr and Iona Brown. See also List of compositions by Henry Purcell References Notes Sources Boyden, Matthew et al., "Dido and Aeneas", The rough guide to opera 3rd edition, Rough Guides, 2002. For a more detailed discussion of "early instrument Dido recordings, consult https://www.nytimes.com/1999/05/30/arts/music-raising-the-stakes-in-a-purcell-opera.html Darrell, R.D., The Gramophone Shop Encyclopedia of Recorded Music, The Gramophone Shop, Inc., 1936 Harris, Ellen T., Henry Purcell's Dido and Aeneas, Oxford University Press, 1990. Holman, Peter, Henry Purcell (Oxford, 1995). Holst, Imogen, "Purcell's librettist, Nahum Tate" Henry Purcell 1659–1695 Essays On His Music, Imogen Holst (ed.), Oxford University Press, 1959, pp. 35–41 Jordy, Catherine, "Flamboyante tragédie au Comique", Forum Opéra, 9 December 2009 (accessed 19 January 2010, in French) Keates, Jonathan, Purcell: A biography, Northeastern University Press, 1996. Kobbé, Gustav, The Definitive Kobbé's Book of Opera. The Earl of Harewood (ed.), 1st American ed., G.P. Putnam's and Sons, 1987, pp. 1010–1014. Mark Morris Dance Group, Work details: Dido and Aeneas Price, Curtis, "Dido and Aeneas", Grove Music Online, ed. L. Macy (accessed 31 December 2005), grovemusic.com (subscription access) Price, Curtis, Henry Purcell and the London stage, Cambridge University Press, 1984. Price, Curtis and Irena Cholij, ‘Dido's Bass Sorceress’, The Musical Times, Vol. 127 (Nov. 1986), 615–618 Pinnock, Andrew, ‘Deus ex machina: A royal witness to the court origin of Purcell’s Dido and Aeneas.’ Early Music, 40 (2012): 265–278. Pinnock, Andrew, ‘Which Genial Day? More on the court origin of Purcell’s Dido and Aeneas, with a shortlist of dates for its possible performance before King Charles II’, Early Music 43 (2015), 199–212 Purcell, Henry, Dido and Aeneas (vocal score), Edward Dent and Ellen Harris (eds.), Oxford University Press, Oxford, 1991. Purcell, Henry, Dido and Aeneas (vocal and full score), Margaret Laurie and Thurston Dart (eds.), Novello, 1971 Walker, Susan, "Every inch a diva. Opposites attract dancer Mark Morris. They define his life and his art", Toronto Star, 9 July 1995, p. 
C1 White, Bryan, 'Letter from Aleppo: dating the Chelsea School performance of Dido and Aeneas', Early Music 37 (2009), 417–428. https://doi.org/10.1093/em/cap041 White, Eric Walter, "New Light on Dido and Aeneas" Henry Purcell 1659–1695 Essays On His Music, Imogen Holst (ed.), Oxford University Press, 1959, pp. 14–34 Wood, Bruce and Andrew Pinnock, "'Unscared by turning times'? The dating of Purcell's Dido and Aeneas", Early Music 20 (1992), 372–90 External links Analysis in French Libretto at Stanford University Free excerpts from a recording by the New Trinity Baroque orchestra Internet Movie Database listing BBC Radio 4 programme about "Dido's Lament" The Purcell Society Public domain recording, 1951, (mp3 files) on www.liberliber.it. Free Online opera guide on Purcell's Dido and Aeneas An opera portrait with synopsis, commentary, music analysis, anecdotes Operas by Henry Purcell English-language operas Operas 1688 operas Operas based on classical mythology Operas based on the Aeneid Operas set in Africa Music based on poems Cultural depictions of Dido Phoenicia in fiction
1233907
https://en.wikipedia.org/wiki/Udev
Udev
udev (userspace /dev) is a device manager for the Linux kernel. As the successor of devfsd and hotplug, udev primarily manages device nodes in the /dev directory. At the same time, udev also handles all user space events raised when hardware devices are added into the system or removed from it, including firmware loading as required by certain devices. Rationale It is an operating system's kernel that is responsible for providing an abstract interface of the hardware to the rest of the software. Being a monolithic kernel, the Linux kernel does exactly that: device drivers, which make up more than half of its source code, are part of the kernel. Hardware can be accessed through system calls or over their device nodes. To deal with hotplug-capable peripheral devices in a user-friendly way, part of the handling of these devices was handed over from the kernel to a daemon running in user space. Running in user space serves security and stability purposes. Design Device drivers are part of the Linux kernel, in which their primary functions include device discovery, detecting device state changes, and similar low-level hardware functions. After a device driver is loaded into memory as part of the kernel, detected events are sent out to the userspace daemon udevd. It is the device manager, udevd, that catches all of these events and then decides what shall happen next. For this, udevd has a very comprehensive set of configuration files, which can all be adjusted by the computer administrator, according to their needs. In case a new storage device is connected over USB, udevd is notified by the kernel and itself notifies the udisksd daemon. That daemon could then mount the file systems. In case a new Ethernet cable is plugged into the Ethernet NIC, udevd is notified by the kernel and itself notifies the NetworkManager daemon. The NetworkManager daemon could start dhclient for that NIC, or configure the interface according to some manual configuration. The complexity of doing so forces application authors to re-implement hardware support logic. Some hardware devices also require privileged helper programs to prepare them for use. These must often be invoked in ways that can be awkward to express with the Unix permissions model (for example, allowing users to join wireless networks only if they are logged into the video console). Application authors resort to using setuid binaries or run service daemons to provide their own access control and privilege separation, potentially introducing security holes each time. HAL was created to deal with this, but is now deprecated in most Linux distributions. Overview Unlike traditional Unix systems, where the device nodes in the /dev directory have been a static set of files, the Linux udev device manager dynamically provides only the nodes for the devices actually present on a system. Although devfs used to provide similar functionality, Greg Kroah-Hartman cited a number of reasons for preferring udev over devfs: udev supports persistent device naming, which does not depend on, for example, the order in which the devices are plugged into the system. The default udev setup provides persistent names for storage devices. Any hard disk is recognized by its unique filesystem id, the name of the disk and the physical location on the hardware it is connected to. udev executes entirely in user space, as opposed to devfs's kernel space. 
One consequence is that udev moved the naming policy out of the kernel and can run arbitrary programs to compose a name for the device from the device's properties, before the node is created; the whole process is also interruptible and runs with a lower priority. udev, as a whole, is divided into three parts: The library libudev, which allows access to device information; it was incorporated into the systemd 183 software bundle. The user space daemon udevd, which manages the virtual /dev. The administrative command-line utility udevadm, used for diagnostics. The system gets calls from the kernel via a netlink socket. Earlier versions used hotplug, registering a link to themselves in the hotplug directory for this purpose. Operation udev is a generic device manager running as a daemon on a Linux system and listening (via a netlink socket) to uevents the kernel sends out if a new device is initialized or a device is removed from the system. The udev package comes with an extensive set of rules that match against exported values of the event and properties of the discovered device. A matching rule will possibly name and create a device node and run configured programs to set up and configure the device. udev rules can match on properties like the kernel subsystem, the kernel device name, the physical location of the device, or properties like the device's serial number. Rules can also request information from external programs to name a device or specify a custom name that will always be the same, regardless of the order in which devices are discovered by the system. In the past, a common way to use udev on Linux systems was to let it send events through a socket to HAL, which would perform further device-specific actions. For example, HAL would notify other software running on the system that the new hardware had arrived by issuing a broadcast message on the D-Bus IPC system to all interested processes. In this way, desktops such as GNOME or K Desktop Environment 3 could start the file browser to browse the file systems of newly attached USB flash drives and SD cards. By the middle of 2011 HAL had been deprecated by most Linux distributions as well as by the KDE, GNOME and Xfce desktop environments, among others. The functionality previously embodied in HAL has been integrated into udev itself, or moved to separate software such as udisks and upower. udev provides low-level access to the Linux device tree. It allows programs to enumerate devices and their properties and get notifications when devices come and go. dbus is a framework to allow programs to communicate with each other, securely, reliably, and with a high-level object-oriented programming interface. udisks (formerly known as DeviceKit-disks) is a daemon that sits on top of libudev and other kernel interfaces and provides a high-level interface to storage devices and is accessible via dbus to applications. upower (formerly known as DeviceKit-power) is a daemon that sits on top of libudev and other kernel interfaces and provides a high-level interface to power management and is accessible via dbus to applications. NetworkManager is a daemon that sits on top of libudev and other kernel interfaces (and a couple of other daemons) and provides a high-level interface to network configuration and setup and is accessible via dbus to apps. udev receives messages from the kernel, and passes them on to subsystem daemons such as Network Manager. Applications talk to Network Manager over D-Bus. Kernel -> udev -> Network Manager <=> D-Bus <=> Firefox HAL is obsolete and only used by legacy code. 
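As a minimal illustration of the libudev interface mentioned above (device enumeration and notification when devices come and go), the following C sketch subscribes to udev events through the monitor API. It is an illustrative sketch rather than code from the udev project or its documentation: the choice to filter on the "usb" subsystem is arbitrary, and error handling is kept to a minimum.

/* Minimal sketch: listen for device add/remove events via libudev.
   Build (assuming libudev headers and pkg-config are installed):
   cc monitor.c $(pkg-config --cflags --libs libudev) -o monitor */
#include <libudev.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
    struct udev *udev = udev_new();                 /* library context */
    if (!udev)
        return 1;

    /* "udev" delivers events already processed by udevd; "kernel"
       would deliver the raw uevents instead. */
    struct udev_monitor *mon = udev_monitor_new_from_netlink(udev, "udev");
    if (!mon)
        return 1;
    /* Optional: only watch one subsystem (here, USB devices). */
    udev_monitor_filter_add_match_subsystem_devtype(mon, "usb", NULL);
    udev_monitor_enable_receiving(mon);

    struct pollfd pfd = { .fd = udev_monitor_get_fd(mon), .events = POLLIN };

    for (;;) {
        if (poll(&pfd, 1, -1) <= 0)                 /* wait for an event */
            continue;
        struct udev_device *dev = udev_monitor_receive_device(mon);
        if (!dev)
            continue;
        printf("%s: %s (subsystem %s)\n",
               udev_device_get_action(dev),         /* "add", "remove", ... */
               udev_device_get_syspath(dev),        /* /sys path of the device */
               udev_device_get_subsystem(dev));
        udev_device_unref(dev);
    }
    /* Unreachable in this sketch: udev_monitor_unref(mon); udev_unref(udev); */
}

Plugging in or removing a USB device while the sketch runs would print one line per event, which is essentially the same event stream that daemons such as udisks or NetworkManager consume.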
Ubuntu 10.04 shipped without HAL. Initially, a new daemon, DeviceKit, was planned to replace certain aspects of HAL, but in March 2009, DeviceKit was deprecated in favor of adding the same code to udev as a package: udev-extras, and some functions have now moved to udev proper. History udev was introduced in Linux 2.5. The Linux kernel version 2.6.13 introduced or updated a new version of the uevent interface. A system using a new version of udev will not boot with kernels older than 2.6.13 unless udev is disabled and a traditional /dev directory is used for device access. In April 2012, udev's codebase was merged into the systemd source tree, making systemd 183 the first version to include udev. In October 2012, Linus Torvalds criticized Kay Sievers's approach to udev maintenance and bug fixing related to firmware loading. In 2012, the Gentoo Linux project created a fork of systemd's udev codebase in order to avoid dependency on the systemd architecture. The resulting fork is called eudev and it makes udev functionality available without systemd. A stated goal of the project is to keep eudev independent of any Linux distribution or init system. On May 29, 2014, support for firmware loading through udev was dropped from systemd, as it had been decided that it is the kernel's task to load firmware. Two days later, Lennart Poettering suggested this patch be postponed until kdbus started to be utilized by udev; at that point, the plan was to switch udev to use kdbus as the underlying messaging system, and to get rid of the userspace-to-userspace netlink-based transport. Authors udev was developed by Greg Kroah-Hartman and Kay Sievers, with much help from Dan Stekloff, among others. References External links Computer configuration Interfaces of the Linux kernel Linux file system-related software Linux kernel features Unix file system-related software
3708591
https://en.wikipedia.org/wiki/PCSX2
PCSX2
PCSX2 is a free and open-source PlayStation 2 emulator for Windows, Linux, and macOS that supports a wide range of PlayStation 2 video games with a high level of compatibility and functionality. Although PCSX2 can closely mirror the original gameplay experience on the PlayStation 2, PCSX2 supports a number of improvements over gameplay on a traditional PlayStation 2, such as the ability to use custom resolutions up to 8192×8192, anti-aliasing, and texture filtering. Background PCSX2, like its predecessor project PCSX (a PlayStation emulator), is based on a PSEmu Pro spec plug-in architecture, separating several functions from the core emulator. These are the graphics, audio, input controls, CD/DVD drive, and USB and FireWire (i.LINK) ports. Different plug-ins may produce different results in both compatibility and performance. Additionally, PCSX2 requires a genuine copy of the PS2 BIOS, which is not available for download from the developers due to copyright-related legal issues. Since September 2016, PCSX2 is partially compatible with PlayStation games. The main bottleneck in PS2 emulation is emulating the Emotion Engine multi-processor on the PC x86 architecture. Although each processor can be emulated well independently, accurately synchronizing them and emulating the console's timing is difficult. Development Development of PCSX2 was started in 2001 by programmers who go by the names Linuzappz and Shadow, who were programmers for the PlayStation emulator PCSX-Reloaded. Other programmers later joined the team, and they were eventually able to get some PS2 games to the loading screen. The team then started working on the difficult task of emulating the PlayStation 2's BIOS; they got it to run, although it was slow and graphically distorted. Version 0.9.1 was released in July 2006. From 2007 to 2011, developers worked on Netplay and speed improvements. PCSX2 0.9.8 was released in May 2011 and featured an overhauled GUI written with wxWidgets that improved compatibility for Linux and newer Windows operating systems, the addition of a new VU recompiler that brought better compatibility, a memory card editor, an overhaul of the SPU2-X audio plug-in, and numerous other improvements. As of November 2021, 97% of PlayStation 2 games are considered "playable" on the emulator, meaning they can be played largely but not entirely free of slight issues. 22 games are considered "perfect" – free of bugs – and all but one at least boot to the menu screen. In 2022, PCSX2 added support for the Vulkan API. PCSX2 was used as a basis for AetherSX2, a PlayStation 2 emulator for Android. Features PCSX2 supports save states and dynamic recompilation (JIT). There is also support for gameplay recording in Full HD using the GSdx plugin. Options such as the ability to increase/decrease game speeds, use unlimited memory cards, and utilize any gamepad controllers supported by the native operating system are also available. Cheat codes are supported via the use of PNACH patching files. Plug-ins PCSX2 makes use of plug-ins as a means of modularizing development efforts among the separate components (subsystems) of the emulated PlayStation 2 hardware. For instance, video plug-ins are utilized by PCSX2 to render images to the screen and emulate the graphics hardware of the PlayStation 2, whereas sound plug-ins emulate the sound hardware of the PlayStation 2. 
Not only does this allow different developers to focus their efforts on one aspect of the PlayStation 2 hardware, it also allows users whose system configuration does not yield good results with one plug-in to try another to see if they get better results. As of v1.7.0-dev-1420, PCSX2 has merged all plug-in functionality into the core emulator, eliminating the concept of plugins. The plugins that have been merged are listed below: GS: GSDX SPU: SPU2 PAD: Lillypad (Windows) | Onepad (Linux) DEV9: Dev9Gigaherz CDVD: CDVDGigaherz Graphical Improvements (GSdx Plugin) The GSdx plug-in offers a variety of image quality improvements over the original PlayStation 2 hardware, such as: Option to increase internal resolution (only in hardware mode) Anti-aliasing: FXAA, MSAA (hardware mode only; later removed during 1.5 development) and Edge-AA (only in software mode) Post-Processing Pixel Shaders Mipmapping (required for correct functioning of some games, like the Jak and Daxter series) Bilinear filtering and Trilinear filtering (OpenGL only) Anisotropic filtering Widescreen hacks Hardware requirements Hardware requirements are largely game-dependent. Due to the demanding nature of emulation, PCSX2 is much more likely to perform well with modern mid-range to high-end hardware, with lower-end systems likely to experience less than full performance. The performance bottleneck in most cases is the CPU rather than the GPU. This is especially the case in software mode, in which only the CPU is used for emulation. In hardware mode, the GPU emulates the graphics, but can still be a bottleneck if the internal resolution is set too high. Some games may also run slower due to unoptimized graphics code or weak video cards. As computer hardware has continued to advance, the likelihood of performance issues with PCSX2 has decreased correspondingly. Reception PCSX2 has been very well received. Matthew Humphries of Geek.com described it as "an impressive piece of work". Alex Garnett of PC World criticized the difficulty of setting up PCSX2 but called it a "masterpiece." David Hayward of Micro Mart also criticized the complexity, but called it "technically amazing." Sriram Gurunathan of In.com described PCSX2 as "arguably the most popular emulator around" and named it as one of the site's top five emulators. Brandon Widder of Digital Trends included PCSX2 in his Best Emulators article. John Corpuz of Tom's Guide mentioned PCSX2 in his Best PlayStation Emulators for PCs article, saying, "When it comes to stable, playable Playstation 2 emulation, PCSX2 is pretty much the best game in town at the moment." See also Dolphin, a Nintendo GameCube and Wii emulator List of video game emulators Mednafen References External links Free emulation software Free software programmed in C++ Linux emulation software MacOS emulation software PlayStation 2 emulators Software that uses wxWidgets Software using the GPL license Windows emulation software
23126423
https://en.wikipedia.org/wiki/3Com%20Products
3Com Products
On 12 April 2010, Hewlett-Packard completed a previously announced acquisition of 3Com. Following the HP acquisition, 3Com was fully absorbed by HP and no longer exists as a separate entity. The article below explains the portfolio at the time of acquisition. The corporation had three global brands—H3C, 3Com, and TippingPoint. Products sold under the 3Com brand include network switches, wireless access points, WAN routers, and IP voice systems. Switches Modular Switches Switch 8800 - 7-, 10- and 14-slot configurations with optional dual load-sharing fabric modules. Of total slots, two reserved for fabrics. Backplane capacity 1.44 Terabit-per-second. Up to 48 10GE ports or 576 Gigabit ports, with optional PoE. Switch 7900E - 4-, 5-, 8, and 12-slot configurations with optional dual fabric modules. One model with 8 vertical slots. Of total slots, two reserved for fabrics. Backplane capacity 2.88 Terabit-per-second. Up to 24 10GE ports or 480 Gigabit ports, with optional PoE. Switch 7550 - 4-, 7- and 8-slot configurations. 4- and 7- have a single fabric module, while 8-slot can have dual redundant fabric modules. Up to 292 Gigabit ports. Fully Managed Fixed Switches These switches support comprehensive Command Line Interface, Telnet, web interface, SNMP management, and interoperable single IP management clustering (basic stacking). Warranty for these is 3Com Limited Lifetime (up to five years after product is discontinued) with Next Business Day Advanced Hardware Replacement. Covers unit, power supply and fan. 90-day phone support. Lifetime access to software updates. Switch 5500G - Gigabit. Stackable, 24 or 48 Gb ports including 4 Dual Personality Ports (2 x Gb or SFPs). Additionally includes two dedicated 24Gbit/s stacking ports and one module expansion slot. Available modules: 2-Port 10GE XFP; 1-Port 10GE XENPAK; 8-Port Gb SFP. Layer 2 switching and Layer 3 static routing, RIP, and multicast routing (OSPF, PIM). Stackable up to 8-units high using 3Com XRN Technology, with distributed link aggregation, resilient stacking up to 96Gbit/s bandwidth, distributed routing tables, single IP address management. Two PoE models and upgradeable PoE. Single IP management clustering (basic stacking) also supported. Switch 4800G - Gigabit. Stackable, 24 or 48 Gb ports including 4 Dual Personality Ports (2 x Gb or SFPs). Additionally includes two module expansion slots. Available modules: 2-Port 10GE XFP; 1-Port 10GE XFP; 2-Port Local Connection; 2-Port Gb SFP. Two PoE models. Layer 2 switching and Layer 3 static routing, RIP, and multicast routing (OSPF, PIM). Stackable up to 9-units high using 3Com XRN Technology, with distributed link aggregation, resilient stacking up to 96Gbit/s bandwidth, single IP address management. Single IP management clustering (basic stacking) also supported. Switch 4500G - Gigabit. 24 or 48 Gb ports including 4 Dual Personality Ports (2 x Gb or SFPs). Additionally includes two module expansion slots. Available modules: 2-Port 10GE XFP; 1-Port 10GE XFP; 2-Port Local Connection. Two PoE models. Supports Layer 2 switching and Layer 3 static routing, RIP. Single IP management clustering basic stacking supported. Switch 4200G - Gigabit. 12, 24 or 48 Gb ports including 4 Dual Personality Ports (2 x Gb or SFPs). 12-Port module has one 10-Gigabit expansion slot; other units have two 10-Gigabit expansion slots. One PoE model. Units can directly take a 10GE XENPAK, or can receive a 1-Port 10GE XFP module. Supports Layer 2 switching and Layer 3 static routing. 
Single IP management clustering basic stacking supported. Switch 5500 - Fast Ethernet. 28- and 52-port models. Two PoE models. Stackable, 24 or 48 10/100 ports plus four SFP Gigabit Ports. One model 24 100Mb SFPs plus 2 x Gb and 2 x SFP Gigabit Ports. Supports Layer 2 switching and Layer 3 static routing, RIP, and multicast routing (OSPF, PIM). Stackable up to 8-units high using Gb ports and 3Com XRN Technology, with distributed link aggregation, resilient stacking, distributed routing tables, single IP address management. Single IP management clustering (basic stacking) also supported. Switch 4500 - Fast Ethernet. 26- and 50-port models. Two PoE models. Stackable, 24 or 48 10/100 ports plus 4 Dual Personality Ports (2 x Gb or SFPs). Supports Layer 2 switching and Layer 3 static routing, and RIP. Stackable up to 8-units high using Gigabit ports with distributed link aggregation, single IP address management. Single IP management clustering (basic stacking) also supported. Switch 4210 - Fast Ethernet. 9-, 18-, 26-, and 52-port models. Three PoE models. 10/100 ports plus one (9-port model) or two (all others) Dual Personality Ports (Gb or SFPs). Supports Layer 2 switching only. Single IP management clustering basic stacking supported. Web Managed Switches These "smart managed" switches support a web interface for changing unit configuration. They have a limited Command Line Interface and do not support remote access via Telnet. Warranty for these is 3Com Limited Lifetime (up to five years after product is discontinued) with Next Business Day Advanced Hardware Replacement. Covers unit, power supply and fan. 90 day phone support. Lifetime access to software updates. Baseline Plus 2900 - Gigabit. 16, 24 or 48 Gb ports including 4 Dual Personality Ports (2 x Gb or SFPs). One PoE model. Supports Layer 2 switching only. Baseline Plus 2200/2400 - Fast Ethernet. 26- and 50-port models. One PoE model. Supports Layer 2 switching only. Unmanaged Switches These switches are plug-and-play with no interface for modifying or adjusting the configuration. Baseline 2800 - Gigabit. 16 or 24 Gb ports. Supports Layer 2 switching only. Baseline 2000/2100 - Fast Ethernet. 16-, 24- and 26-port models. The 26-port model has two Gb ports. Supports Layer 2 switching only. Discontinued Products Switch 4400 - Fast Ethernet Switch 4200 - Fast Ethernet Switch 3870 - Gigabit Ethernet Switch 3800 - Gigabit Ethernet Switch 3300 - Fast Ethernet Ergo - 3Com Audrey References External links 3Com Corporation Website Lists of computer hardware
51778096
https://en.wikipedia.org/wiki/Hackers%20%28video%20game%29
Hackers (video game)
Hackers is an independent strategy game developed by Trickster Arts for Android and iOS. Hackers is a cyberwarfare strategy game, and has been likened to the video game Uplink. On iOS the game has an added suffix: Hackers - Join the Cyberwar. Players take on the role of a Hacker during a fictional First World Cyberwar. Players can develop and secure their own virtual 3D network and hack various targets around the world. Players can research programs, hack networks to increase their reputation and resources, and compete against other countries. The overarching strategy is to build a robust enough network to protect your data, while finding the best tools to use against others. The game features 70+ story missions with multiple ways to defeat them. There are also player-created networks to be infiltrated, which raise the position of your country on the global leaderboard. The game also has a developed ARG, starting from an in-game mission. Story Hackers takes place in an alternate reality, where a piece of software (codenamed Ergo) was designed to make hacking into systems easier. The software has been made available to the public, so that anyone and everyone can get into hacking. But unlike real-world hacking which consists of command line usage and programming, this alternate reality puts systems and the hacking process into a GUI (Graphical User Interface) to make things easier to try and hack other people. The game starts you out performing simple hacks with a recently met acquaintance called "Marty". As the game progresses you are introduced to other characters representing various factions such as StormCorp, MIRAGE, Cyber Alliance, Eastern Coalition and DejaVu. During the course of the story you get to experience various security, activist and terrorist missions that test your skills as a hacker. Development The game was developed in-house by Trickster Arts. The game was developed in Unity Engine. Development was started in Summer 2014. While working on Hero of Many the developers started working in parallel on another project. This project was cancelled for design reasons and Hackers was created from old gaming design notes out of the desire to keep the company going. Hackers was announced at Game Access '16 on 16 April 2016. The music was composed by Matúš Široký. The Hackers: Original Soundtrack was released via Bandcamp on October 5, 2017. Sounds were produced by Marek Horváth. Reception As of 27 September 2016, Hackers is featured globally by Apple on the front page of "New Games We Love" and, as of 13 November 2016, by Google Play in the "New Games" section in 23 countries. On 27 January 2017, Hackers was nominated for "Czech game of the year for Mobile Devices" at the annual Czech Game of the Year Awards. On 26 April 2017, Hackers was nominated for "Mobile Application of the Year 2017" at the annual Mobilní Aplikace Roku awards. As of 6 October 2017, Hackers has received 4 million downloads. References External links Official site Trickster Arts's website Personal site of Matouš Ježek Strategy video games Simulation video games Indie video games Fantasy video games Free-to-play video games Video games developed in the Czech Republic Android (operating system) games IOS games 2016 video games Multiplayer and single-player video games
2058080
https://en.wikipedia.org/wiki/MIDI%20keyboard
MIDI keyboard
A MIDI keyboard or controller keyboard is typically a piano-style electronic musical keyboard, often with other buttons, wheels and sliders, used for sending MIDI signals or commands over a USB or MIDI 5-pin cable to other musical devices or computers. MIDI keyboards lacking an onboard sound module cannot produce sounds themselves, however some models of MIDI keyboards contain both a MIDI controller and sound module, allowing them to operate independently. When used as a MIDI controller, MIDI information on keys or buttons the performer has pressed is sent to a receiving device capable of creating sound through modeling synthesis, sample playback, or an analog hardware instrument. The receiving device could be: a computer running a digital audio workstation (DAW) or a standalone VST/AU instrument (alternatively, the computer could be used to re-route the MIDI signal to other devices) a sound module a digital (digital piano/stage piano) or analogue (synthesizer) hardware instrument with MIDI capability, such as a drum machine While many digital and analog hardware keyboards in the aforementioned categories of digital piano, stage piano, and synthesizer can be used as MIDI controllers if they have MIDI capability, they often do not offer the same level of software integration and number of MIDI-mappable controls as a dedicated MIDI keyboard. MIDI keyboards are often utilized by individuals who work with DAWs and software instruments, from hobbyists to professional musicians working in recording studios or concert stages. Signal flow: MIDI keyboard to audio Below is an example of possible signal chains for a MIDI keyboard setup, with the goal of producing audio: MIDI Keyboard → 5-pin MIDI connector OR USB cable (will need a “B” connector, so "USB A to B" or "USB C to B", depending on computer) → computer running a DAW or a standalone VST/AU instrument OR a sound module OR a digital piano, stage piano, or synthesizer with MIDI capability → audio sound device (amplifier and speakers or headphones) Class compliance and power source When using a MIDI keyboard with a computer, class compliance must be taken into consideration. Class compliant means, essentially, "plug and play": upon being plugged in (USB or 5-pin) and powered up, MIDI keyboards that are class compliant should be recognized by any computer. MIDI keyboards and MIDI-capable hardware keyboards that are not class compliant require a keyboard-specific software driver to be installed on the computer in order for the keyboard to be recognized. While most MIDI keyboards produced in the 2010s are bus-powered, meaning their electrical power is supplied through the same USB connection that transfers MIDI data to the computer, some keyboards have the option of, or even require, using external power to operate. If using a traditional 5-pin MIDI connector instead of USB, the MIDI keyboard will likely require external power, as 5-pin MIDI connections cannot send the current needed to power a keyboard. If using a MIDI-capable hardware keyboard as a controller, one will also likely need external power, as most 2010s hardware keyboards rely on external power to function. Keyboard action The action of a keyboard is the internal mechanism by which the keys work in order to move and produce sound, or, in this case, MIDI data. Two major types of keyboard actions exist: those derived from traditional, European, key-based instruments and non-traditional, contemporary designs that allow for expanded playing possibilities. 
Traditional MIDI controllers in this category have keys meant to resemble those of a grand piano, pipe organ, or synthesizer. Each of these action types is designed differently from the next, which, in turn, gives the action a particular "feel" to the player and lends it to an ideal usage. Synth action: un-weighted, often spring-loaded — ideal for music that calls for playing quickly, and may be favored by those with no piano or keyboard experience because less effort is required to depress the keys; synth action is generally less expensive and weighs less than its weighted-key counterpart. Semi-weighted action: similar to synth action, but with slightly increased resistance — also ideal for playing quickly, but allows for more dynamic playing on velocity-sensitive sound patches due to increased feedback from the keys. Waterfall keys: semi-weighted keys with smoothed and rounded edges — meant to emulate organ keys (e.g. on a Hammond B-3 organ) and ideal for playing organ-specific techniques (e.g. palm glissandos/smears). Hammer action: piano-style action with fully weighted keys — meant to provide resistance similar to an acoustic piano action and ideal for playing piano or electric piano instrument patches. Graded hammer action: hammer action with graded resistance — keys in the upper register are lighter than keys in the lower register, and resistance increases incrementally as one descends the keyboard from high to low, like an acoustic piano action (e.g. a grand piano). Graded hammer action with wood: graded hammer action with wooden key cores — the closest in realism to an acoustic piano action (as acoustic piano keys are made of wood). Many examples of the above actions, other than the waterfall keys, include a small lip that protrudes from the top of the distal end of the white keys. This emulates a customary design detail found on acoustic piano keys. Keyboards with any type of hammer action are most likely to display this lip. Sensor type When a musician presses a key, three things can be measured: (1) how fast the key is pressed, called attack velocity; (2) how hard the key continues to be pressed while held down, called key pressure (or channel pressure when a single value applies to all keys on the keyboard); and (3) how fast the key is released, called release velocity. Inexpensive keyboards have no velocity reporting. Quality keyboards report attack velocity. Industrial and professional keyboards report both attack velocity and key pressure, and only a handful of very exotic and expensive keyboards report release velocity. Every manufacturer refers to these three abilities by different names; some manufacturers refer to key pressure as aftertouch. Most of these traditional-key keyboards determine the attack velocity, sustain, and release of a note based on a calculation made between two sensors in each key. Some high-end keyboards now feature triple sensors, claiming improved accuracy in the tracking of key movement, which could translate into a more detailed, and perhaps more expressive, performance. Aftertouch Some MIDI keyboards are capable of sending aftertouch data, which can be assigned to a variety of effects, including vibrato, pitch bends, and volume swells. Aftertouch data is generated when a key is depressed further into the keybed after its initial depression (without releasing the key). Keyboards can be equipped with channel or polyphonic aftertouch. 
The former sends only one aftertouch message, regardless of which key is depressed; the latter sends individual aftertouch messages for each key. Keyboards with polyphonic (per-key) aftertouch enable the performer to create aftertouch effects on particular notes, such as emphasizing a melody note by continuing to press it. Non-traditional Not all MIDI keyboards utilize variations on the traditional piano-style action. One example of a MIDI keyboard with a non-traditional action is the Continuum Fingerboard, which is based on a "fretless"-type keyboard interface, enabling portamento-style note changes at will during play. Another unconventional MIDI keyboard is the Tonal Plexus keyboard, which provides up to 1266 different pitches through the 1266 keys of its TPX6 microtonal MIDI controller. The ROLI Seaboard line of MIDI keyboards has soft, squishy keys with a foam rubber-like texture. All the above-mentioned MIDI keyboards take the concept of aftertouch further: for instance, the ROLI Seaboards can sense left-to-right and front-to-back position, downward pressure/depth, and the rate of change in each of these parameters. All that information can then be used to control the behavior of a digital instrument. Size and number of keys MIDI keyboards come in a wide range of sizes and numbers of keys, each with its own benefits and drawbacks. Generally speaking, some sizes are more common while others are less common, both in online stores and in brick-and-mortar music stores. Common 25-key: generally the smallest MIDI keyboard available — portable and light, but only suitable for playing one-handed piano parts such as basslines. 49-key: considered the smallest MIDI keyboard size upon which to play two-handed piano parts effectively. 61-key: ideal size for playing two-handed piano parts — same key count as a single manual of a classic organ like the Hammond B-3. 88-key: full key range — same key count as an acoustic piano — the "gold standard" for playing piano and electric piano, but the least portable and heaviest of the MIDI keyboards. Uncommon 32 & 37-key: compared to 49-key keyboards, these are more compact and easy to carry; compared with a 25-key keyboard, they offer a greater range. 64-key: Roland has made a 64-key hammer-action keyboard (the RD-64), which is more compact than regular 88-key models. 73 & 76-key: compared to 88-key keyboards, these are more compact and easy to carry; compared with a 61-key keyboard, they offer a greater range. Modular: some small manufacturers make collapsible keyboards (Kombos, Piano de Voyage) built from sections of 1 to 2 octaves; these keyboards are focused on portability. Most 25-key through 49-key keyboards come equipped with synth or semi-weighted actions. Most 49-key and 61-key keyboards come equipped with semi-weighted actions, but some may be found with hammer actions. Waterfall keys can be found occasionally on some 61-key and 73-key keyboards. Most 73-key through 88-key keyboards come equipped with hammer actions; however, some may be outfitted with semi-weighted actions. Key size MIDI keyboards usually have full-size keys, like a grand piano. Some smaller keyboards use minikeys, and some very compact keyboards have flat minikeys, which are smaller still. Controls (buttons, knobs, faders, wheels) Many MIDI keyboards include control devices (other than the keys), which may allow for manipulation of DAW controls and/or the sound generator (either software or hardware). 
Provided they are mapped, or mappable, to the correct function, these allow the player to access a DAW or alter the sound of an instrument patch without taking their hands off the keyboard. Buttons: may be assignable or allow for editing of menus, settings, MIDI CC information, splits, use of DAW transport controls, the transpose function, or octave up/down. Knobs: either potentiometers or endless encoders — typically assignable to a function. Sliders/faders: typically assignable — common assignments are as drawbars for an organ, to control channel faders, or to control equalization (EQ). Pitch bend and modulation: allows for control of pitch bending and modulation via one of several control setups. Traditional two wheels: a spring-loaded pitch bend wheel and a modulation wheel. X/Y joystick: capable of rotation in a full circle — pitch bend and modulation are mapped out on an X/Y plot. Roland-style pitch/modulation: a spring-loaded, three-directional lever — left-to-right for pitch bend and up for modulation. Nord pitch stick & modulation: a tensioned wooden stick that moves left-to-right for pitch bend, paired with a traditional modulation wheel (made of ceramic). Touch strips: touch-sensitive strips assignable to several functions — less common on most keyboards. Pads: velocity-sensitive, rubberized pads — can be assigned to trigger events from drum hits to patch changes. Breath controller: a device with a mouthpiece that is gripped by the teeth or held in front of the mouth with a holder; by blowing into it, the user can control synth tones, giving a vocal, singing quality to notes played on the keyboard. Screens: depending on make and model, some keyboards have a simple non-touch display, such as a 1980s-style LCD screen, while others have touch screens that interface intuitively with companion software. Software integration: some MIDI keyboards offer an automatic configuration option to integrate with a DAW. Foot controllers MIDI keyboards often have the ability to accept foot controllers, of which there are four main types: piano pedals, expression pedals, stomp boxes, and organ-style foot pedal keyboards. Piano pedals: can be a single sustain pedal (of two types) or up to a full grand piano-style triple pedal, with sostenuto and una corda pedals, like an acoustic piano. Sustain pedals: momentary sustain pedals send a message only when the pedal is switched on or off, while continuous sustain pedals send a real-time MIDI value of the sustain pedal's position, allowing for realistic tracking of the pedal's movement (provided the controller and sound source support continuous sustain). Expression pedals: send a continuous MIDI value that can be mapped to control parameters like volume (a "swell pedal"), effects (chorus, reverb, etc.), and patch changes. MIDI stomp boxes: similar in form to a guitar pedal board, but configurable to send MIDI commands for various actions (e.g. patch changes, looping, or playing chords). Organ foot pedal keyboards: traditional organ pedal keyboards for those used to playing Hammond B-3s or church organs. While they are intended to send MIDI values for organ bass notes, if they are hooked up to a DAW the pedals can trigger chords or play a high-pitched melody. The 12 Step foot controller can be programmed to play a wide range of pitches, from deep-pitched basslines to high-pitched melodies. In addition, the 12 Step's individual keys can be programmed to play chords with up to five notes. 
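The messages described in the preceding sections (note-on/note-off with velocity, channel and polyphonic aftertouch, pitch bend, and the control-change messages sent by wheels, faders and pedals) can be inspected directly on a computer. The following is an illustrative sketch only, not something the article prescribes: it assumes a class-compliant keyboard is already connected and uses the third-party Python library mido (with the python-rtmidi backend). Controller numbers 1 (modulation wheel), 11 (expression) and 64 (sustain) are the conventional MIDI control-change assignments.

```python
# Minimal sketch: print incoming MIDI messages from a class-compliant keyboard.
# Assumes the third-party packages "mido" and "python-rtmidi" are installed.
import mido

print(mido.get_input_names())        # a class-compliant keyboard should appear here

with mido.open_input() as port:      # default input port; pass a name to pick another
    for msg in port:                 # blocks, yielding one MIDI message at a time
        if msg.type == 'note_on' and msg.velocity > 0:
            print(f"key {msg.note} down, attack velocity {msg.velocity}")
        elif msg.type == 'note_off' or (msg.type == 'note_on' and msg.velocity == 0):
            print(f"key {msg.note} up")
        elif msg.type == 'aftertouch':      # channel pressure: one value for all keys
            print(f"channel aftertouch {msg.value}")
        elif msg.type == 'polytouch':       # polyphonic pressure: a value per key
            print(f"poly aftertouch on key {msg.note}: {msg.value}")
        elif msg.type == 'control_change':  # CC 1 = mod wheel, CC 11 = expression, CC 64 = sustain
            print(f"CC {msg.control} = {msg.value}")
        elif msg.type == 'pitchwheel':
            print(f"pitch bend {msg.pitch}")
```

Running a monitor like this is a quick way to verify which of the controls discussed in this section a particular keyboard actually transmits, and on which controller numbers.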
References External links MIDI Keyboard Limits, an interactive animation by Michael Schreiber, The Wolfram Demonstrations Project, 2007. Why MIDI matters (midi.org) Keyboard instruments MIDI controllers Electric and electronic keyboard instruments
3781117
https://en.wikipedia.org/wiki/PC-based%20IBM%20mainframe-compatible%20systems
PC-based IBM mainframe-compatible systems
Since the rise of the personal computer in the 1980s, IBM and other vendors have created PC-based systems that are compatible with the larger IBM mainframe computers. For a period of time, PC-based mainframe-compatible systems had a lower price and did not require as much electricity or floor space. However, they sacrificed performance and were not as dependable as mainframe-class hardware. These products have been popular with mainframe developers, in education and training settings, for very small companies with non-critical processing, and in certain disaster relief roles (such as field insurance adjustment systems for hurricane relief). Background Up until the mid-1990s, mainframes were very large machines that often occupied entire rooms. The rooms were often air conditioned and had special power arrangements to accommodate the three-phase electric power required by the machines. Modern mainframes are now physically comparatively small and require little or no special building arrangements. System/370 IBM had demonstrated use of a mainframe instruction set in its first desktop computer, the IBM 5100, released in 1975. This product used microcode to execute many of the System/370's processor instructions, so that it could run a slightly modified version of IBM's APL mainframe program interpreter. In 1980 rumors spread of a new IBM personal computer, perhaps a miniaturized version of the 370. In 1981 the IBM Personal Computer appeared, but it was not based on the System/370 architecture. However, IBM did use its new PC platform to create some exotic combinations with additional hardware that could execute S/370 instructions locally. Personal Computer XT/370 In October 1983, IBM announced the IBM Personal Computer XT/370. This was essentially a three-in-one product: it could run PC DOS locally; it could act as a 3270 terminal; and, its most important distinguishing feature relative to an IBM 3270 PC, it could execute S/370 instructions locally. The XT/370 was an IBM Personal Computer XT (System Unit 5160) with three custom 8-bit cards. The processor card (370PC-P) contained two modified Motorola 68000 chips (which could emulate most S/370 fixed-point and other non-floating-point instructions) and an Intel 8087 coprocessor modified to emulate the S/370 floating-point instructions. The second card (370PC-M), which connected to the first with a unique card-back connector, contained 512 KiB of memory. The third card (PC3277-EM) was a 3270 terminal emulator required to download system software from the host mainframe. The XT/370 computer booted into DOS, then ran the VM/PC Control Program. The card's memory space added additional system memory, so the first 256 KiB of motherboard memory could be used to move data to the 512 KiB expansion card. The expansion memory was dual-ported and provided an additional 384 KiB to the XT machine, bringing the total RAM on the XT side to 640 KiB. The memory arbitrator could bank-switch the second 128 KiB bank on the card to other banks, allowing the XT's Intel 8088 processor to address all the RAM on the 370PC-M card. Besides the 416 KB of usable RAM for S/370 applications, the XT/370 also supported up to 4 MB of virtual memory using the hard drive as its paging device. IBM claimed the XT/370 reached 0.1 MIPS (when the data fit in RAM). 
In 1984, the list price of an XT/370 in its typical configuration was approximately $12,000, which compared favorably with IBM's own mainframes on a $/MIPS basis; for example, an IBM 4341 delivered 1.2 MIPS for $500,000. While it theoretically reduced demand on customers' mainframes by offloading load onto the smaller computer, as customers purchased more XT/370s they likely increased the overall load on the mainframes, increasing IBM's mainframe sales. Like the mainframe version of VM/CMS, VM/PC also created the illusion of virtual disks, but on the PC version these were maintained as PC DOS files, either on floppy or hard disk. For example, the CMS virtual disk belonging to user FRED at device address 101 was stored as the DOS file FRED.101. The CMS IMPORT and EXPORT commands allowed extraction of files from these virtual drives as well as ASCII/EBCDIC conversion. The XT/370 came with an XT-style 83-key keyboard (10 function keys). Newer revisions of the XT/370 dropped the PC3277-EM in favor of the IBM 3278/79 boards. The XT/370 was among the XT systems that could use a second hard drive mounted in the 5161 expansion chassis. BYTE in 1984 called the XT/370 "a qualified success". The magazine praised IBM for "fitting all of the 370's features into the XT", and hoped for technical improvements that "might result in an even better computer". Personal Computer AT/370 In 1984, IBM introduced the IBM Personal Computer AT/370 with cards similar to those of the XT/370 and updated software, supporting both larger hard disks and DMA transfers from the 3277 card to the AT/370 processor card. The system was almost 60% faster than the XT/370. The AT/370 used different co-processing cards than the XT, with a 16-bit interface, called the PC/370-P2 and PC/370-M2. The latter card still had only 512 KB of memory, of which 480 KB were usable for programs in S/370 mode, while 32 KB were reserved for microcode storage. For the terminal emulation function, the AT/370 came with the same 3278/79 Emulation Adapter as the late-series XT/370. The AT/370 motherboard itself was equipped with 512 KB of RAM. The AT/370 also ran VM/PC, but with PC DOS 3.0 instead of the 2.10 that the XT version used. VM/PC version 2, launched in November 1985, improved performance by up to 50%; it allowed add-on memory (in addition to the disk) to be used as a page cache for VM. A November 1985 Computerworld article noted that the machine was "slow selling". IBM 7437 VM/SP Technical Workstation In April 1988, IBM introduced a System/370 workstation that had been shipping to some customers since August 1987. Officially called the IBM 7437 VM/SP Technical Workstation (and later also known as the Personal System/370), it was a freestanding tower that connected to an MCA card installed in a PS/2 Model 60, 70, or 80. The 7437 tower contained the processor and a 16 MB main memory, and the PS/2 provided I/O and disk storage. The 7437 ran the IBM VM/SP operating system, and one IBM representative described the 7437 as "like a 9370 with a single terminal". It was intended for existing S/370 users, and its November 1988 list price was $18,100 for a minimum 25-unit order. One of its intended roles was to provide a single-user S/370-compatible computer that could run computer-aided design and engineering applications that originated on IBM mainframes, such as CADAM and CATIA. Graphics support was provided by an IBM 5080 graphics system, a floor-standing tower. The 5080 was connected to the 7437 through the PS/2 via a cable and MCA adapter. 
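The ASCII/EBCDIC translation that the CMS IMPORT and EXPORT commands performed when moving files between DOS and the CMS virtual disks can be illustrated in a few lines. This is a hypothetical sketch, not IBM's code: it uses Python's built-in code page 037 codec (EBCDIC US/Canada) to round-trip a short string between the PC side's ASCII/Unicode text and the EBCDIC bytes the S/370 side would expect.

```python
# Illustrative only: round-trip text between ASCII/Unicode and EBCDIC (code page 037),
# the kind of conversion VM/PC's IMPORT and EXPORT commands handled for CMS virtual disks.
text = "HELLO FRED 101"

ebcdic_bytes = text.encode("cp037")           # text -> EBCDIC bytes
round_tripped = ebcdic_bytes.decode("cp037")  # EBCDIC bytes -> text again

print(ebcdic_bytes.hex())   # 'HELLO' encodes to the EBCDIC bytes c8 c5 d3 d3 d6 ...
assert round_tripped == text
```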
Personal/370 Later, IBM introduced the Personal/370 (aka P/370), a single-slot 32-bit MCA card that can be added to a PS/2 or RS/6000 computer to run System/370 operating systems (such as MUSIC/SP, VM and VSE) in parallel with OS/2 (on the PS/2) or AIX (on the RS/6000), supporting multiple concurrent users. It is a complete implementation of the S/370 processor, including an FPU co-processor, with 16 MB of memory. Management and standard I/O channels are provided via the host OS/hardware. An additional 370 channel card can be added to provide mainframe-specific I/O such as 3270 local control units, 3400/3480 tape drives or 7171 protocol converters. Although a single-card product, the P/370 ran three times faster than the 7437, attaining 3.5 MIPS, on par with a low-end IBM 4381. A subsequent book (by the same author) claims 4.1 MIPS for the P/370. The Personal/370 was available as early as November 1989, although on a "special bid" basis. System/390 In 1995 IBM introduced a card, the "Enhanced S/390 MicroProcessor Complex", which supported the IBM ESA/390 architecture on a PC-based system. IBM's PC-related products evolved to support that as well, employing the card (IBM part number 8640-PB0) in the IBM PC Server 330 in 1998 and the IBM PC Server 500 models. S/390 Processor Card An important goal in the design of the S/390 Processor Card was complete compatibility with existing mainframe operating systems and software. The processor implements all of the ESA/390 and XA instructions, which eliminates the need for instruction translation. There are three generations of the card: The original S/390 Processor Card incorporated 32 MB of dedicated memory, with optional 32 MB or 96 MB daughter cards, for a combined total of 64 MB or 128 MB of RAM. The processor was officially rated at 4.5 MIPS. It was built to plug into a MicroChannel host system. The second version was built for a PCI host system. It included 128 MB of dedicated memory as standard, and was still rated at 4.5 MIPS. The third version, referred to as a P/390E card (for Enhanced), included 256 MB of dedicated memory and was rated at 7 MIPS. It, too, was built for a PCI host system. There was an extremely rare (possibly only ever released as pre-production samples) 1 GB memory version of the P/390E card. R/390 R/390 was the designation for the expansion card installed in an IBM RS/6000 server. The original R/390 featured a 67 or 77 MHz POWER2 processor and 32 to 512 MB of RAM, depending on the configuration. The MCA P/390 expansion card can be installed in any MCA RS/6000 system, while the PCI P/390 card can be installed in a number of early PCI RS/6000s; all such configurations are referred to as an R/390. R/390 servers need to run AIX version 4 as the host operating system. P/390 P/390 was the designation for the expansion card installed in an IBM PC Server; it was less expensive than the R/390. The original P/390 server was housed in an IBM PC Server 500 and featured a 90 MHz Intel Pentium processor for running OS/2. The model was revised in mid-1996 and rebranded as the PC Server 520, which featured a 133 MHz Intel Pentium processor. Both models came standard with 32 MB of RAM and were expandable to 256 MB. The PC Server 500 featured eight MCA expansion slots, while the PC Server 520 added two PCI expansion slots and removed two MCA slots. S/390 Integrated Server The S/390 Integrated Server (aka S/390 IS) is a mainframe housed in a comparatively small case (H x W x D: 82 x 52 x 111 cm). It became available in November 1998. 
It is intended for customers who do not require the I/O bandwidth and performance of the S/390 Multiprise 3000 (which is the same size). Only 256 MB of ECC memory and a single CMOS main processor (with performance of about 8 MIPS) are used; the S/390 CPU used in the Integrated Server is in fact the P/390E card. A Pentium II is used as the IOSP (I/O Service Processor). It supports four ESCON channels and up to four parallel channels. Standard PCI and ISA slots are present. A maximum of 255 GB of internal hard disk storage is supported (16 x 18 GB drives, with 2 drives for redundancy). The supported operating systems are OS/390, MVS/ESA, VM/ESA and VSE/ESA. Fujitsu PC-based systems Fujitsu offers two Intel Xeon-based systems that make up the lower end of Fujitsu's S/390-based BS2000 mainframe product line. The SQ100 is the slower configuration, using dual-core 2.93 GHz Intel Xeon E7220 processors, and is capable of up to 200 RPF of performance. The SQ200 was introduced more recently, uses six-core 2.66 GHz Xeon X7542 processors, and has performance of up to 700 RPF. All Intel 64-based BS2000 mainframes can run Linux or Windows in separate partitions. Fujitsu also continues to make custom S/390-native processors and mainframe hardware for the high end of its BS2000 line. z/Architecture and today Since the late 1990s, PC processors have become fast enough to perform mainframe emulation without the need for a coprocessor card. There are currently several personal computer emulators available that support System/390 and z/Architecture. FLEX-ES by Fundamental Software emulates both System/390 (ESA/390) and z/Architecture; it was claimed to be one of the most popular PC-based IBM-compatible mainframe products (as of 2006). While FLEX-ES is capable of running on most PC hardware, the licensing agreement requires that FLEX-ES must run on the machine with which it was sold; in the past, this included Compaq Proliant and HP servers, but today this is nearly always an approved IBM xSeries server or a ThinkPad laptop. Hercules is an open-source emulator for the System/370, System/390, and z/Architecture instruction sets. It does, however, require a complete operating system in order to execute application programs. While IBM does not license its current operating systems to run on Hercules, earlier System/370 operating systems are in the public domain and can be legally run on Hercules. zPDT (System/z Personal Development Tool) is an IBM offering allowing IBM PartnerWorld Independent Software Vendors (ISVs) to legally run z/OS 1.6 (or higher), DB2 V8 (or higher), z/TPF, or z/VSE 4.1 (or higher) on PC-based machines, with the emulation running under Linux. IBM ZD&T (Z Development and Test Environment) is an IBM offering that provides an x86-based environment which emulates Z hardware and runs genuine z/OS software, offering application portability and compatibility. IBM Z Development and Test Environment can be used for education, demonstration, and development and test of applications that include mainframe components. z390 and zCOBOL form a portable macro assembler and COBOL compiler, linker, and emulator toolkit providing a way to develop, test, and deploy mainframe-compatible assembler and COBOL programs using any computer that supports a J2SE 1.6.0+ runtime. 
See also List of IBM products References External links P/390 and R/390 with OS/390: An Introduction (IBM Redbook) P/390, R/390, S/390 Integrated Server: OS/390 New User's Cookbook (IBM Redbook) S/390 Integrated Server - Hardware Announcement; September 8, 1998 VM/ESA Performance on P/390 and R/390 PC Server 520 and RS/6000 591 Detail pictures of a PC Server 500, on the private website of Alfred Arnold Detail pictures of a S/390 IS (incl. screenshot of console), on the private website of Michael J. Ross P/390 Information at '9595 Ardent Tool of Capitalism' IBM PC Server System/390 FAQ at '9595 Ardent Tool of Capitalism' zPDT: Introduction and Reference. (IBM Redbook) zPDT: User's guide (IBM Manual) zPDT worldwide distributor zPDT for Rational Developer for System z Unit Test Micro/370 - the chips used in the XT/370 A performance evaluation of the IBM 370/XT personal computer, NASA IBM PC compatibles Mainframe-compatible systems IBM System/360 mainframe line
68927
https://en.wikipedia.org/wiki/Economy%20of%20Bangladesh
Economy of Bangladesh
The Economy of Bangladesh is characterised as a developing market economy. It is the 33rd largest in the world in nominal terms, and 31st largest by purchasing power parity. It is classified among the Next Eleven emerging market middle income economies and a frontier market. In the first quarter of 2019, Bangladesh was the world's seventh fastest-growing economy, with a rate of 8.3% real annual GDP growth. Dhaka and Chittagong are the principal financial centres of the country, being home to the Dhaka Stock Exchange and the Chittagong Stock Exchange. The financial sector of Bangladesh is the third largest in the Indian subcontinent. Bangladesh is one of the fastest-growing economies in the world as well as the fastest-growing economy in South Asia. In the decade following 2004, Bangladesh averaged GDP growth of 4.5%. This growth has been largely driven by its exports of ready-made garments, remittances and the domestic agricultural sector. The country has pursued export-oriented industrialisation, with its key export sectors including textiles, shipbuilding, fish and seafood, jute and leather goods. It has also developed self-sufficient industries in pharmaceuticals, steel and food processing. Bangladesh's telecommunication industry has witnessed rapid growth over the years, receiving high investment from foreign companies. Bangladesh also has substantial reserves of natural gas and is Asia's seventh largest gas producer. Offshore exploration activities are increasing in its maritime territory in the Bay of Bengal. It also has large deposits of limestone. The government promotes the Digital Bangladesh scheme as part of its efforts to develop the country's growing information technology sector. Bangladesh is strategically important for the economies of Nepal and Bhutan, as Bangladeshi seaports provide maritime access for these landlocked regions and countries. China also views Bangladesh as a potential gateway for its landlocked southwest, including Tibet, Sichuan and Yunnan. Bangladesh is a member of the D-8 Organization for Economic Cooperation, the South Asian Association for Regional Cooperation, the International Monetary Fund, the World Bank, the World Trade Organization and the Asian Infrastructure Investment Bank. The economy faces challenges of infrastructure bottlenecks, bureaucratic corruption, and youth unemployment. Economic history Ancient Bengal Ancient Bengal was the site of several major Janapadas (kingdoms), while the earliest cities date back to the Vedic period. East Bengal, the eastern segment of Bengal, was a historically prosperous region. The Ganges Delta provided the advantages of a mild, almost tropical climate, fertile soil, ample water, and an abundance of fish, wildlife, and fruit. The standard of living is believed to have been higher compared with other parts of South Asia. As early as the thirteenth century, the region was developing as an agrarian economy. Bengal was the junction of trade routes on the Southeastern Silk Road. Bengal Sultanate The economy of the Bengal Sultanate inherited earlier aspects of the Delhi Sultanate, including mint towns, a salaried bureaucracy and the jagirdar system of land ownership. The production of silver coins inscribed with the name of the Sultan of Bengal was a mark of Bengali sovereignty. Bengal was more successful in perpetuating purely silver coinage than Delhi and other contemporary Asian and European governments. There were three sources of silver. The first source was the leftover silver reserve of previous kingdoms. 
The second source was the tribute payments of subordinate kingdoms, which were paid in silver bullion. The third source was military campaigns, when Bengali forces sacked neighboring states. The apparent vibrancy of the Bengal economy in the beginning of the 15th century is attributed to the end of tribute payments to Delhi, which ceased after Bengali independence and stopped the outflow of wealth. Ma Huan's testimony of a flourishing shipbuilding industry was part of the evidence that Bengal enjoyed significant seaborne trade. The expansion of muslin production, sericulture and the emergence of several other crafts were indicated in Ma Huan's list of items exported from Bengal to China. Bengali shipping co-existed with Chinese shipping until the latter withdrew from the Indian Ocean in the mid-15th century. The testimony of European travelers such as Ludovico di Varthema, Duarte Barbosa and Tomé Pires attests to the presence of a large number of wealthy Bengali merchants and shipowners in Malacca. Historian Rila Mukherjee wrote that ports in Bengal may have been entrepots, importing goods and re-exporting them to China. A vigorous riverine shipbuilding tradition existed in Bengal. The shipbuilding tradition is evidenced in the sultanate's naval campaigns in the Ganges delta. The trade between Bengal and the Maldives, based on rice and cowry shells, was probably done on Arab-style baghlah ships. Chinese accounts point to Bengali ships being prominent in Southeast Asian waters. A vessel from Bengal, probably owned by the Sultan of Bengal, could accommodate three tribute missions (from Bengal, Brunei and Sumatra) and was evidently the only vessel capable of such a task. Bengali ships were the largest vessels plying in those decades in Southeast Asian waters. All large business transactions were done in terms of silver taka. Smaller purchases involved shell currency. One silver coin was worth 10,250 cowry shells. Bengal relied on shiploads of cowry shell imports from the Maldives. Due to the fertile land, there was an abundance of agricultural commodities, including bananas, jackfruits, pomegranate, sugarcane, and honey. Native crops included rice and sesame. Vegetables included ginger, mustard, onions, and garlic, among others. There were four types of wines, including coconut, rice, tarry and kajang. Bengali streets were well provided with eating establishments, drinking houses and bathhouses. At least six varieties of fine muslin cloth existed. Silk fabrics were also abundant. Pearls, rugs and ghee were other important products. The finest variety of paper was made in Bengal from the bark of mulberry trees. The high quality of the paper was compared with the lightweight white muslin cloth. Europeans referred to Bengal as "the richest country to trade with". Bengal was the eastern pole of Islamic India. Like the Gujarat Sultanate on the western coast of India, Bengal in the east was open to the sea and accumulated profits from trade. Merchants from around the world traded in the Bay of Bengal. Cotton textile exports were a unique aspect of the Bengali economy. Marco Polo noted Bengal's prominence in the textile trade. In 1569, Venetian explorer Caesar Frederick wrote about how merchants from Pegu in Burma traded in silver and gold with Bengalis. Overland trade routes such as the Grand Trunk Road connected Bengal to northern India, Central Asia and the Middle East. Mughal Bengal Under Mughal rule, Bengal operated as a centre of the worldwide muslin, silk and pearl trades. 
Domestically, much of India depended on Bengali products such as rice, silks and cotton textiles. Overseas, Europeans depended on Bengali products such as cotton textiles, silks and opium; Bengal accounted for 40% of Dutch imports from Asia, for example. Bengal shipped saltpeter to Europe, sold opium in Indonesia, exported raw silk to Japan and the Netherlands, and produced cotton and silk textiles for export to Europe, Indonesia and Japan. Real wages and living standards in 18th-century Bengal were comparable to Britain, which in turn had the highest living standards in Europe. During the Mughal era, the most important centre of cotton production was Bengal, particularly around its capital city of Dhaka, leading to muslin being called "daka" in distant markets such as Central Asia. Bengali agriculturalists rapidly learned techniques of mulberry cultivation and sericulture, establishing Bengal as a major silk-producing region of the world. Bengal accounted for more than 50% of textiles and around 80% of silks imported by the Dutch from Asia, for example. Bengal also had a large shipbuilding industry. Indrajit Ray estimates the shipbuilding output of Bengal during the sixteenth and seventeenth centuries at 223,250 tons annually, compared with 23,061 tons produced in nineteen colonies in North America from 1769 to 1771. He also assesses ship repairing as very advanced in Bengal. Bengali shipbuilding was advanced compared to European shipbuilding at the time. An important innovation in shipbuilding was the introduction of a flushed deck design in Bengal rice ships, resulting in hulls that were stronger and less prone to leak than the structurally weak hulls of traditional European ships built with a stepped deck design. The British East India Company later duplicated the flushed-deck and hull designs of Bengal rice ships in the 1760s, leading to significant improvements in seaworthiness and navigation for European ships during the Industrial Revolution. British Bengal The British East India Company, which took complete control of Bengal in 1793 by abolishing Nizamat (local rule), chose to develop Calcutta, now the capital city of West Bengal, as its commercial and administrative center for the Company-held territories in South Asia. The development of East Bengal was thereafter limited to agriculture. The administrative infrastructure of the late eighteenth and nineteenth centuries reinforced East Bengal's function as the primary agricultural producer—chiefly of rice, tea, teak, cotton, sugar cane and jute—for processors and traders from around Asia and beyond. Modern Bangladesh After its independence from Pakistan, Bangladesh followed a socialist economy by nationalising all industries, a move that proved to be a critical blunder by the Awami League government. Some of the same factors that had made East Bengal a prosperous region became disadvantages during the nineteenth and twentieth centuries. As life expectancy increased, the limitations of land and the annual floods increasingly became constraints on economic growth. Traditional agricultural methods became obstacles to the modernisation of agriculture. Geography severely limited the development and maintenance of a modern transportation and communications system. The partition of British India and the emergence of India and Pakistan in 1947 severely disrupted the economic system. 
The united government of Pakistan expanded the cultivated area and some irrigation facilities, but the rural population generally became poorer between 1947 and 1971 because improvements did not keep pace with the rural population increase. Pakistan's five-year plans opted for a development strategy based on industrialisation, but the major share of the development budget went to West Pakistan, that is, contemporary Pakistan. The lack of natural resources meant that East Pakistan was heavily dependent on imports, creating a balance of payments problem. Without a substantial industrialisation programme or adequate agrarian expansion, the economy of East Pakistan steadily declined. Blame was placed by various observers, but especially those in East Pakistan, on the West Pakistani leaders who dominated not only the government but also most of the fledgling industries in East Pakistan. Because Bangladesh followed a socialist economy, nationalising all industries after its independence, it was slow to develop a pool of experienced entrepreneurs, managers, administrators, engineers, and technicians. There were critical shortages of essential food grains and other staples because of wartime disruptions. External markets for jute had been lost because of the instability of supply and the increasing popularity of synthetic substitutes. Foreign exchange resources were minuscule, and the banking and monetary systems were unreliable. Although Bangladesh had a large work force, the vast reserves of undertrained and underpaid workers were largely illiterate, unskilled, and underemployed. Commercially exploitable industrial resources, except for natural gas, were lacking. Inflation, especially for essential consumer goods, ran between 300 and 400 percent. The war of independence had crippled the transportation system. Hundreds of road and railroad bridges had been destroyed or damaged, and rolling stock was inadequate and in poor repair. The new country was still recovering from a severe cyclone that hit the area in 1970 and caused 250,000 deaths. India came forward immediately with critically measured economic assistance in the first months after Bangladesh achieved independence from Pakistan. Between December 1971 and January 1972, India committed US$232 million in aid to Bangladesh from the politico-economic aid India received from the US and USSR. The official amount of the disbursement has not been disclosed. After 1975, Bangladeshi leaders began to turn their attention to developing new industrial capacity and rehabilitating its economy. The static economic model adopted by these early leaders, however—including the nationalisation of much of the industrial sector—resulted in inefficiency and economic stagnation. Beginning in late 1975, the government gradually gave greater scope to private sector participation in the economy, a pattern that has continued. Many state-owned enterprises have been privatised, including in banking, telecommunications, aviation, media, and jute. Inefficiency in the public sector has nevertheless been rising gradually; external resistance to developing the country's richest natural resources has mounted; and shortcomings in the power sector and infrastructure have contributed to slowing economic growth. In the mid-1980s, there were encouraging signs of progress. Economic policies aimed at encouraging private enterprise and investment, privatising public industries, reinstating budgetary discipline, and liberalising the import regime were accelerated. 
From 1991 to 1993, the government successfully followed an enhanced structural adjustment facility (ESAF) with the International Monetary Fund (IMF) but failed to follow through on reforms, in large part because of preoccupation with the government's domestic political troubles. In the late 1990s the government's economic policies became more entrenched, and some gains were lost, which was highlighted by a precipitous drop in foreign direct investment in 2000 and 2001. In June 2003 the IMF approved a 3-year, $490-million plan as part of the Poverty Reduction and Growth Facility (PRGF) for Bangladesh that aimed to support the government's economic reform programme up to 2006. Seventy million dollars was made available immediately. In the same vein, the World Bank approved $536 million in interest-free loans. The economy has seen continuous real GDP growth of at least 5% since 2003. In 2010, the Government of India extended a line of credit worth $1 billion to counterbalance China's close relationship with Bangladesh. Bangladesh historically has run a large trade deficit, financed largely through aid receipts and remittances from workers overseas. Foreign reserves dropped markedly in 2001 but stabilised in the US$3 to US$4 billion range (or about 3 months' import cover). In January 2007, reserves stood at $3.74 billion; they increased to $5.8 billion by January 2008, surpassed $10.0 billion in November 2009, and as of April 2011 exceeded US$12 billion according to Bangladesh Bank, the central bank. The dependence on foreign aid and imports has also decreased gradually since the early 1990s. According to Bangladesh Bank, reserves stood at $30 billion in August 2016. In the last decade, poverty dropped by around one third, with significant improvement in the human development index, literacy, life expectancy and per capita food consumption. With the economy growing close to 6% per year, more than 15 million people have moved out of poverty since 1992. Macro-economic trend This is a chart of the trend of Bangladesh's gross domestic product at market prices, as estimated by the International Monetary Fund, with figures in millions of Bangladeshi taka. However, this reflects only the formal sector of the economy. Mean wages were $0.58 per man-hour in 2009. The following table shows the main economic indicators in 1980–2019. Inflation below 5% is in green. Economic sectors Agriculture Manufacturing and industry Many new jobs – mostly for women – have been created by the country's dynamic private ready-made garment industry, which grew at double-digit rates through most of the 1990s. By the late 1990s, about 1.5 million people, mostly women, were employed in the garments sector, as well as in leather products, especially footwear manufacturing. During 2001–2002, export earnings from ready-made garments reached $3,125 million, representing 52% of Bangladesh's total exports. Bangladesh overtook India in apparel exports in 2009, with exports of US$2.66 billion against India's US$2.27 billion; by 2014 monthly exports had risen to $3.12 billion. In fiscal year 2018, Bangladesh earned US$36.67 billion from exports of manufactured goods, of which 83.49 percent came from the apparel manufacturing sector. Eastern Bengal was known for its fine muslin and silk fabric before the British period. The dyes, yarn, and cloth were the envy of much of the premodern world. Bengali muslin, silk, and brocade were worn by the aristocracy of Asia and Europe. 
The introduction of machine-made textiles from England in the late eighteenth century spelled doom for the costly and time-consuming hand loom process. Cotton growing died out in East Bengal, and the textile industry became dependent on imported yarn. Those who had earned their living in the textile industry were forced to rely more completely on farming. Only the smallest vestiges of a once-thriving cottage industry survived. Other industries which have shown very strong growth include the pharmaceutical industry, shipbuilding industry, information technology, leather industry, steel industry, and light engineering industry. Bangladesh's textile industry, which includes knitwear and ready-made garments (RMG) along with specialised textile products, is the nation's number one export earner, accounting for $21.5 billion in 2013 – 80% of Bangladesh's total exports of $27 billion. Bangladesh is 2nd in world textile exports, behind China, which exported $120.1 billion worth of textiles in 2009. The industry employs nearly 3.5 million workers. Current exports have doubled since 2004. Wages in Bangladesh's textile industry were the lowest in the world as of 2010. The country was considered the most formidable rival to China, where wages were rising rapidly and the currency was appreciating. As of 2012 wages remained low for the 3 million people employed in the industry, but labour unrest was increasing despite vigorous government action to enforce labour peace. Owners of textile firms and their political allies were a powerful political influence in Bangladesh. The urban garment industry has created more than one million formal sector jobs for women, contributing to the high female labour participation in Bangladesh. While it can be argued that women working in the garment industry are subjected to unsafe labour conditions and low wages, Dina M. Siddiqi argues that even though conditions in Bangladesh garment factories "are by no means ideal," they still give women in Bangladesh the opportunity to earn their own wages. As evidence she points to the fear created by the passage of the 1993 Harkins Bill (Child Labor Deterrence Bill), which caused factory owners to dismiss "an estimated 50,000 children, many of whom helped support their families, forcing them into a completely unregulated informal sector, in lower-paying and much less secure occupations such as brick-breaking, domestic service and rickshaw pulling." Even though the working conditions in garment factories are not ideal, they tend to be more financially reliable than other occupations and "enhance women’s economic capabilities to spend, save and invest their incomes." Both married and unmarried women send money back to their families as remittances, but these earned wages have more than just economic benefits. Many women in the garment industry marry later, have lower fertility rates, and attain higher levels of education than women employed elsewhere. After massive labour unrest in 2006 the government formed a Minimum Wage Board, including business and worker representatives, which in 2006 set a minimum wage equivalent to 1,662.50 taka, $24 a month, up from Tk950. 
In 2010, following widespread labour protests involving 60,000 workers in June 2010, a controversial proposal was being considered by the Board that would raise the monthly minimum to the equivalent of $50 a month, still far below worker demands of 5,000 taka, $72, for entry-level wages, but unacceptably high according to textile manufacturers, who were asking for a wage below $30. On 28 July 2010 it was announced that the minimum entry-level wage would be increased to 3,000 taka, about $43. The government also seems to believe some change is necessary. On 21 September 2006 then former Prime Minister Khaleda Zia called on textile firms to ensure the safety of workers by complying with international labour law, in a speech inaugurating the Bangladesh Apparel & Textile Exposition (BATEXPO). Many Western multinationals use labour in Bangladesh, which is one of the cheapest in the world: 30 euros per month compared to 150 or 200 in China. Four days is enough for the CEO of one of the top five global textile brands to earn what a Bangladeshi garment worker will earn in her lifetime. In April 2013, at least 1,135 textile workers died in the collapse of their factory. Other fatal accidents due to poor factory conditions have affected Bangladesh: in 2005 a factory collapsed and caused the death of 64 people. In 2006, a series of fires killed 85 people and injured 207 others. In 2010, some 30 people died of asphyxiation and burns in two serious fires. In 2006, tens of thousands of workers mobilised in one of the country's largest strike movements, affecting almost all of the 4,000 factories. The Bangladesh Garment Manufacturers and Exporters Association (BGMEA) used police forces to crack down. Three workers were killed, and hundreds more were wounded by bullets or imprisoned. In 2010, after a new strike movement, nearly 1,000 workers were injured as a result of the repression. Shipbuilding and ship breaking Shipbuilding is a growing industry in Bangladesh with great potential, and the country has been compared to shipbuilding nations such as China, Japan and South Korea. Referring to the growing number of export deals secured by shipbuilding companies, as well as the low-cost labour available in the country, experts suggest that Bangladesh could emerge as a major competitor in the global market for small to medium ocean-going vessels. Bangladesh also has the world's largest ship breaking industry, which employs over 200,000 Bangladeshis and accounts for half of all the steel in Bangladesh. Chittagong Ship Breaking Yard is the world's second-largest ship breaking area. Khulna Shipyard Limited (KSY), with over five decades of experience, has led the Bangladeshi shipbuilding industry and has built a wide spectrum of ships for domestic and international clients. KSY has built ships for the Bangladesh Navy, Bangladesh Army and Bangladesh Coast Guard under contracts with the Ministry of Defence. Finance Until the 1980s, the financial sector of Bangladesh was dominated by state-owned banks. With the large-scale reforms made in finance, private commercial banks were established through privatisation. The next finance sector reform programme was launched from 2000 to 2006, with a focus on the development of financial institutions and the adoption of risk-based regulation and supervision by Bangladesh Bank. 
At present, the banking sector consists of 4 state-owned commercial banks (SCBs), 4 government-owned specialised banks dealing in development financing, 39 private commercial banks, and 9 foreign commercial banks. Tourism The World Travel and Tourism Council (WTTC) reported in 2013 that the travel and tourism industry in Bangladesh directly generated 1,281,500 jobs in 2012, or 1.8 percent of the country's total employment, which ranked Bangladesh 157 out of 178 countries worldwide. Direct and indirect employment in the industry totalled 2,714,500 jobs, or 3.7 percent of the country's total employment. The WTTC predicted that by 2023, travel and tourism will directly generate 1,785,000 jobs and support an overall total of 3,891,000 jobs, or 4.2 percent of the country's total employment. This would represent an annual growth rate in direct jobs of 2.9 percent. Domestic spending generated 97.7 percent of direct travel and tourism gross domestic product (GDP) in 2012. Bangladesh's world ranking in 2012 for travel and tourism's direct contribution to GDP, as a percentage of GDP, was 142 out of 176. In 2014, 125,000 tourists visited Bangladesh. This number is extremely low relative to the total population, which as of 22 May 2019 numbered 166,594,000 inhabitants, giving a ratio of 1 tourist for every 1,333 locals. Information and Communication Technology Bangladesh's information technology sector is a growing example of what can be achieved through the government's sustained effort to create a skilled ICT workforce. The ICT workforce consists of private-sector employees and skilled freelancers. The ICT sector has also contributed to Bangladesh's economic growth. The ICT adviser to the prime minister, Sajeeb Wazed Joy, is hopeful that Bangladesh will become a major player in the ICT sector in the future. In the last 3 years, Bangladesh has seen tremendous growth in the ICT sector. Bangladesh is a market of 160 million people with substantial consumer spending on mobile phones, telecommunications and the internet. Bangladesh had 80 million internet users as of June 2017, with an estimated 9% growth in internet use powered by mobile internet. Bangladesh currently has 23 million active Facebook users and 143.1 million mobile phone customers. Bangladesh had exported $800 million worth of software, games, outsourcing and related services to European countries, the United States, Canada, Russia and India by 30 June 2017. The junior minister for the ICT Division of the Ministry of Post, Telecommunications and Information Technology said that Bangladesh aims to raise its export earnings from the information and communications technology (ICT) sector to $5 billion by 2021. Investment The stock market capitalisation of the Dhaka Stock Exchange in Bangladesh crossed $10 billion in November 2007, the $30 billion mark in 2009, and US$50 billion in August 2010. Bangladesh had the best-performing stock market in Asia during the global recession between 2007 and 2010, due to relatively low correlations with developed-country stock markets. Major investment in real estate by domestic and foreign-resident Bangladeshis has led to a massive building boom in Dhaka and Chittagong. 
Recent (2011) investment trends in Bangladesh include Saudi Arabia trying to secure public and private investment in oil and gas, power and transportation projects; the United Arab Emirates (UAE) being keen to invest in the country's growing shipbuilding industry, encouraged by its comparative cost advantage; Tata, a leading India-based industrial multinational, planning to invest Tk 1,500 crore to set up an automobile industry in Bangladesh; the World Bank investing in rural roads to improve quality of life; Rwandan entrepreneurs being keen to invest in Bangladesh's pharmaceutical sector, considering its potential in the international market; Samsung seeking to lease 500 industrial plots from the export zones authority to set up an electronics hub in Bangladesh with an investment of US$1.25 billion; and the National Board of Revenue (NBR) being set to withdraw tax rebate facilities on capital market investment by individual taxpayers from fiscal year 2011–12. In 2011, the Japan Bank for International Cooperation ranked Bangladesh as the 15th best investment destination for foreign investors. 2010–11 market crash The bullish capital market turned bearish during 2010, with the exchange losing 1,800 points between December 2010 and January 2011. Millions of investors were rendered bankrupt as a result of the market crash. The crash is believed to have been caused artificially to benefit a handful of players at the expense of the big players. Companies The list includes the ten largest Bangladeshi companies by trading value (in millions of BDT) in 2018. Composition of economic sectors The Bangladesh Garment Manufacturers and Exporters Association (BGMEA) has predicted that textile exports will rise from US$7.90 billion earned in 2005–06 to US$15 billion by 2011. In part this optimism stems from how well the sector has fared since the end of textile and clothing quotas, under the Multifibre Agreement, in early 2005. According to a United Nations Development Programme report, "Sewing Thoughts: How to Realize Human Development Gains in the Post-Quota World", Bangladesh has been able to offset a decline in European sales by cultivating new markets in the United States. "[In 2005] we had tremendous growth. The quota-free textile regime has proved to be a big boost for our factories," BGMEA president S.M. Fazlul Hoque told reporters after the sector's 24 per cent growth rate was revealed. The Bangladesh Knitwear Manufacturers and Exporters Association (BKMEA) president Md Fazlul Hoque has also struck an optimistic tone. In an interview with United News Bangladesh he lauded the blistering growth rate, saying "The quality of our products and its competitiveness in terms of prices helped the sector achieve such... tremendous success." Knitwear posted the strongest growth of all textile products in 2005–06, surging 35.38 per cent to US$2.82 billion. On the downside, however, the sector's strong growth came amid sharp falls in prices for textile products on the world market, with growth subsequently dependent upon large increases in volume. Bangladesh's quest to boost the quantity of textile trade was also helped by US and EU caps on Chinese textiles. The US cap restricts growth in imports of Chinese textiles to 12.5 per cent next year and between 15 and 16 per cent in 2008. The EU deal similarly manages import growth until 2008. Bangladesh may continue to benefit from these restrictions over the next two years; however, a climate of falling global textile prices forces wage rates to the centre of the nation's efforts to increase market share. 
The country's export processing zones (EPZs) offer a range of incentives to potential investors, including 10-year tax holidays, duty-free import of capital goods, raw materials and building materials, exemptions on income tax on salaries paid to foreign nationals for three years, and dividend tax exemptions for the period of the tax holiday. All goods produced in the zones are able to be exported duty-free, in addition to which Bangladesh benefits from the Generalised System of Preferences in US, European and Japanese markets and is also endowed with Most Favoured Nation status from the United States. Furthermore, Bangladesh imposes no ceiling on investment in the EPZs and allows full repatriation of profits. The formation of labour unions within the EPZs is prohibited, as are strikes. Bangladesh has been a world leader in its efforts to end the use of child labour in garment factories. On 4 July 1995, the Bangladesh Garment Manufacturers and Exporters Association, the International Labour Organization, and UNICEF signed a memorandum of understanding on the elimination of child labour in the garment sector. Implementation of this pioneering agreement began in fall 1995, and by the end of 1999 child labour in the garment trade had been virtually eliminated. The labour-intensive process of ship breaking for scrap has developed to the point where it now meets most of Bangladesh's domestic steel needs. Other industries include sugar, tea, leather goods, newsprint, pharmaceutical, and fertilizer production. The Bangladesh government continues to court foreign investment, something it has done fairly successfully in private power generation and gas exploration and production, as well as in other sectors such as cellular telephony, textiles, and pharmaceuticals. In 1989, the same year it signed a bilateral investment treaty with the United States, it established a Board of Investment to simplify approval and start-up procedures for foreign investors, although in practice the board has done little to increase investment. The government created the Bangladesh Export Processing Zone Authority to manage the various export processing zones. The agency currently manages EPZs in Adamjee, Chittagong, Comilla, Dhaka, Ishwardi, Karnaphuli, Mongla, and Uttara. An EPZ has also been proposed for Sylhet. The government has given the private sector permission to build and operate competing EPZs; initial construction on a Korean EPZ started in 1999. In June 1999, the AFL-CIO petitioned the U.S. Government to deny Bangladesh access to U.S. markets under the Generalized System of Preferences (GSP), citing the country's failure to meet promises made in 1992 to allow freedom of association in EPZs. International trade The COVID-19 pandemic has taken a heavy toll on almost all sectors of the economy; most notably, it caused exports to fall by 16.93 percent and imports by 17 percent in FY2019–20. In 2015, the top exports of Bangladesh were Non-Knit Men's Suits ($5.6B), Knit T-shirts ($5.28B), Knit Sweaters ($4.12B), Non-Knit Women's Suits ($3.66B) and Non-Knit Men's Shirts ($2.52B). In 2015, the top imports of Bangladesh were Heavy Pure Woven Cotton ($1.33B), Refined Petroleum ($1.25B), Light Pure Woven Cotton ($1.12B), Raw Cotton ($1.01B) and Wheat ($900M). In 2015, the top export destinations of Bangladesh were the United States ($6.19B), Germany ($5.17B), the United Kingdom ($3.53B), France ($2.37B) and Spain ($2.29B). 
In 2015, the top import origins were China ($13.9B), India ($5.51B), Singapore ($2.22B), Hong Kong ($1.47B) and Japan ($1.36B).
Bangladeshi women and the economy
As of 2014, female participation in the labour force was 58% as per World Bank data, with male participation at 82%. A 2007 World Bank report stated that the areas in which women's work force participation has increased the most are agriculture, education, and health and social work. Over three-quarters of women in the labour force work in the agricultural sector. On the other hand, the International Labour Organization reports that women's workforce participation has only increased in the professional and administrative areas between 2000 and 2005, demonstrating women's increased participation in sectors that require higher education. Employment and labour force participation data from the World Bank, the UN, and the ILO vary and often under-report women's work due to unpaid labour and informal sector jobs. Though work in these fields is mostly paid, women experience very different working conditions from men, including wage differences and differences in work benefits. Women's wages are significantly lower than men's wages for the same job, with women being paid as much as 60–75 percent less than what men make. One example of efforts to improve conditions for women in the work force is the work of non-governmental organisations (NGOs). These NGOs encourage women to rely on their own savings rather than external funds, and provide women with increased decision-making power and participation within the family and society. However, some NGOs that address microeconomic issues among individual families fail to deal with broader macroeconomic issues that prevent women's complete autonomy and advancement.
Historical statistics
Bangladesh has made significant strides in its economic sector performance since independence in 1971. Although the economy has improved vastly in the 1990s, Bangladesh still suffers in the area of foreign trade in South Asia. Despite major impediments to growth like the inefficiency of state-owned enterprises, a rapidly growing labour force that cannot be absorbed by agriculture, inadequate power supplies, and slow implementation of economic reforms, Bangladesh has made some headway improving the climate for foreign investors and liberalising the capital markets; for example, it has negotiated with foreign firms for oil and gas exploration, bettered the countrywide distribution of cooking gas, and initiated the construction of natural gas pipelines and power stations. Progress on other economic reforms has been halting because of opposition from the bureaucracy, public sector unions, and other vested interest groups. The especially severe floods of 1998 increased the flow of international aid. So far the global financial crisis has not had a major impact on the economy. Foreign aid has seen a gradual decline over the last few decades, but economists see this as a good sign for self-reliance. There has been a dramatic growth in exports and remittance inflow which has helped the economy to expand at a steady rate. Bangladesh has been on the list of UN Least Developed Countries (LDC) since 1975. Bangladesh met the requirements to be recognised as a developing country in March 2018. Bangladesh's Gross National Income (GNI) was $1,724 per capita, its Human Assets Index (HAI) 72, and its Economic Vulnerability Index (EVI) 25.2. 
Gross export and import See also Bangladesh Academy for Rural Development Electricity sector in Bangladesh Automotive industry in Bangladesh Bangladeshi RMG Sector Ceramics industry in Bangladesh Electronics industry in Bangladesh Federation of Bangladesh Chambers of Commerce & Industries List of companies of Bangladesh List of megaprojects in Bangladesh List of the largest trading partners of Bangladesh Ministry of Industries (Bangladesh) 3G (countries) Corruption in Bangladesh References External links Bangladesh Economic News Bangladesh Budget 2007 – 2008 Budget in Brief 2016–17 World Bank Summary Trade Statistics Bangladesh, 2007 Bangladesh
27349849
https://en.wikipedia.org/wiki/LuckyBackup
LuckyBackup
luckyBackup is a free backup application for Linux. It provides a GUI based on the cross-platform Qt framework and, unlike many backup clients, is not fundamentally console-based or web-based. The GUI has been translated into many languages, and the application is available in the repositories of all major Linux distributions, including Debian, Ubuntu, openSUSE, Fedora, Mandriva, Slackware and Gentoo.
Awards and distinctions
3 August 2009: 3rd place at the SourceForge Community Choice Awards as a "best new project"
2008–2010: Highest rated application at kde-apps.org
References
External links
Free backup software Backup software for Linux Free software programmed in C++
55577641
https://en.wikipedia.org/wiki/Dungeons%203
Dungeons 3
Dungeons 3 is a video game developed by Realmforge Studios and published by Kalypso Media. It was released on October 13, 2017, for Microsoft Windows, macOS, Linux, PlayStation 4 and Xbox One. It is the sequel to the 2015 video game Dungeons 2 and the third installment of the Dungeons series. Dungeons 3 combines dungeon building with real-time strategy elements. Players are tasked with creating an underground dungeon for creatures to live in as well as defending their dungeon against invading heroes from the world above. They can also send creatures to the world above, where they can take direct control of their units. The game's single-player campaign follows the story of Thalya, a dark elf sorcerer, on her quest to conquer the continent she lives on at the behest of the Dungeon Lord from the previous games. In multiplayer, players can either cooperate to play the campaign or compete against each other.
Gameplay
Dungeons 3 is a dungeon building video game with real-time strategy elements. The primary task is to build a dungeon, an underground lair which serves as a home to the player's units and their base of operations. In dungeon building mode, the player controls the world with a hand, performing actions such as moving creatures around the map, casting spells, and interacting with specific items. The hand is also used to designate parts of the underground for excavation and to create rooms. Commands are carried out by little imp-like creatures called snots, which are constantly regenerated by the dungeon heart, the center of the dungeon that also represents the player's health. It is the primary target for attacking heroes and its destruction leads to a game over. Additionally, the dungeon heart room stores gold and features a trap door that can be used to get rid of unwanted units. To build and maintain their dungeon, players rely on several resources: Gold is gained by excavating gold veins in the underground and is used to pay for rooms, units and new technologies. Gobblers are little chicken-like creatures continuously created by a farm and serve as food for creatures. Toolboxes are used to make traps as well as to create certain contraptions in some rooms. Mana is generated by Demon units and is used to unleash powerful magic spells as well as revive Demon units and turn toolboxes into magic toolboxes for more powerful traps. Beer is brewed by Horde units in special rooms and serves to keep those units happy. "Evilness" is a resource generated by capturing certain places in the over-world and is needed to research stronger technologies and upgrades. With the exception of Thalya, the player's hero unit, all other units have to be unlocked by research and hired through an in-game menu. Almost all units demand a number of comforts, including a regular wage, rooms to sleep in, food and relaxation, and will go on strike if their needs are not met. The game features both neutral units and rooms, such as a treasury and farms, as well as three factions that provide their own specific advantages and disadvantages: the Horde allows the player to hire goblins, orcs and naga and provides access to training rooms and traps; the Demons consist of imps, arachnids and succubi and allow the player to research upgrades as well as generate mana for spells and advanced traps; and the Undead faction offers banshees, vampires and lich creatures and rooms like a prison to incarcerate attacking heroes. Additionally, each faction has a singular powerful titan unit that can only be hired once. 
Even with upgrades, players can only recruit 20 units from all three factions combined as well as up to five converted heroes; additionally, undead skeletons and zombies can be created from fallen heroes and do not count towards this limit. While each faction offers its own technology tree, many aspects can be combined for better results: for example, heroes captured in the Undead prison can be tortured in the Demon torture chamber to switch sides and fight for the player, and Undead crypts can serve to resurrect dead Horde creatures. Dead Demon units will be resurrected in a lair using mana, and defeated Undead units will slowly regenerate in their graveyard. While in dungeon building mode, units cannot be controlled directly but must be picked up using the hand and dropped where they should be, for example to attack heroes that have entered the dungeon through one of the entrances. Each dungeon features one or more such entrances that also allow units to be sent to the world above by grabbing the units with the hand and placing them on the exit. Units that leave the dungeon reappear in the over-world map. Unlike in the dungeon, players can select units as in other real-time strategy games and issue commands directly as well as use their skills. Most missions require certain goals to be achieved on the over-world map. Additionally, some missions feature no dungeon and have to be played with a fixed number of units provided. Events on both maps happen at the same time, forcing players to divide their attention. The game features twenty single-player missions as well as a co-op mode that allows two players to complete the campaign together. Additionally, the game contains a skirmish mode which allows battles against the computer on randomly generated maps, as well as a multiplayer mode where players can compete against each other but not attack each other's dungeons (unlike in Dungeons 2). The game uses an isometric perspective that can be rotated by 90° to display the environment and uses two different mini-maps for dungeon and over-world view. Cut-scenes that tell the story use comic book style drawings instead of being rendered in the game's engine.
Synopsis
Setting and characters
The game's story takes place a short time after the events of Dungeons 2 on a new continent ruled by the good paladin Tanos with the help of his friends, the racist dwarf king Grimli, the vain paladin Elric the Pretty, and the greedy Yaina Overproud. Tanos has also adopted a young dark elf sorcerer named Thalya in an attempt to cure her inner darkness and prevent her from doing evil. The player's goal is to conquer the empire for the Dungeon Lord using Thalya as their lieutenant.
Plot
After conquering the lands above, the Dungeon Lord becomes bored and attempts to invade a new continent, which fails due to the minions' incompetence in boat-building. The Dungeon Lord casts a spell that allows him to control Thalya, awakening her evil side. After she slaughters the people in her home village, she decides to let her dark side take over and begins serving the Dungeon Lord as its chief lieutenant on her quest to punish Tanos for attempting to convert her. On this quest, Thalya gradually conquers more and more parts of the continent and dispatches Tanos' friends one by one. 
Facing Tanos himself, she fails to defeat him and uses a spell to temporarily transport the Dungeon Lord to the continent, where he battles Tanos. Tanos then flees and becomes more fanatical in his attempts to destroy Thalya and her forces, sacrificing his own cities and people and calling upon dead heroes from Valhalla to fight for him instead. In the end, Thalya manages to defeat Tanos and subjugate the empire under the Dungeon Lord's will.
Humor
More so than its predecessors, Dungeons 3 contains many allusions to other works of science fiction and fantasy as well as meta-humor, with both the narrator and Thalya frequently breaking the fourth wall in dialogue. The narrator (Kevan Brighting) also frequently argues with Thalya during missions and alters the missions based on these dialogues, such as having Thalya be attacked by bears he conjured out of thin air because she made a snippy comment at his expense. Allusions to other games of the same genre, especially Warcraft III, as well as to films and fantasy franchises such as The Lord of the Rings, are numerous.
Development
Kalypso Media announced the development of Dungeons 3 on February 21, 2017 with a press release and an announcement teaser. The game was shown to reporters at Game Developers Conference 2017. According to the developer, the introduction of the over-world as an area to be captured in Dungeons 2 had proven a successful addition to the formula, which is why Realmforge Studios concentrated on expanding upon this concept while keeping the changes that players liked from the last game. Dungeons 3 was released worldwide and for all supported platforms on October 13, 2017. It is the first entry in the series that was released on multiple platforms, i.e. Microsoft Windows, macOS, Linux, PlayStation 4 and Xbox One, at the same time.
Downloadable content
Realmforge Studios released multiple downloadable content (DLC) packs for Dungeons 3 that expand upon the story: Once Upon a Time was released on February 2, 2018 and pits Thalya against a variety of fairy tale characters in three newly designed missions. Evil of the Caribbean features a spoof of franchises like Pirates of the Caribbean and adds pirate-themed monsters and locations. It was released on March 6, 2018. Lord of the Kings, released April 20, 2018, features a new story reminiscent of The Lord of the Rings and the Arthurian legend. Clash of Gods was released on September 28, 2018. Unlike the three preceding DLCs, which were all limited to three new missions, this expansion offers eight new missions as well as two new antagonists. The DLC also adds new gameplay mechanics, allowing enemies to create outposts within the player's dungeon area, giving the player the ability to build defenses outside the dungeon, and adding three new traps and three new spells. Additionally, a new multiplayer mode allows two players to plunder and raze an enemy city. An Unexpected DLC was announced and released on February 15, 2019. It features three new campaign maps, new enemies and a new boss, the Queen of the Forest. On May 17, 2019, Realmforge announced and released Famous Last Words, the last planned DLC for the game. Breaking the fourth wall completely, the DLC sees Thalya battling the Narrator, who acts as the final boss. It features three new campaign maps and new enemy types. Despite previously announcing that Famous Last Words was to be the last DLC, Realmforge released a map pack entitled A Multitude of Maps on 21 February 2020. 
Reception
The PC version of Dungeons 3 received "generally favorable" reviews according to review aggregator Metacritic. The Xbox One version received slightly worse reviews and is considered by Metacritic to have "mixed or average reviews". Multiple reviewers opined that Dungeons 3 was the closest the developer has come to creating a successor to Bullfrog Productions' iconic Dungeon Keeper series. Most reviewers noted the graphics and audio positively. Eurogamer.it emphasized that the game's cartoonish look and feel are consistent with the humor of the game and highlighted how the world is peaceful and idyllic when the good heroes rule it but instantly turns dark, gloomy and oppressive when the player's units capture parts of it. Other reviewers also lauded the colorful world and especially noted the level of detail when units are in combat and magic effects are used. However, some critics also argued that improvements compared to Dungeons 2 were almost non-existent, making Dungeons 3 merely a polished version of its predecessor. Console versions were rated worse than the PC version, with 4Players especially criticizing the Xbox One version's graphical errors. Reviewers also lauded the improved user interface (UI) and ease of input, including on consoles, something seldom achieved by strategy and simulation games, which tend to be tailored to PC keyboard and mouse input. The UI nonetheless drew minor criticism for having menus that block the whole screen when opened, making them harder to use during battle. German magazine GameStar noted positively that the fact that different resources only exist in the over-world or in the underground forces players to care about both areas. Reviewers also positively noted the variety of missions, such as missions where enemies are far stronger during the day than they are at night, or where the player has to prevent food from being delivered in order to starve an otherwise invincible enemy. The pacing of the campaign, giving players access to more rooms and technologies gradually, was also remarked upon positively. On the other hand, PC Games criticized the dungeon building aspect for always relying on the same pattern, since there are no missions in which a scarcity of resources would force players to change their approach. The fact that research was not saved and had to be repeated each mission was also criticized for forcing players into a monotonous routine. Although the game's humor was praised by some reviewers, many critics noted that not all players will enjoy the type of humor and that the game relies too much on humor, to the point of becoming annoying. PC Games lauded the new co-op mode but criticized the fact that Dungeons 3, unlike its predecessor, no longer offered a multiplayer mode that allowed players to attack each other's dungeons.
References
External links
2017 video games Dungeon management games Fantasy video games Linux games MacOS games Multiplayer and single-player video games PlayStation 4 games Simulation video games Strategy video games Video games developed in Germany Video games with downloadable content Windows games Xbox One games
727153
https://en.wikipedia.org/wiki/Operation%20Sundevil
Operation Sundevil
Operation Sundevil was a 1990 nationwide United States Secret Service crackdown on "illegal computer hacking activities." It involved raids in approximately fifteen different cities and resulted in three arrests and the confiscation of computers, the contents of electronic bulletin board systems (BBSes), and floppy disks. It was revealed in a press release on May 9, 1990. The arrests and subsequent court cases resulted in the creation of the Electronic Frontier Foundation. The operation is now seen as largely a public-relations stunt. Operation Sundevil has also been viewed as one of the preliminary attacks on the Legion of Doom and similar hacking groups. The raid on Steve Jackson Games, which led to the court case Steve Jackson Games, Inc. v. United States Secret Service, is often attributed to Operation Sundevil, but the Electronic Frontier Foundation states that it is unrelated and cites this attribution as a media error. The name comes from the Sun Devil Stadium of Arizona State University, near the local Secret Service headquarters from where the investigation and raids were coordinated. Background Prior to 1990, people who manipulated telecommunication systems, known as phreakers, were generally not prosecuted within the United States. The majority of phreakers used software to obtain calling card numbers and built simple tone devices in order to make free telephone calls. A small elite, highly technical segment of phreakers were more interested in information about the inner workings of the telecommunication system than in making free phone calls. Phone companies complained of financial losses from phreaking activities. The switch from analog to digital equipment began to expose more of the inner workings of telephone companies as hackers began to explore the switches and trunks. Due to a lack of laws and expertise on the part of American law enforcement, few cases against hackers were prosecuted until Operation Sundevil. However, starting in 1989, the US Secret Service (USSS), which had been given authority from Congress to deal with access device fraud as an extension of wire fraud investigations under Title 18 (§ 1029), began to investigate. Over the course of the 18-month-long investigation, the USSS gathered alleged evidence of rampant credit card and calling card fraud over state lines. Operation Sundevil allowed multiple federal law enforcement agencies, particularly the Secret Service and the FBI, to gain valuable expertise on fighting this new form of criminal activity as well as expanding the agencies' budgets. New laws were created to allow federal prosecutors to charge individuals accused of phreaking, hacking, wire, and credit card fraud. Evidence gained from Operation Sundevil allowed law enforcement to convince the United States Congress of the need for additional funding, training, and overall expansion. Action Along with the Chicago Task Force and the Arizona Organized Crime and Racketeering Bureau, the operation involved raids in Austin, Plano, Cincinnati, Detroit, Los Angeles, Miami, New York, Newark, Phoenix, Pittsburgh, Richmond, Tucson, San Diego, San Jose, San Francisco, and Seattle. The raids were centered in Arizona, where the press conference occurred. Raids generally took place in middle-class suburbs and targeted credit card thieves and telephone abusers. They were carried out by local police, with the aid of over 150 Secret Service agents, FBI and CIA. 
Twenty-seven search warrants, resulting in three arrests, were issued and executed on May 7 and 8, 1990. Police also took around 42 computers and approximately 25 BBSes, including some of the most infamous and elite hacking BBSes in the world at that time, such as Cloud Nine. This was the largest crackdown on electronic bulletin boards in world history. Finally, about 23,000 floppy disks were also seized. These held a variety of data, including software, phreaking and hacking tools, tens of thousands of credit card details, and a plethora of illegal copyrighted material. The three people arrested were "Tony the Trashman," "Dr. Ripco," and "Electra." Other parts of the operation targeted the underground ezine Phrack, which had published the contents of a proprietary text file copied from BellSouth computers and containing information about the E911 emergency response system, although this was later rendered moot in a court case in which it was shown that the same information about the E911 system was also provided to the public through a mail-order catalog.
Aftermath
In a press release on May 9, 1990, officials from the federal government and the Arizona state government revealed that the Secret Service was involved in the investigation. The Assistant Director of the US Secret Service, Garry M. Jenkins, commented in a press release that "the Secret Service is sending a clear message to those computer hackers who have decided to violate the laws of this nation in the mistaken belief that they can successfully avoid detection by hiding behind the relative anonymity of their computer terminals." Two public-access computer systems were shut down in the days following the operation: an AT&T Unix system in Dallas, Texas, and a Jolnet system in Lockport, Illinois. Neither has been linked to the operation, however. An AT&T spokesman claimed the shutdown was a result of an internal investigation and was not related to the operation. In response to the arrests, the Electronic Frontier Foundation was founded by Mitchell Kapor, the founder of Lotus Development Corporation, and John Perry Barlow, an author. The foundation hired lawyers to represent the hackers in two of the cases arising from Operation Sundevil. Operation Sundevil was the most publicized action by the federal government against hackers. In part due to this, it has been seen as a public-relations stunt and a message to hackers. While it did little overall damage to the hacking community in the long run, it did stop the illicit hacking activities of many of the best hackers in the world for a short period of time, which is why it has been lauded as a tactical success due to the surprise and damage it caused to these communities in comparison to the long wars waged against the Legion of Doom. However, it has also been criticized as a failure due to several unsuccessful prosecutions and a number of raids not even leading to arrests.
See also
Operation Cybersnare The Hacker Crackdown
References
1990 in the United States Copyright enforcement Hacker groups Electronic Frontier Foundation Carding (fraud)
41296
https://en.wikipedia.org/wiki/Jitter
Jitter
In electronics and telecommunications, jitter is the deviation from true periodicity of a presumably periodic signal, often in relation to a reference clock signal. In clock recovery applications it is called timing jitter. Jitter is a significant, and usually undesired, factor in the design of almost all communications links. Jitter can be quantified in the same terms as all time-varying signals, e.g., root mean square (RMS) or peak-to-peak displacement. Also, like other time-varying signals, jitter can be expressed in terms of spectral density. Jitter period is the interval between two times of maximum effect (or minimum effect) of a signal characteristic that varies regularly with time. Jitter frequency, the more commonly quoted figure, is its inverse. ITU-T G.810 classifies jitter frequencies below 10 Hz as wander and frequencies at or above 10 Hz as jitter. Jitter may be caused by electromagnetic interference and crosstalk with carriers of other signals. Jitter can cause a display monitor to flicker, affect the performance of processors in personal computers, introduce clicks or other undesired effects in audio signals, and cause loss of transmitted data between network devices. The amount of tolerable jitter depends on the affected application.
Metrics
For clock jitter, the commonly used metrics are:
Absolute jitter: the absolute difference in the position of a clock's edge from where it would ideally be.
Maximum time interval error (MTIE): the maximum error committed by a clock under test in measuring a time interval for a given period of time.
Period jitter (a.k.a. cycle jitter): the difference between any one clock period and the ideal or average clock period. Period jitter tends to be important in synchronous circuitry such as digital state machines, where the error-free operation of the circuitry is limited by the shortest possible clock period (average period less maximum cycle jitter), and the performance of the circuitry is set by the average clock period. Hence, synchronous circuitry benefits from minimizing period jitter, so that the shortest clock period approaches the average clock period.
Cycle-to-cycle jitter: the difference in duration of any two adjacent clock periods. It can be important for some types of clock generation circuitry used in microprocessors and RAM interfaces.
In telecommunications, the unit used for the above types of jitter is usually the unit interval (UI), which quantifies the jitter in terms of a fraction of the transmission unit period. This unit is useful because it scales with clock frequency and thus allows relatively slow interconnects such as T1 to be compared to higher-speed internet backbone links such as OC-192. Absolute units such as picoseconds are more common in microprocessor applications. Units of degrees and radians are also used. If jitter has a Gaussian distribution, it is usually quantified using the standard deviation of this distribution. This translates to an RMS measurement for a zero-mean distribution. Often, the jitter distribution is significantly non-Gaussian. This can occur if the jitter is caused by external sources such as power supply noise. In these cases, peak-to-peak measurements may be more useful. Many efforts have been made to meaningfully quantify distributions that are neither Gaussian nor have a meaningful peak level. All have shortcomings, but most tend to be good enough for the purposes of engineering work. 
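To make the period-based metrics above concrete, the following minimal Python sketch computes period jitter and cycle-to-cycle jitter from a list of measured rising-edge timestamps. The edge timestamps, the nominal 100 MHz clock, and the function name jitter_metrics are hypothetical illustrations rather than part of any measurement standard such as ITU-T G.810; the sketch only restates the definitions given above in code.

import statistics

def jitter_metrics(edge_times, nominal_period):
    # Periods are the differences between consecutive edge timestamps (seconds).
    periods = [t1 - t0 for t0, t1 in zip(edge_times, edge_times[1:])]

    # Period jitter: deviation of each period from the nominal period.
    period_jitter = [p - nominal_period for p in periods]

    # Cycle-to-cycle jitter: difference between adjacent periods.
    c2c_jitter = [p1 - p0 for p0, p1 in zip(periods, periods[1:])]

    return {
        # RMS deviation of the periods from their average period.
        "rms_period_jitter": statistics.pstdev(periods),
        # Peak-to-peak spread of the period deviations.
        "peak_to_peak_period_jitter": max(period_jitter) - min(period_jitter),
        # Largest magnitude change between adjacent periods.
        "max_cycle_to_cycle_jitter": max(abs(d) for d in c2c_jitter),
    }

# Hypothetical edge timestamps for a nominal 100 MHz clock (10 ns period).
edges = [0.0, 10.2e-9, 20.1e-9, 29.9e-9, 40.3e-9, 50.0e-9]
print(jitter_metrics(edges, nominal_period=10e-9))

Dividing the resulting figures by the nominal period expresses them in unit intervals (UI), the telecommunications convention mentioned above.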
In computer networking, jitter can refer to packet delay variation, the variation (statistical dispersion) in the delay of the packets.
Types
One of the main differences between random and deterministic jitter is that deterministic jitter is bounded and random jitter is unbounded.
Random jitter
Random jitter, also called Gaussian jitter, is unpredictable electronic timing noise. Random jitter typically follows a normal distribution due to being caused by thermal noise in an electrical circuit.
Deterministic jitter
Deterministic jitter is a type of clock or data signal jitter that is predictable and reproducible. The peak-to-peak value of this jitter is bounded, and the bounds can easily be observed and predicted. Deterministic jitter has a known non-normal distribution. Deterministic jitter can either be correlated to the data stream (data-dependent jitter) or uncorrelated to the data stream (bounded uncorrelated jitter). Examples of data-dependent jitter are duty-cycle dependent jitter (also known as duty-cycle distortion) and intersymbol interference.
Total jitter
Total jitter (T) is the combination of random jitter (R) and deterministic jitter (D) and is computed in the context of a required bit error rate (BER) for the system: T = D + 2nR (with R expressed as an RMS value), in which the value of n is based on the BER required of the link. A common BER used in communication standards such as Ethernet is 10^−12.
Examples
Sampling jitter
In analog-to-digital and digital-to-analog conversion of signals, the sampling is normally assumed to be periodic with a fixed period—the time between every two samples is the same. If there is jitter present on the clock signal to the analog-to-digital converter or a digital-to-analog converter, the time between samples varies and instantaneous signal error arises. The error is proportional to the slew rate of the desired signal and the absolute value of the clock error. The effect of jitter on the signal depends on the nature of the jitter. Random jitter tends to add broadband noise, while periodic jitter tends to add errant spectral components, "birdies". In some conditions, less than a nanosecond of jitter can reduce the effective bit resolution of a converter with a Nyquist frequency of 22 kHz to 14 bits. Sampling jitter is an important consideration in high-frequency signal conversion, or where the clock signal is especially prone to interference. In digital antenna arrays, ADC and DAC jitter are important factors determining the accuracy of direction-of-arrival estimation and the depth of jammer suppression.
Packet jitter in computer networks
In the context of computer networks, packet jitter or packet delay variation (PDV) is the variation in latency as measured in the variability over time of the end-to-end delay across a network. A network with constant delay has no packet jitter. Packet jitter is expressed as an average of the deviation from the network mean delay. PDV is an important quality of service factor in assessment of network performance. Transmitting a burst of traffic at a high rate followed by an interval or period of lower or zero rate transmission may also be seen as a form of jitter, as it represents a deviation from the average transmission rate. However, unlike the jitter caused by variation in latency, transmitting in bursts may be seen as a desirable feature, e.g. in variable bitrate transmissions. 
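To make the packet-jitter discussion concrete, the short Python sketch below implements the running interarrival-jitter estimator described in RFC 3550 (the RTP specification), which folds each new delay variation into the estimate with a gain of 1/16. The timestamps and the function name interarrival_jitter are hypothetical; this is one common way of estimating packet delay variation, not the only definition in use.

def interarrival_jitter(send_times, recv_times):
    # Running jitter estimate per RFC 3550: J += (|D| - J) / 16.
    jitter = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s                      # one-way transit time of this packet
        if prev_transit is not None:
            d = abs(transit - prev_transit)  # delay variation versus the previous packet
            jitter += (d - jitter) / 16.0    # smoothed update with gain 1/16
        prev_transit = transit
    return jitter

# Hypothetical send/receive timestamps (seconds) for five packets sent every 20 ms.
send = [0.00, 0.02, 0.04, 0.06, 0.08]
recv = [0.050, 0.071, 0.093, 0.109, 0.132]
print(f"estimated jitter: {interarrival_jitter(send, recv) * 1000:.3f} ms")

Because the estimator depends only on differences of transit times, any constant offset between sender and receiver clocks cancels out, so the clocks need to be stable but not synchronized.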
Video and image jitter
Video or image jitter occurs when the horizontal lines of video image frames are randomly displaced due to the corruption of synchronization signals or electromagnetic interference during video transmission. Model-based dejittering studies have been carried out under the framework of digital image and video restoration.
Testing
Jitter in serial bus architectures is measured by means of eye patterns. There are standards for jitter measurement in serial bus architectures. The standards cover jitter tolerance, jitter transfer function and jitter generation, with the required values for these attributes varying among different applications. Where applicable, compliant systems are required to conform to these standards. Testing for jitter and its measurement is of growing importance to electronics engineers because of increased clock frequencies in digital electronic circuitry to achieve higher device performance. Higher clock frequencies have commensurately smaller eye openings, and thus impose tighter tolerances on jitter. For example, modern computer motherboards have serial bus architectures with eye openings of 160 picoseconds or less. This is extremely small compared to parallel bus architectures with equivalent performance, which may have eye openings on the order of 1000 picoseconds. Jitter is measured and evaluated in various ways depending on the type of circuit under test. In all cases, the goal of jitter measurement is to verify that the jitter will not disrupt normal operation of the circuit. Testing of device performance for jitter tolerance may involve injection of jitter into electronic components with specialized test equipment. A less direct approach—in which analog waveforms are digitized and the resulting data stream analyzed—is employed when measuring pixel jitter in frame grabbers.
Mitigation
Anti-jitter circuits
Anti-jitter circuits (AJCs) are a class of electronic circuits designed to reduce the level of jitter in a clock signal. AJCs operate by re-timing the output pulses so they align more closely to an idealized clock. They are widely used in clock and data recovery circuits in digital communications, as well as in data sampling systems such as the analog-to-digital converter and digital-to-analog converter. Examples of anti-jitter circuits include the phase-locked loop and the delay-locked loop.
Jitter buffers
Jitter buffers or de-jitter buffers are buffers used to counter jitter introduced by queuing in packet-switched networks, so as to ensure continuous playout of an audio or video media stream transmitted over the network. The maximum jitter that can be countered by a de-jitter buffer is equal to the buffering delay introduced before starting the play-out of the media stream. In the context of packet-switched networks, the term packet delay variation is often preferred over jitter. Some systems use sophisticated delay-optimal de-jitter buffers that are capable of adapting the buffering delay to changing network characteristics. The adaptation logic is based on jitter estimates computed from the arrival characteristics of the media packets. Adjustments associated with adaptive de-jittering involve introducing discontinuities in the media play-out, which may be noticeable to the listener or viewer. Adaptive de-jittering is usually carried out for audio play-out that includes voice activity detection, which allows the lengths of silence periods to be adjusted, thus minimizing the perceptual impact of the adaptation. 
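The Python sketch below illustrates the basic behaviour of a fixed-delay de-jitter buffer: each packet is scheduled for playout at a fixed offset after its send time, so arrival-time variation up to that offset is absorbed, while packets arriving later than their slot must be discarded or concealed. The packet timestamps, the 60 ms playout delay, and the function name playout_schedule are hypothetical; real adaptive de-jitter buffers, as described above, adjust the delay dynamically from arrival statistics.

def playout_schedule(packets, playout_delay):
    # packets: list of (send_time, recv_time) tuples in seconds.
    # Returns (playout_time, late) pairs; a 'late' packet missed its slot.
    schedule = []
    for send_time, recv_time in packets:
        playout_time = send_time + playout_delay  # fixed offset from the sender timestamp
        late = recv_time > playout_time           # arrived after its playout slot
        schedule.append((playout_time, late))
    return schedule

# Hypothetical packets sent every 20 ms, arriving with variable network delay.
pkts = [(0.000, 0.045), (0.020, 0.052), (0.040, 0.110), (0.060, 0.101)]
for playout, late in playout_schedule(pkts, playout_delay=0.060):
    print(f"play at {playout:.3f} s{' (late, conceal)' if late else ''}")

Choosing the playout delay is the central trade-off: a larger delay tolerates more jitter but adds latency, which is why adaptive buffers re-estimate it from the arrival characteristics of the media packets.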
Dejitterizer
A dejitterizer is a device that reduces jitter in a digital signal. A dejitterizer usually consists of an elastic buffer in which the signal is temporarily stored and then retransmitted at a rate based on the average rate of the incoming signal. A dejitterizer may not be effective in removing low-frequency jitter (wander).
Filtering and decomposition
A filter can be designed to minimize the effect of sampling jitter. A jitter signal can be decomposed into intrinsic mode functions (IMFs), which can then be used for filtering or dejittering.
See also
Clock drift Dither Jitterlyzer Micro stuttering Pulse (signal processing)
References
Further reading
Li, Mike P. Jitter and Signal Integrity Verification for Synchronous and Asynchronous I/Os at Multiple to 10 GHz/Gbps. Presented at International Test Conference 2008.
Li, Mike P. A New Jitter Classification Method Based on Statistical, Physical, and Spectroscopic Mechanisms. Presented at DesignCon 2009.
Liu, Hui, Hong Shi, Xiaohong Jiang, and Zhe Li. Pre-Driver PDN SSN, OPD, Data Encoding, and Their Impact on SSJ. Presented at Electronics Components and Technology Conference 2009.
Zamek, Iliya. SOC-System Jitter Resonance and Its Impact on Common Approach to the PDN Impedance. Presented at International Test Conference 2008.
External links
Beating the jitter bug Jitter in VoIP - Causes, solutions and recommended values An Introduction to Jitter in Communications Systems Jitter Specifications Made Easy A Heuristic Discussion of Fibre Channel and Gigabit Ethernet Methods Jitter in Packet Voice Networks Electrical parameters Packets (information technology) Synchronization
39425949
https://en.wikipedia.org/wiki/Gender%20inequality%20in%20the%20United%20States
Gender inequality in the United States
Gender inequality in the United States has been diminishing throughout its history and significant advancements towards equality have been made beginning mostly in the early 1900s. However, despite this progress, gender inequality in the United States continues to persist in many forms, including the disparity in women's political representation and participation, occupational segregation, and the unequal distribution of household labor. The alleviation of gender inequality has been the goal of several major pieces of legislation since 1920 and continues to the present day. As of 2021, the World Economic Forum ranks the United States 30th in terms of gender equality out of 149 countries. In addition to the inequality faced by women, inequality, prejudice, and violence against transgender men and women, as well as gender nonconforming individuals and non-binary individuals, are also prevalent in the United States. Transgender individuals suffer from prejudices in the workforce and employment, higher levels of domestic violence, higher rates of hate crimes, especially murder, and higher levels of police brutality when compared to the cisgender population. Current issues for women Social attitudes More recent research in 2012 has found that attitudes towards gender and societal roles have changed very little since the mid-1990s, with attitudes hovering at about sixty to seventy percent egalitarian. This study theorized that an "egalitarian but traditional" gender frame emerged in popular culture during this period, which supports each gender assuming their traditional roles without appearing sexist or discriminatory, and is responsible for this backlash. Benevolent sexism, sometimes referred to as chivalry, which holds women as something to be protected, also has psychological effects. Women who hold these views are more likely to have less ambitious career goals and men who hold these views tend to have a polarized and stereotyped view of women, made up of both very favorable and very unfavorable traits. In such cases, the stereotyped view of women is "favorable in content and yet prejudicial in [its] consequences," and attempts to provide justification for discriminatory behaviors presented as helpful or paternal. Sexual assault Research conducted at Lycoming College has found the enjoyment of sexist humor to be strongly correlated with sexual aggression towards women among male college students. In addition, studies have shown that exposure to sexist humor, particularly humor related to sexual assault, can increase male aggression and their tendency to discriminate against women. One study also asserted that the attitudes behind such humor creates an environment where such discriminatory and possibly violent behavior is acceptable. Men's tendency to self-report the likelihood that they would commit sexually violent acts has also been found to increase after exposure to sexist humor, as reported by researchers from the University of Kent. Political participation The Center for American Women and Politics reports that, as of 2013, 18.3% of congressional seats are held by women and 23% of statewide elective offices are held by women; while the percentage of Congress made up of women has steadily increased, statewide elective positions held by women have decreased from their peak of 27.6% in 2001. Women also make up, as of 2013, 24.2% of state legislators in the United States. Among the one hundred largest cities in the United States, ten had female mayors as of 2013. 
In 1977, political science professor Susan Welch presented three possible explanations for this underrepresentation of women in politics: one, that women are socialized to avoid careers in politics; two, that women's responsibilities in the home keep them out of both the work force and the political arena; and three, that women are more often than men members of other demographic groups with low political participation rates. In 2001, M. Margaret Conway, political science professor at the University of Florida, also presented three possible explanations for the continuation of this disparity: one, similar to Welch's first explanation, that sociological and societal norms discourage women from running; two, that women less frequently acquire from nonpolitical activities the skills necessary to hold a political leadership position; and three, that gatekeeping in party politics prevents women from running.
Workplace inequality
The United States is falling behind other Western countries in the percentage of women engaged in the workforce. In 2020, the percentage of American women working fell to its lowest level since 1988. Researchers from the Institute for Women's Policy Research at the University of California Hastings College of Law argue that this growing gap is due to a lack of governmental, business and societal support for working women. They ranked the United States last out of 20 industrialized countries in an index that measured such programs as family leave, alternative work arrangements, part-time employment, and other means to make workplaces more flexible and family-friendly. The United States is also the only industrialized nation that does not have a paid parental leave policy mandated by law, and is one of only four countries worldwide that does not; in addition, fully paid maternity leave is offered by only around 16 percent of employers in the United States.
Sex discrimination in employment
Jane Wilke from the University of Connecticut found that men's support of the idea that men should be the sole source of income in a married couple decreased from 32 to 21 percent from 1972 to 1989; in practice, only 15 percent of households were supported by a male spouse's income alone at the time of the study. Women continue to face mistreatment and explicit sex discrimination in the workplace, and the problem has persisted in the occupational sphere. According to a study conducted by researchers at California State University, Northridge, when an individual with a PhD applies for a position at a university, that individual is significantly more likely to be offered a higher level of appointment, receive an offer of an academic position leading to tenure, and be offered a full professorship if they are a man when compared to a woman of comparable qualifications. However, these findings have been disputed, with multiple studies finding universities pushed to hire more women, resulting in females being given a 2:1 advantage over males in science, technology, engineering, and mathematics (STEM) fields. 
Another study found that women were significantly less likely to receive a job offer or an interview for a high-paying waiter position when compared to equally qualified men; this study also found that such hiring discrimination may be caused in part by customers' preference for male wait staff, though no firm conclusion could be drawn, since the male/female gap might also be explained by there being more female than male waiters overall, in which case the preferential hiring of male waiters could actually move the occupation toward parity. Similarly, research conducted at the University of California, Davis focusing on academic dermatology revealed a significant downward trend in the number of women receiving funding from the National Institutes of Health, which the authors concluded was due to a lack of support for women scientists at their home institutions. Research from Lawrence University has found that men were more likely to be hired in traditionally masculine jobs, such as sales management, and women were more likely to be hired in traditionally feminine jobs, such as receptionist or secretary. However, individuals of either gender with masculine personality traits were advantaged when applying for either masculine or feminine jobs, indicating a possible valuing of stereotypically male traits above stereotypically female traits.
Occupational segregation by gender
Occupational gender segregation takes the form of both horizontal segregation (the unequal gender distribution across occupations) and vertical segregation (the overrepresentation of men in higher positions in both traditionally male and traditionally female fields). According to William A. Darity, Jr. and Patrick L. Mason, there is a strong horizontal occupational division in the United States on the basis of gender; in 1990, the index of occupational dissimilarity was 53%, meaning 53% of women or 47% of men would have to move to a different career field in order for all occupations to have equal gender composition. While women have begun to more frequently enter traditionally male-dominated professions, far fewer men have entered female-dominated professions; professor of sociology Paula England cites this horizontal segregation of careers as a contributing factor to the gender pay gap.
Housework
US women spend over twice as much time on housework as men, averaging an extra 65 minutes per day (7.6 hours per week). Even when women are employed or highly paid, they do not do less housework. In fact, when women work or earn more than their husbands, they do more housework. This has been explained as a way to make their career success less threatening and reassert traditional sexuality. US women are reluctant to delegate housework to men partly because they believe that it will not be done properly. Women are, on average, more concerned about undone housework, an attitude gap that has been attributed to socialization and to societies that hold women responsible for the state of the home. In households and societies where gender equality is more highly valued, less time overall is spent on housework. Stephanie Coontz, a professor of family history at Evergreen State College, noted that one of the factors contributing to gender inequality in the United States is that most men still expect women and men to assume traditional gender roles in the household and expect women to carry out a larger share of the housework. 
This has been confirmed by a number of other studies; for example, Makiko Fuwa from the University of California, Irvine noted that while there has been movement towards greater equality, "in 1995 American women still spent nearly twice as much time on housework than men" and there is also a segregation of household tasks. This gendered division of household labor creates what is known as the second shift or double burden, where working women in a heterosexual couple with a working partner spend significantly more time on childcare and household chores. Researchers from the University of Maryland have found that while men have steadily begun to perform more household labor since 1965, most of the essential and traditionally feminine tasks are still carried out by women; men generally carry out more nonessential or infrequent tasks, such as taking out the trash or mowing the lawn. While both genders tend to have roughly equal amounts of leisure time, men have more uninterrupted leisure time when compared to women.
Childcare
Researchers from the University of Michigan have found that from 1970 to 1985, the percentage of men and women who supported traditional social roles for wives and believed that maternal employment damages mother-child relationships or children's development decreased. Working mothers tend to get less sleep when compared to their working husbands. Even in households where both parents are working, women in heterosexual relationships do on average 65% of child care, and are 2.5 times as likely to wake up in the middle of the night to do so.
Pay gap
With regard to the gender pay gap in the United States, the International Labour Organization notes that as of 2010 women in the United States earned about 81% of what their male counterparts did. While the gender pay gap has been narrowing since the passage of the Equal Pay Act, the convergence began to slow down in the 1990s. In addition, overall wage inequality has been increasing since the 1980s as middle-wage jobs decline and are replaced by larger percentages of both high-paying and low-paying jobs, creating a highly polarized environment. However, numerous studies dispute the claim that discrimination accounts for the majority of the pay gap. When adjusting for commonly chosen industries, occupational choices, hours worked, and benefits received, the pay gap narrows to about 5%, which has been attributed to less aggressive pay negotiation by women. It has been argued that these choices are the result of gender stereotypes. One study found that before age 30, women made more than men, and hypothesized that choosing a family over a career caused the female wage advantage to drop during their thirties. According to researchers at the University of California, Berkeley and the University of Illinois at Urbana–Champaign, the primary cause of this gap is discrimination, manifested in the tendency of women to be hired more frequently in lower paying occupations, in addition to the fact that male-dominated occupations are higher paying than female-dominated occupations, and that, even within comparable occupations, women are often paid less than men. In medicine, female physicians are compensated less, despite the fact that evidence suggests that the quality of care female physicians provide may be higher than that of male physicians. In addition to the gender pay gap, a "family gap" also exists, wherein women with children receive about 10–15% less pay when compared to women without children. 
According to Jane Waldfogel, professor of social work and public affairs at Columbia University, this family gap is a contributing factor to the United States' large gender pay gap. She also noted that men did not seem to be affected by this gap, as married men (who are more likely to have children) generally earned more than unmarried men.
Racial pay gap
The gender pay gap has persisted over the years for a variety of reasons. The gender pay gap refers to the median annual pay of all women who work full-time, year-round, compared with the pay of men of similar backgrounds. There is no single reason behind the gender pay gap; rather, it is the result of many factors. Another topic of discussion regarding the gender pay gap is the racial pay gap that exists in the United States today. Women can be discriminated against not only for their gender but also for their race. The racial pay gap in the workplace is another aspect of the broader pay gap issue that has yet to be overcome. Overall, different groups of women experience distinct gaps in pay in the workplace due solely to their race.
Asian
Generally, among women of all ethnicities and races, the hourly earnings of Asian and white women tend to be higher than those of African American and Hispanic women. A 2016 study by the Pew Research Center reported that Asian women, on average, receive about 18 dollars an hour, while white women earn 17 dollars, African American women get 13 dollars, and Hispanic women receive 12 dollars. The study also found that in 2015 Asian women earned about 87 cents per dollar earned by white men in median hourly earnings. About 45 percent of Asian American mothers provide nearly 40 percent of their families' income, making these households reliant on this income.
Hispanic
Hispanic women earn far less than both their male counterparts and women of other groups. According to the Institute for Women's Policy Research, in 2017 the median salary was $60,388 for a white male, $46,513 for white women, and $32,002 for Latina women. They earn the least among women of all ethnicities, including Asian and Black women workers. In 2017, for every dollar a white male worker earned, a Hispanic woman earned 53 cents, whereas a white female employee made 80.5 cents. That is 47 percent less than white males and 31 percent less than white females. The Institute for Women's Policy Research stated that in 2016, 31.2 percent of Latina women were unmarried and the primary provider for their family, and 21.3 percent were married and still the family's primary source of income. This struggle to provide on low salaries interferes with affording childcare services and with taking days off from work to care for themselves or their children. Some of the most attractive jobs for Latinas include retail and customer service. In sales, the median wage for white men is $20.12, whereas Latinas earn $11.44. In customer service, they earn $14.24, while their male counterparts earn $19.89. Hispanics observe a day called "Latina Equal Pay Day", usually in November, to spread awareness of the fact that Latinas have to work about 10 months longer than a white male to earn as much money as he does. In terms of educational level, Latinas make less than the average white woman. According to AAUW, the weekly income of white women with a bachelor's degree is $1,031, while Hispanic women with the same degree earn $915. 
No matter the education level, they earn less than their white female counterparts.
Native American
The pay gap between women and men is clearly evident. However, the several subgroups of women show how race, alongside gender, causes distinct disadvantages in the workplace for certain individuals. In general, a woman would have to work nearly four extra months per year in order to earn the same annual salary an average man receives. Native American women specifically would have to work an extra nine months per year to receive equal pay. This notion is supported by the fact that the average Native American woman earns approximately $0.58 for every dollar a white male earns. In comparison, a white woman earns $0.77 for each dollar earned by a white male. Historical wage disparities that derived from the unequal treatment of racial groups in years past still persist. This is argued to be particularly harmful because the disparity can also stem from the unequal placement of racial groups in specific neighborhoods, which results in inferior education. In turn, this can be one of the causes, alongside discrimination, of why Native American women experience a larger than normal wage disparity. This continued neighborhood segregation, in turn, perpetuates a cycle of impoverishment for Native Americans. The annual wage gap between a Native American woman and a white man is approximately $24,443. Given that women in general already face an alarming wage gap compared to men, this large number indicates both a serious disparity and the large role race plays in wage disparities within genders. As a benchmark, the median annual pay in the United States for a full-time, year-round working Native American woman is $33,571. This, alongside the fact that 67% of Native American mothers are breadwinners, or in other words the person whose wages support the family, shows how the wage disparity has a direct effect on the impoverished lifestyle of certain racial groups.
Caucasian
The pay gap between Caucasian women and Caucasian men is substantial. In 2018, the median weekly pay for all Caucasian women with full-time jobs was $789, while the median weekly pay for men was $973. These numbers show that, on average, white women make around 81% of what white men do. While these numbers clearly describe the inequality between white men and white women, the pay gap between them is much narrower than that for other groups, including Hispanic, Native American (American Indian) and Native Hawaiian women. New awareness of gender inequality in the workplace has caused the annual earnings of women to increase by 1.6% from 2016 to 2017. While women's median annual earnings may not increase significantly every year, they are generally on an upward trend. One of the biggest factors that leads to the gap between white men and women is parenting. While many white women stay home to take care of their families, men continue to work and earn money. When white women eventually go back to work, their average earnings decrease by around 39% compared to women who were not parenting a child. If the government were to implement free child care services, women would have greater opportunity to work, and despite the gradual decrease in the pay gap between men and women, it has been argued that further legislation is needed for the pay gap to eventually disappear. A big difference in salary earnings between Caucasian men and women is due to job titles and experience. 
Despite these differences, studies have shown that women will make $0.98 for every man's $1.00 even if they have the exact same job title and experience. This is a prime example of the inequalities Caucasian women face in the workforce. Despite all of these inequalities for Caucasian women in the workplace, there are many ways to reduce the pay gap. One way the government can help women level the playing field would be to add mentoring programs. These programs could help women negotiate their salaries and working conditions. African American As of 2017, African American women make 61 cents to the white man's dollar. This is much lower than the earnings ratio for all women in the United States, which is 80 cents to the white man's dollar. According to data from the Economic Policy Institute, the median earnings of full-time, full-year Black male workers are about $5,000 higher than the earnings of Black women, so even within their own race they come out behind. In the top 25 states with the largest numbers of Black women working full-time, year-round, the pay compared to white men ranges from about 48 to 68 cents for every dollar. For example, in places like Washington, D.C. and Mississippi, Black women still make only 55 cents to the white man's dollar even though those places have the highest shares of full-time working Black women. The Black female pay disparity is significant because 80% of Black women are the sole source of income in their households. Four million Black women are reported to be the heads of their households; of these, 1.2 million are living below the poverty line. There are many reasons why these disparities exist. Black women are more likely to be found occupying lower-paying service jobs than any other demographic, meaning that the chances of finding them in higher-paying engineering or managerial positions are very slim. Only 2.2% of all board seats in Fortune 500 companies are held by Black women. Very few Black women advance to executive leadership positions, in part because there are few other Black women in those positions to mentor or network with, something their white counterparts have more access to. Black women also report dealing with microaggressions and harassment from colleagues in the workplace at high rates due to their race or sex. Many feel they have to outwork or outperform their counterparts in order to be seen as equal in the workplace. These barriers prevent African American women from occupying the positions they need in order to reach higher, better-paying positions. Education Literacy and enrollment in primary and secondary education are at parity in the United States, and women are overrepresented in tertiary education. There is, however, a notable gender segregation in degree choice, correlated with lower incomes for graduates with "feminine" degrees, such as education or nursing, and higher incomes for those with "masculine" degrees, such as engineering. Females started outnumbering males in higher education in 1992. Gender inequality in elementary and middle schools To study gender inequality in elementary and middle schools, researchers from NYU and Indiana University used data from the nationally representative Early Childhood Longitudinal Study, specifically the 1998 to 1999 and 2010 to 2011 cohorts. When comparing average math test scores of boys and girls in kindergarten, they did not discover an average gender gap. 
However, when looking at data from students in second or third grade, the researchers discovered that boys perform better on the math tests (by a standard deviation of 0.25 average gender gap). The researchers also found that when teachers were asked to compare a boy and a girl of the same socio-economic status and race who received the same scores on math tests and had similar behavioral records in school, teachers overwhelmingly stated that the boy had superior mathematical abilities, a finding replicated in a study over a decade later. It is therefore possible that the gender gap in mathematical abilities among kindergarteners could be much less apparent in the United States today if teachers exhibited less gender bias in evaluating students' abilities, as evidenced by these replicated studies that demonstrate a systematic undervaluing of girls' mathematical abilities by teachers, which has likely contributed to a false perception of girls' abilities as being lower than test scores would indicate. So often in our society, girls receive signals from an early age that they are not good at math, or that boys are simply better. This can occur at home, when wives ask their husbands for help when it comes to math. In 2013, women received 57% of all bachelor's degrees, however they only received 43% of math degrees, 19% of engineering degrees, and 18% of computer science degrees. At school and at home, many young girls receive the message that they either "have the math gene or they do not." When a mother tells her daughter that she wasn't good at math in school, oftentimes, the daughter's mathematical achievement will decrease. Oftentimes, women do not realize they are sending these messages to their daughters. Gender differences in degree choices Specific to university and college campuses, gender inequalities can be seen when looking into the demographics of particular majors. This initial entrance into a particular field of study is shaped by an individual's desires to take a certain set of classes, leading to somewhat specific career opportunities and subsequently, salaries. When exploring the depths of college majors, there is a tendency for each gender to cluster into certain majors. These stereotypical "masculine" and "feminine" degrees are key indicators to the inequalities women face in certain fields, specifically within the sciences and mathematics. Based on the Department of Education collections of data from the College Class of 2015, "women earn the large majority of degrees in health professions, psychology, education, English and communication, while men earn the large majority of degrees in engineering, computer science, and theology." These exact percentages can be seen in the table distributed by the Department of Education, exemplifying the overall majority men have in the STEM field. Although not designated for specific genders, majors and minors at universities and colleges carry different stigmas for who should/shouldn't be part of the program. The results of Sylvia Beyer's 1995 study surveying 154 female and 111 male students from the University of Wisconsin-Parkside, to test the accuracy of gender stereotypes, conclude that students believe men and women are concentrated in different fields and that specific majors are deemed "masculine" and "feminine." The participants categorized the masculine majors with computer science, chemistry, business, history, and mathematics, while they placed the feminine majors as psychology, communication, music, and art. 
These perceptions may not accurately describe the gender percentages in each field, but prove that men are more likely to be seen in STEM concentrations than women. Additionally, Yale researchers have published studies which prove that young male scientists are more likely to be favored than female scientists with the same candidacy. As of 2013, only of Physics PHDs were awarded to women and only 14% of physics professors were female. A large factor in the major and minor inequalities seen at the college level come from the encouragement of peers and educators to go forth in certain subjects. Ultimately, women are not receiving the same support and backing as their male counterparts, and thus, do not pursue STEM fields. Since a large sum of money lies in these occupations, women are not receiving an equal share, further perpetuating gendered salary inequalities. Gender inequality in graduate school expectations According to researchers at the University of Mississippi, undergraduate women have higher rates of expected educational attainment than men at the bachelors, masters, professional, and PhD degree levels. Women also report higher rates of marriage, desire to be married, parenthood, and lower average age of expected marriage. The study links these higher rates of a desire to have children with higher expectations for educational attainment at the masters and professional degree levels for females. Respondents who indicate they are likely to stay at home to help care for their children are less likely to expect a master's degree, professional degree, or PhD compared to those expecting to solely acquire a bachelor's degree. The latter category of respondents contains higher rates of women than men (by a ratio greater than 10:1) due to societal norms in the United States for women to take care of children in the family. Child-rearing may account for a contributing cause of inequality in educational outcomes among men and women in the United States. The research shows negative associations between both marriage and expectations of achieving a master's degree or PhD, and having children and expectations of achieving professional degrees or PhDs for women, but not men. A larger positive association was found between anticipated age at parenthood and an expectation to achieve a graduate or professional degree for women than for men. The relationship between expected age of marriage and expectation to achieve a PhD was also higher for women. While women earned a majority of total graduate school degrees in 2016 (57.5% female compared to 42.5% male), men still earned more graduate degrees among higher-paying disciplines, such as in business (54.9% male compared to 45.1% female), engineering, (75.3% male compared to 24.7% female) and mathematics and computer science (68.5% male compared to 31.5% female). Gender inequality in representation at elite institutions Women have recently surpassed men in the ratio of students enrolled in colleges around the nation. In 2017, women were awarded a majority of doctoral degrees, at 53%, for the ninth year in a row. With striking progress from the late 1900s, gender inequality in higher education persists with the consideration of female representation at elite universities. The higher representation of women is largely attributed to schools with "higher acceptance rates, lower faculty/student ratios, lower standardized test scores, and lower fees"; thus, underrepresentation still persists amongst the top schools in the nation. 
The effects of a less notable degree then play out in the workplace and in job recruiting processes. Scholars have attributed this inequality to part-time enrollment and to the strong engineering focus characteristic of elite universities. With fewer females enrolling in STEM programs, women are less likely to attend universities notable for these programs. Other scholars argue that gender inequality at elite institutions is not an issue of access, attributing the issue to decentralized school systems, such as the existence of female-only universities (which individually draw a large segment of women attending college). Gender inequality in faculty Women in academia face many challenges in terms of pay, rank, and faculty composition. Although female faculty members' salaries are gradually increasing, the pay gap continues to widen or remain unchanged. This is because male faculty members already earn significantly more than their female counterparts. The wage gap is greatest in private independent colleges. The explanations that have been brought forward for this persistent disparity relate to women's positions in institutions and disciplines. Within doctoral-granting institutions, which have the highest salary scales, women are outnumbered two to one. Women are extremely underrepresented in high-paying academic disciplines, like science, business, and law, yet overrepresented in low-paying fields, such as English, romance languages, and education. Not only is there a gap in salary but in rank as well. Research has shown that the full-time faculty of top universities averages only 34 percent female. The disparity worsens as faculty rank increases. Within top universities, the only category in which female faculty hold the higher proportion is "Non-Eligible for Tenure." In universities overall, men continue to make up a disproportionate share of full professors, while women make up a majority of assistant professors, instructors, and lecturers. Health care Gender inequality is still seen in health care. When seeking emergency room care for serious conditions such as stroke and heart attack, women are 33% more likely to receive a misdiagnosis than men (Miller). In addition to receiving incorrect treatment, women seeking care for autoimmune disorders, which affect more women than men, see an average of five different doctors over a span of four years before receiving a diagnosis (Miller). Women's health has come a long way, and with the FDA and NIH making new policies that include more gender-specific research on women, the gap is slowly closing. Current issues for men Achievement gap in school For the past fifty years, there has been a gap in the educational achievement of males and females in the United States, but which gender has been disadvantaged has fluctuated over the years. In the 1970s and 1980s, data showed girls trailing behind boys in a variety of academic performance measures, specifically in test scores in math and science. Data from the last twenty years show a general trend of girls outperforming boys in academic achievement in terms of class grades across all subjects and college graduation rates, but boys scoring higher on standardized tests and being better represented in the higher-paying and more prestigious job fields like STEM (science, technology, engineering, and math). 
Graduation rates According to recent data (from 2007), 55 percent of college students are female and 45 percent are male. From 1995 until 2005, the number of males enrolled in college increased by 18 percent, while the number of female students rose by 27 percent. Males are enrolling in college in greater numbers than ever before, yet fewer than two-thirds of them are graduating with a bachelor's degree. The numbers of both men and women receiving a bachelor's degree have increased significantly, but the rate of increase for female college graduates exceeds that for males. A higher proportion of men (29.4%) hold bachelor's degrees than women (26.1%). In 2007, the United States Census Bureau estimated that 18,423,000 males over the age of 18 held a bachelor's degree, while 20,501,000 females over the age of 18 held one. In addition, fewer males held master's degrees: 6,472,000 males compared to 7,283,000 females. However, more men held professional and doctoral degrees than women: 2,033,000 males held professional degrees compared to 1,079,000 females, and 1,678,000 males had received a doctoral degree compared to 817,000 females. Selective service In the United States, most male US citizens and residents must register with the Selective Service System within 30 days of their 18th birthday. Those who fail to register may be punished by up to five years in prison and a fine of up to $250,000, although no non-registrants have been prosecuted since January 1986. They may also be ineligible for citizenship (for immigrants), federal student financial aid, federal job training and federal employment, and, in certain states, state employment and even driver's licenses. Suicide In the United States, the male-to-female teenage suicide death ratio is estimated at 3:1. Typically, males are three to five times more likely to die by suicide than females. The reported difference in suicide rates for males and females is partially a result of the methods used by each gender. Although females attempt suicide at a higher rate, they are more likely to use methods that are less immediately lethal. Males frequently die by suicide via high-mortality methods such as hanging, carbon monoxide poisoning, and firearms, in contrast to females, who tend to rely on drug overdoses. The higher male suicide rate is also associated with traditionally masculine norms such as hyper-independence, reluctance to seek help, emotional unexpressiveness and social exclusion, according to available research. A research paper from Portland University provided evidence that risk factors for male suicide include social-environmental factors (unemployment, discrimination, poverty), homophobic alienation, and stressful jobs. Traditional male roles in combination with psychosocial stress are cited as some of the main causes of high male suicide rates. Divorced men are twice as likely to die by suicide as married men, whereas for women there is no statistically significant difference in suicide risk across marital status categories. Male suicide became especially apparent during the COVID-19 pandemic, which sharply increased male suicide due to increases in unemployment, relationship strain, and failure at work. Homelessness At least 70% to 85% of all homeless people are men. 
However, it is estimated that the female homeless population is larger than counted, because many women who would be considered homeless do not identify themselves as homeless and avoid groups of homeless people and homeless shelters, fearing increased chances of being sexually assaulted. Occupational segregation into dangerous jobs Men are over-represented in dangerous jobs. The industries with the highest death rates are mining, agriculture, forestry, fishing, and construction, all of which employ more men than women. In one U.S. study, 93% of deaths on the job involved men, with a death rate approximately 11 times higher than that of women. Disproportionate victimization from police brutality Men are vastly overrepresented in police shootings. According to a study by Frank Edwards, Hedwig Lee, and Michael Esposito, the lifetime risk of being killed by police for a man in the United States is about 52 out of 100,000 (for Black men it is nearly 1 in 1,000), whereas for women it is only 3 out of 100,000. After controlling for legal and extralegal characteristics, a study published in the journal Justice Quarterly found that young Black and Hispanic males are at increased risk of citations, searches, arrests, and uses of force by the police. Biases in the criminal justice system According to a 2014 study, men on average receive 63% longer sentences than women do for the same crime. A meta-analysis of experimental research on mock jurors found that it was advantageous in sentencing for defendants to be physically attractive, female, white, and of high socioeconomic status. After controlling for extensive criminological, demographic and socioeconomic variables, a paper from the University of Georgia found that Black offenders, males and offenders with low education and low income receive substantially longer sentences. An analysis of data from 9,966 felony theft cases and 18,176 felony assault cases in California found a large gender disparity: females were more likely than similar males (and minorities) to receive charge reductions and probation. A multivariate analysis by the United States Sentencing Commission found that women of all races receive much lighter sentences than white male offenders. Other papers have confirmed the hypothesis that women receive significantly more lenient sentences than men in the criminal justice system. Current issues for transgender people Visibility, awareness, and public attitudes One of the largest factors that causes and perpetuates transgender inequality is a lack of understanding and awareness among cisgender people. A 2002 survey found that, of the American respondents polled, only 70% had heard of the term transgender, while 67% agreed that it is possible for a person to be born as one gender, but inside feel like another gender. In addition, the survey found that 61% of Americans believed that the country needs anti-discrimination laws to protect transgender individuals, 57% incorrectly believed that it was not legal to fire someone on the basis of their gender identity if they are trans, 53% believed being transgender was acceptable while 37% did not, 77% believed that transgender students should be allowed to attend public school, and 8% said they would refuse to work with a transgender co-worker. 
A 2012 study found that the heterosexual cisgender individuals who believe there are natural binary genders and there are natural differences between men and women are more likely to have negative attitudes toward transgender individuals. Events in the LGBT+ community such as Transgender Awareness Week and the International Transgender Day of Visibility are focused on educating and informing the public about transgender individuals and the challenges they face. Legal rights According to the Transformative Justice Law Project of Illinois, transgender people are "over-represented in the criminal legal system due to institutionalized oppression and increased poverty and criminalization." Many transgender individuals have difficulties correcting their name and gender on their ID and personal documents. According to the National Center for Transgender Equality, "only one-fifth (21%) of transgender people who have transitioned in the National Transgender Discrimination Survey have been able to update all of their IDs and records with their new gender and one-third (33%) had updated none of their IDs or records. At the time of the survey, only 59% had been able to update their gender on their driver's license or state ID; 49% had updated their Social Security Record; 26% their passport; and just 24% their birth certificate." In addition, those transgender people who are successful in correcting their ID and records often must undergo heavy invasions of privacy, including presenting proof of gender reassignment surgery, and those who cannot correct their identification documents often face higher levels of discrimination, since it effectively "outs" them as transgender. Some state appellate courts- including Kansas, Ohio, Texas, Florida, and Illinois- have upheld that the gender an individual is assigned at birth is their legal gender for life, even if the individual has undergone gender reassignment surgery or similar treatments, and therefore refuse to acknowledge the gender that transgender people identify as. There have been several legal cases in which transgender parents have lost custody and other parental rights on the basis of their gender. There have also been cases of the validity and legality of married heterosexual couples in which one partner is transgender being contested and, in some cases, the marriage has been voided. Work life and economics A 2007 study reported that between fifteen and fifty-seven percent of transgender individuals report some kind of employment discrimination; of these thirteen to fifty-six percent reported being fired due to their gender identity, thirteen to forty-seven percent reported that they were denied employment due to their gender identity, twenty-two to thirty-one percent reported harassment due to their gender identity, and nineteen percent reported being denied promotion due to their gender identity. Another study found that transgender respondents reported twice the national rate of unemployment, while transgender people of color reported four times the national rate of unemployment. This study also found that 90% of respondents reported some kind of workplace harassment, mistreatment or discrimination. Transgender pay gap According to the American Psychological Association, around 64% of transgender people have annual incomes of less than $25,000. 
Another study found that transgender individuals are nearly four times more likely to make less than $10,000 annually when compared to the general population; on the other end of the spectrum, only 14% of transgender respondents reported making more than $100,000 annually compared to 25% of the general population. In addition, transgender women reported their wages decreasing by nearly one-third following their gender transitions, while transgender men reported their wages increasing slightly (about 1.5%), according to one study. Social life Since many public spaces, including schools, are highly gendered with features such as gendered bathrooms and locker rooms, transgender people often face violence in these gendered areas. Transgender people are often asked to present their ID or answer other invasive questions when using a public restroom designated for the gender they identify as, and can often face discrimination and violence if their ID has not been corrected or if they do not "pass" as the gender they identify as. One study found that 71% of transgender respondents made efforts to hide their gender or gender transition to avoid discrimination, while 57% reported delaying their gender transition to avoid discrimination. Transgender individuals also face discrimination within the LGBT+ community, especially from cisgender gay men and lesbians. As a result, they often do not receive the same social support from the community that other queer individuals do. Education One study found that 78% of transgender individuals interviewed reported harassment in primary or secondary school, 35% reported physical assault, 12% reported sexual violence, and 6% reported being expelled. According to the study, the effect of this harassment was so severe that 15% of the respondents were forced to leave school at either the primary, secondary, or tertiary level. Transgender individuals also face barriers when applying to higher education, as was the case with a transgender woman rejected from the all-women's Smith College because she was not legally recognized as female in her home state. Health and violence Transgender individuals, especially transgender women, are at a high risk of suffering from domestic abuse due to invisibility, lack of access to support facilities such as shelters, and a lack of legal and social protection. Transgender individuals are also more likely to be sexually and physically assaulted, both by strangers and acquaintances, than cisgender individuals are. In addition, there are several factors that limit transgender people's access to health care facilities and proper medical care, including transphobia and the tendency of gender-segregated homeless and domestic violence shelters to refuse service to transgender and gender nonconforming individuals. The Anchorage Equal Rights Commission lost a lawsuit to force a women's shelter to admit a transgender woman. One study reported that 19% of transgender individuals interviewed reported being refused medical care due to their gender identity, while 28% reported being harassed in a medical setting and 2% reported violence toward them in a medical setting due to their gender identity. In the same study, 50 percent of transgender respondents reported needing to educate their medical providers about the health care needs of transgender individuals. 
Transgender individuals also reported HIV infections at four times the national average compared to cisgender individuals in one study conducted by the National Center for Transgender Equality and the National Gay and Lesbian Task Force. The NCAVP's 2012 Report on Lesbian, Gay, Bisexual, Transgender, Queer, and HIV-affected Hate Violence reported that over fifty percent of anti-LGBTQ homicide victims in 2012 were transgender women, a considerable increase from 2011, when transgender women made up 40% of victims. In addition, the report also found that, compared to cisgender people, transgender people were more than three times more likely to experience police violence. In terms of mental health, transgender individuals have much higher rates of suicide attempts than cisgender individuals, and it has been reported that between nineteen and twenty-five percent of the trans population have attempted suicide. Government policy In 1920, the Nineteenth Amendment to the United States Constitution, which ensured women's suffrage (although some individual states allowed women the right to vote as early as 1869), was ratified. In addition, the Women's Bureau of the Department of Labor was created to monitor working conditions for women in the workforce. In 1961, the President's Commission on the Status of Women was started, initially chaired by Eleanor Roosevelt. This commission found that women were suffering considerable workplace discrimination. In 1963, the Equal Pay Act was passed, which made it illegal for a woman to be paid less than a man working in the same position. The Civil Rights Act of 1964 also made discriminatory hiring on the basis of gender illegal. The affirmative action policy of 1965 was expanded in 1967 to cover women as well as racial minorities. In 1973, women's right to safe and legal abortion was established by the Supreme Court's ruling in Roe v. Wade. In 1968, sex-segregated job advertisements were declared illegal by the Equal Employment Opportunity Commission, a decision upheld by the Supreme Court in 1973; this allowed women to apply for higher-paying jobs formerly restricted to male applicants. In 1972, Title IX of the Education Amendments, which reads "No person in the United States shall, on the basis of sex, be excluded from participation in, be denied the benefits of, or be subjected to discrimination under any educational program or activity receiving federal financial assistance," was passed. In 1986, in the decision of Meritor Savings Bank v. Vinson, sexual harassment was established as illegal and discriminatory. The Family and Medical Leave Act of 1993 guarantees that new parents can retain their jobs for 12 weeks after the birth of a child; this unpaid leave is the only form of parental leave protected by law in the United States. In 1994, the Violence Against Women Act provided legal protection, as well as funds and services, for rape victims and victims of domestic violence. United States v. Virginia established in 1996 that gender-based admission practices violated the Fourteenth Amendment, and that establishing a separate all-female school would not suffice as an alternative to integrating an all-male school. Most recently, the Lilly Ledbetter Fair Pay Act of 2009 allows employees (usually female) who suffer pay discrimination to file a complaint with the government. 
The Equal Rights Amendment, which reads, "Equality of rights under the law shall not be denied or abridged by the United States or by any State on account of sex", was first introduced to Congress in 1923 and successfully passed both houses of Congress in 1972. However, it failed to be ratified by the required number of states and expired in 1982. The United States is one of only a few countries which have not ratified the UN Convention on the Elimination of All Forms of Discrimination against Women (the US has only signed the treaty). Rankings The World Economic Forum's Gender Gap Index for 2012 ranked the United States 22nd out of 135 countries for gender equality. The primary indicators of inequality were related to political empowerment, where the US was ranked 55th (32nd for women in ministerial positions and 78th for women in parliament). The US was ranked 33rd for health and survival, 8th for economic participation and opportunity, and tied for 1st (no inequality) in education. Since the Gender Gap report was first published in 2006, the US position has remained relatively stable in that index. However, the United States' score decreased between 2011 and 2012. The United Nations' Gender Inequality Index (part of the Human Development Report) gave the US a gender inequality score of 19% in 2017, with the country ranking 13th out of 173 countries in terms of the Human Development Index. In addition, the OECD's Better Life Index discusses a number of differences, but does not stress any in particular when it comes to gender. See also Affirmative action Civil Rights Act Double burden Education Amendments of 1972, Title IX Employment discrimination law in the United States Equal Pay Act of 1963 Equal Rights Amendment Gender inequality Gender role Lilly Ledbetter Ruth Bader Ginsburg Work-family balance in the United States References Further reading External links New report documents persistent gender inequalities in U.S. media The percentage of women and men in each profession (U.S. Department of Labor, 2016) Inequality Women's rights in the United States Sexism in the United States United States
420994
https://en.wikipedia.org/wiki/Sigma
Sigma
Sigma (uppercase Σ, lowercase σ, lowercase in word-final position ς) is the eighteenth letter of the Greek alphabet. In the system of Greek numerals, it has a value of 200. In general mathematics, uppercase Σ is used as an operator for summation. When used at the end of a letter-case word (one that does not use all caps), the final form (ς) is used. In Ὀδυσσεύς (Odysseus), for example, the two lowercase sigmas (σ) in the center of the name are distinct from the word-final sigma (ς) at the end. History The shape (Σς) and alphabetic position of sigma are derived from the Phoenician letter shin. Sigma's original name may have been san, but due to the complicated early history of the Greek epichoric alphabets, san came to be identified as a separate letter in the Greek alphabet, represented as Ϻ. Herodotus reports that "san" was the name given by the Dorians to the same letter called "sigma" by the Ionians. According to one hypothesis, the name "sigma" may continue that of Phoenician samekh, the letter continued in Greek by xi, represented as Ξ. Alternatively, the name may have been a Greek innovation that simply meant 'hissing', from the root of σίζω (from Proto-Greek *sig-jō 'I hiss'). Lunate sigma In handwritten Greek during the Hellenistic period (4th–3rd century BC), the epigraphic form of Σ was simplified into a C-like shape, which has also been found on coins from the 4th century BC onward. This became the universal standard form of sigma during late antiquity and the Middle Ages. Today, it is known as lunate sigma (uppercase Ϲ, lowercase ϲ), because of its crescent-like shape, and is still widely used in decorative typefaces in Greece, especially in religious and church contexts, as well as in some modern print editions of classical Greek texts. A dotted lunate sigma (sigma periestigmenon, Ͼ) was used by Aristarchus of Samothrace (220–143 BC) as an editorial sign indicating that the line marked as such is at an incorrect position. Similarly, a reversed sigma (antisigma, Ͻ) may mark a line that is out of place. A dotted antisigma (antisigma periestigmenon, Ͽ) may indicate a line after which rearrangements should be made, or point to variant readings of uncertain priority. In Unicode, these lunate sigma variants (Ϲ, ϲ, Ͻ, Ͼ and Ͽ) are encoded as separate characters. Derived alphabets Sigma was adopted in the Old Italic alphabets beginning in the 8th century BC. At that time a simplified three-stroke version, omitting the lowermost stroke, was already found in Western Greek alphabets, and was incorporated into classical Etruscan and Oscan, as well as in the earliest Latin epigraphy (early Latin S), such as the Duenos inscription. The alternation between three and four (and occasionally more than four) strokes was also adopted into the early runic alphabet (early form of the s-rune). Both the Anglo-Saxon runes and the Younger Futhark consistently use the simplified three-stroke version. The forms of the Coptic letter sima (2nd century BC) and of the Cyrillic letter es (С; 9th century AD) are derived from lunate sigma. Uses Language and linguistics In both Ancient and Modern Greek, sigma represents the voiceless alveolar fricative /s/. In Modern Greek, this sound is voiced to [z] before voiced consonants. The uppercase form of sigma (Σ) was re-borrowed into the Latin alphabet (more precisely, the International African Alphabet) to serve as the uppercase of modern esh (lowercase: ʃ). In phonology, σ is used to represent syllables. In linguistics, Σ represents the set of symbols that form an alphabet (see also computer science). 
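For example, for the binary alphabet Σ = {0, 1}, the set of all finite strings over Σ is written Σ* = {ε, 0, 1, 00, 01, 10, 11, 000, …}, where ε denotes the empty string, and a formal language over Σ is simply a subset L ⊆ Σ*. 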
Science and mathematics Mathematics In general mathematics, lowercase σ is commonly used to represent unknown angles, as well as serving as a shorthand for "countably", whereas Σ is regularly used as the operator for summation, e.g. the sum of the first four squares can be written Σ i² (for i = 1 to 4) = 1² + 2² + 3² + 4² = 30. In mathematical logic, Σₙ is used to denote the set of formulae with bounded quantifiers prefixed by n alternating blocks of quantifiers beginning with an existential quantifier. This notation reflects an indirect analogy between the relationship of summation and products on one hand, and existential and universal quantifiers on the other. See the article on the arithmetic hierarchy. In statistics, σ represents the standard deviation of a population or probability distribution (where mu or μ is used for the mean). In topology, a σ-compact topological space is one that can be written as a countable union of compact subsets. In mathematical analysis and in probability theory, there is a type of algebra of sets known as a σ-algebra (also called a σ-field). Related terms include σ(A), denoting the σ-algebra generated by a set A, and σ-finite measure (see measure theory). In number theory, σ is included in various divisor functions, especially the sigma function or sum-of-divisors function. In applied mathematics, σ(T) denotes the spectrum of a linear map T. In complex analysis, σ is used in the Weierstrass sigma-function. In probability theory and statistics, Σ denotes the covariance matrix of a set of random variables, sometimes written in boldface to distinguish it from the summation operator. Theoretical spectral analysis uses σ for the standard deviation, as opposed to lowercase μ for the absolute mean value. Biology, physiology, and medicine In biology, the sigma receptor (σ receptor) is a type of cell surface receptor. In biochemistry, the σ factor (or specificity factor) is a protein found in RNA polymerase. In bone physiology, the bone remodeling period (i.e., the life span of a basic multicellular unit) has historically been referred to as the sigma period. In early 20th-century physiology literature, σ was used to represent milliseconds. Business, finance, and economics In finance, σ is the symbol used to represent the volatility of stocks, usually measured by the standard deviation of logarithmic returns. In accounting, Σ indicates the balance of invoice classes and the overall amount of debts and demands. In macroeconomics, σ is used in equations to represent the elasticity of substitution between two inputs. In the machine industry, Six Sigma (6σ) is a quality model based on the standard deviation. Chemistry Sigma bonds (σ bonds) are the strongest type of covalent chemical bond. In organic chemistry, σ symbolizes the sigma constant of the Hammett equation. In alchemy, Σ was sometimes used to represent sugar. Engineering and computer science In computer science, Σ represents the set of symbols that form an alphabet (see also linguistics). Relational algebra uses σ to denote selections, which are a type of unary operation. In machine learning, σ is used to denote the sigmoid (logistic) function, σ(x) = 1/(1 + e^(−x)). In radar jamming or electronic warfare, radar cross-sections (RCS) are commonly represented as σ when measuring the size of a target's image on radar. In signal processing, σ denotes the damping ratio of a system parameter. In theoretical computer science, Σ serves as the busy beaver function. Physics In nuclear and particle physics, σ is used to denote cross sections in general (see also RCS), while Σ represents macroscopic cross sections [1/length]. 
The symbol σ is used to denote the Stefan–Boltzmann constant. In relation to the fundamental properties of materials, σ is often used to signify electrical conductivity. In electrostatics, σ represents surface charge density. In continuum mechanics, σ is used to signify stress. In condensed matter physics, Σ denotes self-energy. The symbol σ can also be used to signify surface tension (alternatively, γ or T are used instead). In quantum mechanics, σ is used to indicate the Pauli matrices. In astronomy, σ represents velocity dispersion, and the prefix Σ is used to designate double stars of the Catalogus Novus Stellarum Duplicium by Friedrich Georg Wilhelm von Struve. In particle physics, Σ represents a class of baryons. Organizations During the 1930s, an uppercase Σ was in use as the symbol of the Ação Integralista Brasileira, a fascist political party in Brazil. Sigma Corporation uses the name of the letter but not the letter itself; in many Internet forums, however, photographers refer to the company or its lenses using the letter. Sigma-Aldrich incorporates both the name and the character in its logo. Character encoding Greek sigma Coptic sima Mathematical sigma These characters are used only as mathematical symbols. Stylized Greek text should be encoded using the normal Greek letters, with markup and formatting to indicate text style. See also Antisigma Greek letters used in mathematics, science, and engineering Sampi Sho (letter) Stigma (letter) Sibilant consonant Summation (Σ) Combining form "sigm-" (e.g. sigmodon, sigmurethra, etc.) Derivative "sigmoid" (e.g. sigmoid sinus, sigmoid colon, sigmoidoscopy, etc.) References Notes Citations Greek letters Letters with final form
26404784
https://en.wikipedia.org/wiki/University%20of%20Waterloo%20Faculty%20of%20Engineering
University of Waterloo Faculty of Engineering
The Faculty of Engineering is one of six faculties at the University of Waterloo in Waterloo, Ontario, Canada. It has 8,569 undergraduate students, 2021 graduate students, 334 faculty and 50,669 alumni making it the largest engineering school in Canada with external research funding from Canadian and international partners exceeding $86.8 million. The Faculty of Engineering houses 8 academic units (2 schools, 6 departments) and offers 15 degrees in a variety of disciplines. All undergraduate students are automatically enrolled in the co-operative education program, in which they alternate between academic and work terms throughout their five years of undergraduate career. History The faculty was originally conceived in 1957 as an extension of the then Waterloo College (now known as Wilfrid Laurier University). It was the college's first faculty, originally named the Faculty of Science and Engineering of the Waterloo College Associate Faculties. The first class consisted of 74 male engineering students. In October of 1957, the students started their work terms in what was the first co-operative education program in Canada. These students alternated being in school and working in industry every three months. The program was later changed to a four-month term system, and has been adopted by all other faculties in their co-op programs. In 1959, the "associate faculties" of Waterloo College separated and became the University of Waterloo and the former Faculty of Science and Engineering became the Faculty of Engineering. On Saturday, May 27, 1961, the faculty's first engineering graduates were awarded the Master of Applied Science in Electrical Engineering. William Cousins Miller, John Shewchun, Ferenc Takacs, and William Jacob Vetter received their degrees from Professor R. G. Stanton, Chairman, Faculty of Graduate Studies. In 2020, Dr. Mary Wells was appointed the Dean of Engineering. She was previously a professor in the Department of Mechanical and Mechatronics Engineering for 10 years. University of Waterloo Engineering partnered with other Ontario Faculties of Engineering to create the IBET PhD Project, to foster equitable and inclusive research environments to increase the presence of Indigenous and Black academics in STEM. The program has expanded to include 13 academic institutions across Canada and includes IBET Momentum fellowships and academic mentoring to help increase the number of Indigenous and Black academics at Canadian post-secondary institutions. Waterloo Engineering has one of the highest percentages of women in undergraduate programs in Canada. Undergraduate Programs There are 15 undergraduate programs available through the faculty: Architectural, Biomedical, Chemical, Civil, Computer, Electrical, Environmental, Geological, Management, Mechanical, Mechatronics, Nanotechnology, Software and Systems Design Engineering as well as Architecture. Within the engineering programs, there are options that can be taken in the later years of undergraduate studies. Options include mechanical, computer, electrical, or control systems for the Mechatronics Engineering program. Each of Waterloo Engineering's eight academic units (schools and departments) also offers graduate degrees. Architectural Engineering Architectural Engineering is the newest undergraduate program, introduced in September 2018. 
It was designed to produce engineers technically skilled in the whole scope of building design, construction, assessment, repair and refurbishment, with an emphasis on communication, collaboration and design. There were 226 architectural engineering undergraduate students as of Fall 2020. Architecture The Architecture program is a pre-professional program that focuses on the culture and practice of design. Students learn about the workings of society and culture, the principles of physics, materials and techniques of construction, human interaction with the natural and built environment, historical processes, critical thought and the diverse forms of creative expression. There were 381 undergraduate architecture students as of Fall 2020. Biomedical Engineering The Biomedical Engineering program is part of the Department of Systems Design Engineering. It is an interdisciplinary program that focuses on design; combining fields such as biochemistry, cellular physiology, anatomy, molecular biology and physiology with engineering principles, the program centres on the creation and development of biomedical systems and devices. There were 379 undergraduate biomedical engineering students as of Fall 2020. Chemical Engineering The Chemical Engineering program deals with the use and transformation of raw materials and energy. Students explore areas such as biotechnology, advanced materials, pollution control, alternative energy and other industry applications. There were 635 undergraduate chemical engineering students as of Fall 2020. Civil Engineering Offered only as a co-op program, the Civil Engineering program at the University of Waterloo is administered by the Department of Civil and Environmental Engineering. The program prepares its graduates for work in fields that include traffic engineering, structural design, building structures, bridges, sanitation (public health), harbours, aerospace, highways (roads and streets), railroads, pipelines, foundations, tunnelling (rock mechanics), surveying and cartography, urban and regional planning and overall project planning. There were 624 civil engineering undergraduate students as of Fall 2020. Computer Engineering Students in the Computer Engineering program learn about hardware, software, and computing principles in the context of important devices and systems, such as the systems that control phones, cars, planes, and robots. They share classes with Electrical Engineering students until the second semester of the second year, with the Computer Engineering class required to take Embedded Microprocessor Systems and Systems Programming. There are 1,373 computer engineering students, making it the Faculty of Engineering's largest undergraduate program. Kankar Bhattacharya will become the chair of the Department of Electrical and Computer Engineering in May 2022. Electrical Engineering Students in the Electrical Engineering program learn fundamental skills in electronics, circuit analysis and electromagnetics. They share classes with Computer Engineering students until the second semester of the second year, with Electrical Engineering students then required to take Advanced Calculus II and Electrical Properties of Materials. For Fall 2020, Electrical Engineering had 445 undergraduate students registered. Environmental Engineering Environmental Engineering is an accredited engineering program administered by the Department of Civil and Environmental Engineering. 
The program is operated only as a co-op program and focuses on water management and treatment, solid waste management, remediation of surface water and groundwater systems, biotechnology and contaminant transport. The program offers elective courses in Earth Sciences, Biology, Chemistry, Ecology and Planning. There were 278 undergraduate students enrolled in the program as of Fall 2020. Geological Engineering The Geological Engineering program is part of the Department of Civil and Environmental Engineering. More than half of the courses that students take are offered through the Faculty of Science. The Faculty of Engineering courses are taken together with environmental engineering students. Geological Engineering students move into their own stream in second year after taking the same first-year courses as environmental and civil engineering students. There were 67 geological engineering undergraduate students as of Fall 2020. Management Engineering The Management Engineering program at the University of Waterloo is administered by the Department of Management Sciences. This program covers subjects such as Mathematical Optimization, Stochastic Modelling, Process Improvement and Operations Research. There were 419 management engineering undergraduate students as of Fall 2020. Mechanical Engineering The Mechanical Engineering program at the University of Waterloo is administered by the Department of Mechanical & Mechatronics Engineering. It is one of the largest engineering programs at the University of Waterloo, with 1,038 undergraduate students attending as of Fall 2020. Graduates have a foundation in mechanics, power, control, and manufacturing. Mechatronics Engineering Mechatronics Engineering, commonly referred to as "Tron", is administered by the Department of Mechanical and Mechatronics Engineering and was introduced to the University of Waterloo in 2003, with the first class graduating in 2008. It is a multidisciplinary field of study, combining computers with electro-mechanical technology in order to create robotic, wearable and autonomous systems. Though the program begins with students required to take a spectrum of mandatory courses, it later allows a broad range of electives to enable specialization. Technical electives in fourth year include Artificial Intelligence, Autonomous Robotics, Multi-sensing Systems, Computer Architecture, and Micro Electro Mechanical Systems, among others. Through these electives, students are able to strengthen skills within the mechanical, electrical, or computer aspects of mechatronics. In the first year of undergraduate studies, Mechatronics Engineering students are placed in either Stream 4 or Stream 8, with students in the former required to obtain their first co-op placement after one academic term, as opposed to after two academic terms for Stream 8 students. The department is currently chaired by Dr. Michael Collins and has 1,071 undergraduate students enrolled in total as of January 2020. Nanotechnology Engineering Nanotechnology Engineering at the University of Waterloo is administered by the Department of Chemical Engineering as well as the Department of Electrical and Computer Engineering. Founded in 2005, it is Canada's first accredited undergraduate Nanotechnology Engineering program and the only program of its type in North America. Graduates work in many different fields such as nano-engineered materials, nano-electronics, nano-biosystems, and nano-instruments. 
In the first year of undergraduate studies, Nanotechnology is offered only in Stream 8. Built in 2012, the $160-million, 285,000-square-foot Mike & Ophelia Lazaridis Quantum-Nano Centre (also known as the Quantum Nano Centre, or simply QNC) is home to this engineering program. The QNC is composed of two main buildings designated for the Waterloo Institute for Quantum Computing (IQC) and the Waterloo Institute for Nanotechnology (WIN), and it includes classrooms for instruction and laboratories for research and development. The facilities operate with control for vibration, humidity, electromagnetic radiation, and temperature. Cleanroom facilities are constructed upon a separate building foundation to keep vibrations at less than a micron. Software Engineering Software Engineering at the University of Waterloo is an independent, interdisciplinary program supported by both the Faculty of Mathematics and the Faculty of Engineering, with 592 students enrolled as of Fall 2020. Graduates of this program earn a Bachelor of Software Engineering (BSE) degree. This program applies computer science and engineering principles and practices to the creation, operation, and maintenance of software systems. Systems Design Engineering Systems Design Engineering at the University of Waterloo is administered by the Department of Systems Design Engineering. Systems Design is focused on the creation or modification of systems, including human physiological and psychological systems, ecological systems, transportation systems, communication systems, energy systems and mechatronic systems. The program had 486 undergraduate students enrolled as of Fall 2020. PHP creator Rasmus Lerdorf graduated from this program in 1993. Academic Units The faculty presently houses six departments, one school, and one centre. They are the School of Architecture, Department of Chemical Engineering, Department of Civil and Environmental Engineering, Department of Electrical & Computer Engineering, Department of Management Sciences, Department of Mechanical and Mechatronics Engineering, Department of Systems Design Engineering, and Centre for Business, Entrepreneurship and Technology (CBET). Architecture joined the Faculty of Engineering after leaving Environmental Studies in May 2005, while the Conrad School of Entrepreneurship & Business joined the faculty in June 2006, not having previously been affiliated with a faculty. School of Architecture The School of Architecture in the Faculty of Engineering at the University of Waterloo offers a Canadian Architectural Certification Board (CACB) accredited professional program consisting of a four-year Honours Bachelor of Architectural Studies followed by a Professional Master of Architecture. Located in Cambridge, Ontario, it offers a fully cooperative professional program and has been rated the greenest architecture curriculum in Canada. It is also the only Canadian school of architecture to have a permanent international facility, which has operated in Rome, Italy since 1979. Conrad School of Entrepreneurship & Business The University of Waterloo Conrad School of Entrepreneurship and Business (the Conrad School) is part of the Faculty of Engineering. The school offers a Master of Business, Entrepreneurship and Technology (MBET) program; professional development programs; research; and undergraduate-level academic courses. 
MBET alumnus include Noah Buchman - the founder of Propel Holdings, an online artificial intelligence fintech platform that provides loans to people with low credit ratings - and Rachel Bartholomew founder of Hyivy Health, the developers of a pelvic health rehabilitation system. Research Centres and Labs University of Waterloo Engineering's research collaborations include partnerships with almost 800 Canadian companies and 300 international companies as well as many leading universities. In 2019/2020 external research funding from Canadian and international partners exceeded $86.8 million. Research Centres Centre for Advanced Materials Joining Centre for Advancement of Trenchless Technologies Centre for Bioengineering and Biotechnology Centre for Control of Emerging Contaminants Centre for Intelligent Antenna and Radio Systems Centre for Pattern Analysis and Machine Intelligence Centre for Pavement and Transportation Technology Cybersecurity and Privacy Institute Games Institute Giga-to-Nano Electronics Institute for Computer Research Institute for Innovation Research Institute for Polymer Research Institute for Quantum Computing Water Institute Waterloo Artificial Intelligence Institute Waterloo Centre for Automotive Research Waterloo Centre for Groundwater Research Waterloo Centre for Microbial Research Waterloo Institute for Nanotechnology Waterloo Institute for Sustainable Energy Labs Autonomous Vehicle Research and Intelligence Lab (AVRIL) Engineering Ideas Clinic Sedra Student Design Centre Robohub Hack The North University of Waterloo Engineering hosts Hack The North, Canada's biggest hackathon that was launched in 2017. Over 3,000 students around the world participate, creating software or hardware projects from scratch in 36 hours. Hive Mind Program Originally conceived in 2020, the Hive Mind program is run out of Waterloo Engineering Outreach's Women In Engineering program. It offers online and one-on-one tutoring sessions to high school students of all genders in advanced functions, calculus, chemistry, and physics. Since its launch, over 60 percent of Hive participants have been students who identify as female. Student life Many students in the faculty participate in the University of Waterloo Engineering Society. Due to the co-op nature of the program, the society is split into two sub societies, named "Society A" and "Society B". Each term, one society is in academic term, and the other on co-op. The society in academic term runs events both on and off campus as well as services in the society's office. It runs charity events that include an annual "Bus Push" where student volunteers raise money by pulling a Grand River Transit bus 7 kilometres from campus to Kitchener City Hall. First-year students receive yellow hard hats during their orientation week symbolizing their adherence of the principles of engineering. The society's mascot is a 60" (1,524mm) pipe wrench known simply as the TOOL (formerly the RIDGID TOOL). TOOL appears at events organized by the Engineering Society. Engineering students are grouped into classes based on their program. Class sizes never exceed 150 students and vary between disciplines. There are over 100 students for Nanotechnology, Computer, Electrical, Mechanical, Mechatronics, Software and Systems Design Engineering, and smaller class sizes for Chemical, Environmental, Management, Civil and Geological Engineering as well as Architecture. University of Waterloo Engineering hosts speedfriending events that enable students to interact virtually. 
University of Waterloo Engineering students participate in engineering-related competitions. In 2016, a student team won the grand prize at the Hydrogen Education Foundation's 2016 Hydrogen Student Design Contest, a competition between teams from around the globe. In 2018, a University of Waterloo Engineering student team took first place in the Intercollegiate Rocket Engineering Competition at Spaceport America in New Mexico. In 2019, a team won the Team Sportsmanship award in the same competition. At the 2020 Canadian Engineering Competition, University of Waterloo Engineering student teams won first place in the Innovative Design category and the Programming challenge. In 2021, a student team topped the standings in the VEX U robotics event operated by the Robotics Education and Competition Foundation, ranking first among 77 teams from around the world. In 2022, a Waterloo Engineering student won the top prize in the student category of Build the Impossible, an international competition organized to raise global visibility for mass timber innovation. WatPD-Engineering (Formerly PDEng) The PDEng (Professional Development for Engineering Students) program began in January 2005 as a series of courses completed by students in the Engineering faculty during their co-op terms. On May 12, 2010, the Dean of Engineering recommended overhauling the program, and it was replaced by WatPD-Engineering in the Winter 2011 term. The following rules, based on students' PDEng credits as of January 2011, were applied for the transition: Students who passed both PDEng 15 and PDEng 25 were to select the other three courses in their professional development program from PDEng 35, 45, 55 and PD 3 to 7, depending on their availability. Students who passed PDEng 15 but not PDEng 25 were to select one of the following two paths: PDEng 25, followed by three courses from the courses listed in 1; or PD 20, PD 21 and then two courses from the PD elective suite. Students who had not passed any PDEng courses were to select one of two paths: PDEng 15, PDEng 25 followed by three courses as listed in 1; or PD 20, PD 21 followed by three courses as listed in 1. The WatPD-Engineering program is run by WatPD and is overseen by the WatPD-Engineering Curriculum Committee. Engineering students are required to complete five PD courses, all of which are completed during their work terms. Once they have finished their compulsory courses, they must choose three elective courses from those offered in subjects ranging from communications to professionalism and ethics. Notable Alumni Notable Faculty Notes References External links Faculty of Engineering website University of Waterloo Engineering Society The Iron Warrior, Newspaper of the Engineering Society Waterloo Cases in Design Engineering Departments School of Architecture Department of Chemical Engineering Department of Civil and Environmental Engineering Department of Electrical & Computer Engineering Department of Management Sciences Department of Mechanical and Mechatronics Engineering Department of Systems Design Engineering Conrad School of Entrepreneurship & Business Disciplines Geological Engineering Program Nanotechnology Engineering Software Engineering Faculty of Engineering 1957 establishments in Ontario Waterloo
511068
https://en.wikipedia.org/wiki/Self-serving%20bias
Self-serving bias
A self-serving bias is any cognitive or perceptual process that is distorted by the need to maintain and enhance self-esteem, or the tendency to perceive oneself in an overly favorable manner. It is the belief that individuals tend to ascribe success to their own abilities and efforts, but ascribe failure to external factors. When individuals reject the validity of negative feedback, focus on their strengths and achievements but overlook their faults and failures, or take more credit for their group's work than they give to other members, they are protecting their self-esteem from threat and injury. These cognitive and perceptual tendencies perpetuate illusions and error, but they also serve the self's need for esteem. For example, a student who attributes earning a good grade on an exam to their own intelligence and preparation but attributes earning a poor grade to the teacher's poor teaching ability or unfair test questions might be exhibiting the self-serving bias. Studies have shown that similar attributions are made in various situations, such as the workplace, interpersonal relationships, sports, and consumer decisions. Both motivational processes (i.e. self-enhancement, self-presentation) and cognitive processes (i.e. locus of control, self-esteem) influence the self-serving bias. There are both cross-cultural (i.e. individualistic and collectivistic culture differences) and special clinical population (i.e. depression) considerations within the bias. Much of the research on the self-serving bias has used participant self-reports of attribution based on experimental manipulation of task outcomes or in naturalistic situations. Some more modern research, however, has shifted focus to physiological manipulations, such as emotional inducement and neural activation, in an attempt to better understand the biological mechanisms that contribute to the self-serving bias. History The theory of self-serving biases first came to attention in the late 1960s to early 1970s. As research on this topic grew, some people had concerns about it. In 1971, a fear emerged that the hypothesis would prove to be incorrect, much like the perceptual defense hypothesis by Dixon. However, the theory now holds strong. When this theory was still being developed it was during the research of attribution bias. Fritz Heider found that in ambiguous situations people made attributions based on their own needs, in order to maintain a higher self-esteem and viewpoint. This specific tendency became what we now know as the self-serving bias. Miller and Ross conducted a study in 1975 that was one of the earliest to assess not only self-serving bias but also the attributions for successes and failures within this theory. They argued that the self-serving bias people create is rational and not dependent on one's need for self-esteem. This means that if the outcome of an event is consistent with the person's expectation, then they will attribute dispositional (internal) factors. On the other hand, if the outcome of the event doesn't match the person's expectations, they will make situational attributions by blaming their surroundings instead of themselves. Methods Laboratory testing Investigations of the self-serving bias in the laboratory differ depending on the experimental goals, but have basic fundamental aspects. Participants perform some task, often of intelligence, social sensitivity, teaching ability, or therapy skills. Participants may be asked to work alone, in pairs, or in groups. 
After task completion, participants are given randomized bogus feedback. Some studies employ emotion-induction mechanisms to investigate moderating effects on the self-serving bias. Finally, participants make attributions for the given outcomes. These attributions are assessed by the researchers to determine implications for the self-serving bias. Neural experimentation Some more modern testing employs neural imaging techniques to supplement the fundamental self-serving bias laboratory procedures. Neural correlates of the self-serving bias have been investigated by electroencephalography (EEG), as well as functional magnetic resonance imaging (fMRI). These procedures allow for insight into brain area activity during exhibition of a self-serving bias, as well as a mechanism to differentiate brain activity between healthy and clinical populations. Naturalistic investigation Retrospective performance outcomes can be used in investigation of the self-serving bias. An example of this is reported company performance followed up by self-report of outcome attributions. These self-report attributions can then be used to assess how successes and failures are viewed by company employees and executives. This method can be used for numerous outcome variables to determine the presence or absence of the self-serving bias. Factors and variables Motivation Two types of motivation affect the self-serving bias: self-enhancement and self-presentation. Self-enhancement aims to uphold self-worth; attributing successes internally and failures externally helps people in their self-enhancement. Self-presentation refers to the drive to convey a desired image to others and make self-serving attributions to manage impressions. For example, they claim personal responsibility for successes but not failures in an attempt to influence how others perceive them. Motivation works in conjunction with cognitive factors to produce personally satisfying and self-preserving attributions for outcomes. Locus of control Locus of control is one of the main influences of attribution style. Individuals with an internal locus of control believe that they have personal control over situations and that their actions matter. Those with an external locus of control believe that outside forces, chance, and luck determine situations and that their actions cannot change anything. Individuals with an external locus of control are more likely to exhibit a self-serving bias following failure than those with an internal locus of control. The difference in attribution style between individuals with internal and external loci of control, however, is not as marked in successful outcomes, as individuals with both types attribution style have less need to defend their self-images in success. Airplane pilots with an internal locus of control were likely to exhibit a self-serving bias in regard to their skill and levels of safety. Gender Studies have shown a slight discrepancy in males' and females' use of the self-serving bias. In self-report surveys investigating partner interactions of romantic couples, men tended to attribute negative interactions to their partners more than women did. This is evidence that men may exhibit the self-serving bias more than women, although the study did not look at positive interaction attributions. Age Older adults have been shown to make more internal causal attributions for negative outcomes. Differential attribution style at different ages indicates that the self-serving bias may be less likely in older adults. 
These older adults who attributed negative outcomes to more internal factors also rated themselves to be in poorer health, so negative emotional factors may confound the found age effects. Culture There is evidence of cross-cultural differences in the tendency to exhibit the self-serving bias, particularly when considering individualistic (Western) versus collectivistic (non-Western) societies. Family and group goals are important in collectivistic cultures. In contrast, the individual goals and identity focused on in individualistic societies increases the need for people within those cultures to guard and boost their personal self-esteem. While differences have been shown, conflicting literature has cited similarity in causal attributions across both individual and collective cultures, specifically between Belgium, West Germany, South Korea, and England. Naturalistic observation and information comparing United States and Japanese companies outcome attributions shows that the meaning and psychological function of internal versus external attributions are similar across cultures but that the difference is in the strategy of attribution. No consensus has been reached on cross-culture influences on the self-serving bias, though some systematic differences do seem to be present, especially between Western and non-Western cultures. For example, a study conducted by Kudo and Numuzaki named “Explicit and Direct Self-Serving Bias in Japan Reexamination of Self-Serving Bias for Success and Failure” showed that the participants in the success condition provided more internal attributions than the participants in the failure condition even though past research has constantly shown that Japanese people don’t tend to show a self-serving bias. Also, another study conducted by Hugten and Witteloostuijn named “The foreign language effect on the self-serving bias: A field experiment in the high school classroom” displayed the results that student participants between the ages of 13-15 who mainly process feedback in a non-native English tend to show more self-serving bias than those who process feedback in their native Dutch language. Role Investigations of self-serving bias distinguish between the role of participants as the actor of a task or as the observer of someone else performing a task, relating closely to actor–observer asymmetry. Actors of a task exhibit the self-serving bias in their attributions to their own success or failure feedback, whereas observers do not make the same attributions about another person's task outcome. Observers tend to be more objective in their tendency to ascribe internal or external attributions to other people's outcomes. This may be due to the fact that the self-image of actors is challenged directly and therefore actors feel the need to protect their own self-image, but do not feel the same inclination to do so when the self-image of others is threatened. Self-esteem and emotion Emotions can influence feelings of self-esteem, which in turn alters the need to protect one's self-identity. Individuals with higher self-esteem are thought to have more to protect in their self-image, and therefore exhibit the self-serving bias more often than those individuals with lower self-esteem. In a study, participants who were induced to feel the emotions of guilt or revulsion were less likely to make self-serving attributions for success and less likely to make self-protecting attributions for failure. 
Coleman concluded that the two emotions of guilt and revulsion lead to a drop in self-esteem, and thus a reduction in the use of the self-serving bias. Self-awareness and probability of improvement The relationship between individuals' awareness levels and perceived probability of improvement also influences the activation of the self-serving bias. Individuals with high self-awareness attribute failure internally when they perceive a high probability of improvement. However, they will engage in self-serving bias, attributing failure externally when they perceive a low probability of improvement. Individuals low in self-awareness will attribute failure externally regardless of their perceived probability of improvement. Real-world implications Interpersonal relations Whether the self-serving bias is exhibited may depend on interpersonal closeness, relationships in a social context. When working in pairs to complete interdependent outcome tasks, relationally close pairs did not show a self-serving bias while relationally distant pairs did. A study on self-serving bias in relational context suggests this is due to the idea that close relationships place limits on an individual's self enhancement tendencies. The individual becomes more modest, when in a close relationship, and is less likely to use that relationship for his or her own benefit. Understanding why partners refrain from the self-serving bias is still in question but can partially be explained by favorable impression those in close relationships have for one another. A similar result was shown when looking at pairs of friends and strangers. Pairs performed an interdependent outcomes creativity test and were then given a bogus pipeline for a success or failure outcome. Strangers exhibited the self-serving bias in responsibility attributions, but friends tended to make joint attributions for both success and failure. Researchers have taken this as evidence for "boundaries on self-enhancement". In another study conducted in 2016, the implicit and explicit evaluation of 108 partners and exes as parents who were either married, separated or divorced was researched to investigate if the self-serving bias influenced them. Using two Implicit Association tests, one measuring Self vs Partner and the other measuring Self vs Ex, results showed that most of the time, men and women consider their exes or partners as less suitable parents which demonstrates the self-serving bias as they “ascribe more easily successes to internal factors” and failures towards their partners. Also, another result demonstrated that “women revealed a higher self-serving bias than men on both implicit and explicit measures, but only toward exes and not toward current partners” because they consider their exes as an out-group and their partners as an in-group. Workplace The self-serving bias can be found in several aspects of the workplace. Research shows that the self-serving bias is used to explain employment: being hired for a job is attributed to personal factors, whereas failure to obtain a job is attributed to external factors. Experimental investigation of the explanations for unemployment through asking participants to imagine particular job opportunities and likelihood of getting those jobs, however, did not show such a self-serving bias. Researchers claim that this may be due to the actor-observer role differences in the self-serving bias. 
Within the workplace, victims of serious occupational accidents tend to attribute their accidents to external factors, whereas their coworkers and management tend to attribute the accidents to the victims' own actions. Interpersonal dynamics of the self-serving bias in the previous section have implications for attributions for outcomes in the workplace. In an investigation of group dynamics, virtual group members had to complete a decision-making task via computer-mediated communication. Results showed that the self-serving bias was present in negative outcomes, and that greater interpersonal distance from group members increased blame for negative outcomes. Narcissism Studies revealed that narcissism was related to enhanced self-ratings of leadership, even when controlling for the Big Five traits. Another study showed that narcissism was related to enhanced leadership self-perceptions; indeed, whereas narcissism was significantly positively correlated with self-ratings of leadership, it was significantly negatively related to other ratings of leadership. This study also revealed that narcissism was related to more favorable self-ratings of workplace deviance and contextual performance compared to other (supervisor) ratings. Because narcissism broadly reflects strong self-admiration and behavioral tendencies which may not be viewed positively by others it is possible that narcissism influences self- and other perceptions differently, and insight into this possibility may be important given that differences in perceptions are the foundation for certain types of performance management and development practices. Classroom Studies in both lab and field settings have shown both teachers and students hold self-serving biases in regard to outcomes in the classroom. These attributions hold the potential for conflict between teacher and student since neither will take personal responsibility, as the student may blame the teacher while the teacher holds the student accountable. However, both teachers and students also reported being aware of the others' bias, which indicated there may be a feasible conflict resolution mechanism. Computer technology Computers have become an integral part of everyday life, and research has shown that individuals may subconsciously treat interactions with computers as they would treat a social situation. This finding combined with what is known about the self-serving bias in interpersonal relations indicates that consumers that use a computer to buy products will take personal credit for successful purchases but blame the computer for negative purchase experiences. It was also found, however, that consumers are more willing to attribute successful purchases to the computer and not ascribe blame to the computer for failed purchases if they have "intimate self-disclosure" with the computer, which Moon describes as revelation of personal information that makes the discloser feel vulnerable. Another reason is that people are so used to bad functionality, counterintuitive features, bugs, and sudden crashes of most contemporary software applications that they tend not to complain about computer problems. Instead, they believe it is their personal responsibility to predict possible issues and to find solutions to computer problems. This unique phenomenon has been recently observed in several human-computer interaction investigations. Sports Individuals have been shown to exhibit the self-serving bias in regard to sports outcomes. 
In one study, collegiate wrestlers at the Division I level made self-reported attributions of the outcomes in their preseason matches. Winners were more likely than losers to attribute the match outcome to internal causes. The researchers note that wrestling is a one-on-one sport and has clearly defined winners. Therefore, other sports of this nature may show similar results, but team sports or sports with more ambiguous outcomes may not show the same pattern for the self-serving bias. Another study, conducted in 1987, compared the self-serving attributions made by individuals who played individual sports with those made by athletes who played team sports. The study gathered 549 statements that were coded for attributional content from lone performers such as tennis and golf players and team performers such as baseball, football or basketball players. The results showed that "lone performers made more self-serving attributions than team performers" because their performance outcomes have a greater effect on their individual esteem, unlike group outcomes, which "must often be distributed among all participants." To examine self-serving attributions in team sports further, a study conducted in 1980 coded "newspaper accounts of baseball and football for attributional content." The coding of the newspaper accounts showed a "tendency to make internal attributions for success and external attributions for failure", which supports the self-serving bias: about 75% of the attributions from winning teams were internal, while about 55% of attributions from losing teams were internal. Depression Clinically depressed patients tend to show less of a self-serving bias than individuals in the general population. In a study exploring the effects of mood on the self-serving bias, the moods of participants were manipulated to be either positive or negative. Negative mood participants were less likely than positive mood participants to attribute successful outcomes to the self, instead attributing success to external factors. It has been suggested that the negative mood in depressed individuals, as well as their self-focused attention, explains why clinically depressed populations are less likely to exhibit the self-serving bias than normal populations. Impacts in negotiation Kaplan et al. define the self-serving bias in negotiation as a distortion by which an individual's preferences shape his or her beliefs in an overly optimistic direction. They also argue that the bias can be termed a 'self-defeating' phenomenon, since it appears in people who do not give up. Furthermore, Kaplan et al. identify wishful thinking as a specific type of self-serving bias; it typically surfaces when an unpleasant surprise arises in a negotiation, particularly when the opponent has prepared carefully. Another example is a well-known phenomenon from the courtroom that is also commonly depicted in legal television series. According to Kaplan et al.'s findings, the self-serving bias plays a major role in the negotiation context: negotiations have an emotional component, and the bias and emotions appear to reflect one another. Neural research outcomes Functional magnetic resonance imaging (fMRI) The self-serving bias has been investigated by the fMRI method in normal populations. Attributions using the bias show activation in the dorsal striatum, which plays a role in motivated behavior, as well as in the dorsal anterior cingulate. 
In clinically depressed patients, there appear to be weaker connections between the dorsomedial prefrontal cortex and limbic areas of the brain, so this connection may play a role in self-serving attributions. Electroencephalography (EEG) In a study employing the EEG method of examining brain activation, participants were given bogus outcome feedback that indicated either success or failure and told to make attributions. Different from non-self-serving responses, self-serving responses did not show increased dorsomedial frontal cortex activity preceding attribution decisions. Such lack of brain activity implies that self-control, which is controlled by the dorsomedial frontal cortex, is not as prominent in self-serving attributions as non-self-serving ones. See also References Cognitive biases Self
20570373
https://en.wikipedia.org/wiki/University%20of%20Computer%20Studies%2C%20Mandalay
University of Computer Studies, Mandalay
The University of Computer Studies, Mandalay (UCSM) (, ), located in Mandalay, is a Myanmar IT and computer science university. UCSM offers bachelor's, master's and doctoral degree programs in computer science and technology. The majority of its student body is from Upper Myanmar. Administered by the Ministry of Education, UCSM is the official university for all the Government Computer Colleges in Upper Myanmar, whose students may continue their advanced studies at UCSM. UCSM is also a Cisco networking academy. History UCSM was established in May 1997 and held its first classes in September 1997. Under Myanmar's system of specialized universities, UCSM was the first specialized university in Upper Myanmar for computer science and technology. Degree programmes UCSM's main offerings are five-year bachelor's programs in computer science and computer technology. It also offers master's degree programs in applied science and in information science. The areas of study include artificial intelligence, bioinformatics, computer architecture, control applications, database systems, digital signal processing, image processing, Internet technologies, network security, operating systems, parallel and distributed computing, and software engineering. Academic departments UCSM consists of four faculties and three academic departments. Faculty of Computer Science Faculty of Computer Systems and Technologies Faculty of Information Science Faculty of Computing Department of Information Technology Support and Maintenance Department of Languages Department of Natural Science Affiliations Senior universities Keio University of Japan University of Computer Studies, Yangon (UCSY) Colleges The following Upper Myanmar-based Government Computer Colleges are officially affiliated with UCSM. Their qualified graduates can continue their advanced studies at UCSM. Computer University, Myitkyina Computer University, Bhamo Computer University, Kalay Computer University, Monywa Computer University, Mandalay Computer University, Pakokku Computer University, Lashio Computer University, Kengtung Computer University, Taunggyi Computer University, Panglong Computer University, Meiktila Computer University, Magway References Universities and colleges in Mandalay Technological universities in Myanmar Educational institutions established in 1997 1997 establishments in Myanmar
18075460
https://en.wikipedia.org/wiki/Operating%20Systems%3A%20Design%20and%20Implementation
Operating Systems: Design and Implementation
Operating Systems: Design and Implementation is a computer science textbook written by Andrew S. Tanenbaum, with help from Albert S. Woodhull. The book describes operating system design in detail and includes Tanenbaum's MINIX, a free Unix-like operating system designed for teaching purposes. It was first published by Prentice Hall in 1987. The source code for MINIX was included as part of the original 719 pages of text. Later versions of the book's three editions also included MINIX on loadable disks. See also List of important publications in computer science History of Linux Tanenbaum–Torvalds debate References 1987 non-fiction books Books on operating systems Engineering textbooks MINIX Prentice Hall books
25982350
https://en.wikipedia.org/wiki/Avizo%20%28software%29
Avizo (software)
Avizo (pronounce: ‘a-VEE-zo’) is a general-purpose commercial software application for scientific and industrial data visualization and analysis. Avizo is developed by Thermo Fisher Scientific and was originally designed and developed by the Visualization and Data Analysis Group at Zuse Institute Berlin (ZIB) under the name Amira. Avizo was commercially released in November 2007. For the history of its development, see the Wikipedia article about Amira. Overview Avizo is a software application which enables users to perform interactive visualization and computation on 3D data sets. The Avizo interface is modelled on the visual programming. Users manipulate data and module components, organized in an interactive graph representation (called Pool), or in a Tree view. Data and modules can be interactively connected together, and controlled with several parameters, creating a visual processing network whose output is displayed in a 3D viewer. With this interface, complex data can be interactively explored and analyzed by applying a controlled sequence of computation and display processes resulting in a meaningful visual representation and associated derived data. Application areas Avizo has been designed to support different types of applications and workflows from 2D and 3D image data processing to simulations. It is a versatile and customizable visualization tool used in many fields: Scientific visualization Materials Research Tomography, Microscopy, etc. Nondestructive testing, Industrial Inspection, and Visual Inspection Computer-aided Engineering and simulation data post-processing Porous medium analysis Civil Engineering Seismic Exploration, Reservoir Engineering, Microseismic Monitoring, Borehole Imaging Geology, Digital Rock Physics (DRP), Earth Sciences Archaeology Food technology and agricultural science Physics, Chemistry Climatology, Oceanography, Environmental Studies Astrophysics Features Data import: 2D and 3D image stack and volume data: from microscopes (electron, optical), X-ray tomography (CT, micro-/nano-CT, synchrotron), neutron tomography and other acquisition devices (MRI, radiography, GPR) Geometric models (such as point sets, line sets, surfaces, grids) Numerical simulation data (such as Computational fluid dynamics or Finite element analysis data) Molecular data Time series and animations Seismic data Well logs 4D Multivariate Climate Models 2D/3D data visualization: Volume rendering Digital Volume Correlation Visualization of sections, through various slicing and clipping methods Isosurface rendering Polygonal meshes Scalar fields, Vector fields, Tensor representations, Flow visualization (Illuminated Streamlines, Stream Ribbons) Image processing: 2D/3D Alignment of image slices, Image registration Image filtering Mathematical Morphology (erode, dilate, open, close, tophat) Watershed Transform, Distance Transform Image segmentation 3D models reconstruction: Polygonal surface generation from segmented objects Generation of tetrahedral grids Surface reconstruction from point clouds Skeletonization (reconstruction of dendritic, porous or fracture network)</ref> Surface model simplification Quantification and analysis: Measurements and statistics Analysis spreadsheet and charting Material properties computation, based on 3D images: Absolute permeability Thermal conductivity Molecular diffusivity Electrical resistivity/formation factor 3D image-based meshing for CFD and FEA: From 3D imaging modalities (CT, micro-CT, MRI, etc.) 
Surface and volume mesh generation Export to FEA and CFD solvers for simulation Post-processing for simulation analysis Presentation, automation: MovieMaker, Multiscreen, Video wall, collaboration, and VR support TCL Scripting, C++ extension API Avizo is based on the Open Inventor 3D graphics toolkit (FEI Visualization Sciences Group). External links Scientific Publications Official Avizo forum Avizo videos References 3D graphics software 3D imaging Computational fluid dynamics Computer vision software Data visualization software Earth sciences graphics software Graphics software Image processing software Image segmentation Mesh generators Molecular dynamics software Molecular modelling software Nondestructive testing Physics software Science software Simulation software Software that uses Qt Virtual reality
284006
https://en.wikipedia.org/wiki/Tata%20Consultancy%20Services
Tata Consultancy Services
Tata Consultancy Services (TCS) is an Indian multinational information technology (IT) services and consulting company headquartered in Mumbai, Maharashtra, India with its largest campus located in Chennai, Tamil Nadu, India. As of February 2021, TCS is the largest IT services company in the world by market capitalisation ($200 billion). It is a subsidiary of the Tata Group and operates in 149 locations across 46 countries. TCS is the second largest Indian company by market capitalisation and is among the most valuable IT services brands worldwide. In 2015, TCS was ranked 64th overall in the Forbes World's Most Innovative Companies ranking, making it both the highest-ranked IT services company and the top Indian company. As of 2018, it is ranked eleventh on the Fortune India 500 list. In April 2018, TCS became the first Indian IT company to reach $100 billion in market capitalisation and second Indian company ever (after Reliance Industries achieved it in 2007) after its market capitalisation stood at on the Bombay Stock Exchange. In 20162017, parent company Tata Sons owned 72.05% of TCS and more than 70% of Tata Sons' dividends were generated by TCS. In March 2018, Tata Sons sold stocks of TCS worth $1.25 billion in a bulk deal. As of 15 September 2021, TCS has recorded a market capitalisation of US$200 billion, making it the first Indian IT firm to do so. History 1968–2005 Tata Consultancy Services Limited, initially started as "Tata Computer Systems" was founded in 1968 by division of Tata Sons Limited. Its early contracts included punched card services to sister company TISCO (now Tata Steel), working on an Inter-Branch Reconciliation System for the Central Bank of India, and providing bureau services to Unit Trust of India. In 1975, TCS delivered an electronic depository and trading system called SECOM for Swiss company ; it also developed System X for the Canadian Depository System and automated the Johannesburg Stock Exchange. TCS associated with a Swiss partner, TKS Teknosoft, which it later acquired. In 1980, TCS established India's first dedicated software research and development centre, the Tata Research Development and Design Centre (TRDDC) in Pune. In 1981, it established India's first client-dedicated offshore development centre, set up for clients Tandem. TCS later (1993) partnered with Canada-based software factory Integrity Software Corp, which TCS later acquired. In anticipation of the Y2K bug and the launch of a unified European currency (Euro), Tata Consultancy Services created the factory model for Y2K conversion and developed software tools which automated the conversion process and enabled third-party developer and client implementation. Towards the end of 1999, TCS decided to offer Decision Support System (DSS) in the domestic market under its Corporate Vice President and Transformation Head Subbu Iyer. 2005 to 2021 On 25 August 2004, TCS became a publicly listed company. In 2005, TCS became the first India based IT services company to enter the bioinformatics market. In 2006, it designed an ERP system for the Indian Railway Catering and Tourism Corporation. By 2008, its e-business activities were generating over US$500 million in annual revenues. TCS entered the small and medium enterprises market for the first time in 2011, with cloud-based offerings. On the last trading day of 2011, it overtook RIL to achieve the highest market capitalisation of any India-based company. 
In the 2011–12 fiscal year, TCS achieved annual revenues of over US$10 billion for the first time. In May 2013, TCS was awarded a six-year contract worth over to provide services to the Indian Department of Posts. In 2013, the firm moved from the 13th position to 10th position in the League of top 10 global IT services companies and in July 2014, it became the first Indian company with over market capitalisation. In January 2015, TCS ended RIL's 23-year run as India's most profitable firm. In January 2017, the company announced a partnership with Aurus, Inc., a payments technology company, to deliver payment solutions for retailers using TCS OmniStore, a first-of-its-kind unified store commerce platform. In the same year, TCS China was set up as a joint venture with the Chinese government. TCS announced its FY19 Q3 results, posting a 24 per cent year-on-year (YoY) rise in profit at . The stock plunged 2.5 per cent intra-day as brokerages cut their price targets. TCS received four Stevie Awards at the 2019 American Business Awards. On 8 October 2020, TCS surpassed Accenture in market capitalisation to become the world's most-valuable IT company with a market cap of $144.73 billion. On 25 January 2021, TCS again briefly surpassed Accenture in market capitalisation to become the world's most-valuable IT company with a market cap of $170 billion. The same day, TCS became India's most valuable company, surpassing Reliance Industries with a market cap of . In 2021, TCS was also one of the largest job providers in India, hiring 43,000 individuals in H1 FY22. Acquisitions Operations As of March 2021, TCS had a total of 50 subsidiary companies and operated in 46 countries. Locations TCS has operations in the following locations: India: Ahmedabad, Bengaluru, Baroda, Bhubaneswar, Chennai, Coimbatore, Delhi, Gandhinagar, Goa, Gurugram, Guwahati, Hyderabad, Bhopal, Indore, Jamshedpur, Kalyan, Thane, Kochi, Kolkata, Lucknow, New Delhi, Mumbai, Nagpur, Noida, Nashik, Patna, Pune, Thiruvananthapuram, and Varanasi. Asia (excluding India): Bahrain, China, Israel, UAE, Hong Kong, Indonesia, Japan, Malaysia, the Philippines, Saudi Arabia, Singapore, South Korea, Taiwan, Thailand, and Qatar. Oceania: Australia Africa: South Africa, Morocco (closed) Europe: Belgium, Bulgaria, Denmark, Finland, France, Germany, Hungary, Iceland, Republic of Ireland, Italy, Luxembourg, the Netherlands, Norway, Portugal, Spain, Sweden, Switzerland, and the United Kingdom. North America: Canada, Mexico, and the United States. South America: Argentina, Brazil, Chile, Colombia, Ecuador, Peru, and Uruguay. TCS BPS TCS BPS (Business Process Services) is the third-largest India-based IT outsourcing company (after Capgemini). The BPS division had revenues of US$1.44 billion in FY 2012–13, which was 12.5% of the total revenue of TCS. TCS BPS has more than 45,000 employees who serve over 225 million customers across 11 countries. The rate of attrition in the BPS division during the financial year 2012–13 was 19.5%. In 2006, TCS won a $35 million contract from Eli Lilly and Company for providing clinical data management, biostatistics, and medical writing services. In 2007, it won a major multi-year deal from Swiss pharmaceutical major Hoffmann-La Roche to provide data management, biostatistics, clinical programming, and drug safety support to Hoffmann-La Roche's global pharmaceutical development efforts. The firm has also opened a business process outsourcing facility in the Philippines. 
Tata Research Development and Design Centre TCS established India's first software research centre, the Tata Research Development and Design Centre, in Pune, in 1981. TRDDC undertakes research in software engineering, process engineering, and systems research. Researchers at TRDDC also developed MasterCraft (now a suite of digitisation and optimisation tools), a Model-driven Development software that can automatically create codes based on a model of a software, and rewrite the codes based on the users' needs. The research at TRDDC has also resulted in the development of Sujal, a low-cost water purifier, that can be manufactured using locally available resources. TCS deployed thousands of these filters in the Indian Ocean tsunami disaster of 2004 as part of its relief activities. This product has been marketed in India as Tata Swach, a low-cost water purifier. Innovation labs In 2007, TCS launched its co-innovation network, a network of innovation labs, start-up alliances, university research departments, and venture capitalists. In addition, TCS has 19 innovation labs based in three countries. TCS' partners include Collabnet and Cassatt, and academic institutions such as IITs, Stanford, MIT, and Carnegie Mellon, and venture capitalists like Sequoia and Kleiner Perkins. Employees TCS is one of the largest private-sector employers in India, and the fourth-largest employer among listed Indian companies (after Indian Railways, Indian Army, and India Post). TCS has crossed more than 500,000 employees as of 8 July 2021. The number of non-Indian nationals was 21,282 as of 31 March 2013 (7.7%). The employee costs for the FY 201213 were US$4.38 billion, which was approx. 38% of the total revenue of the company for that period. In the fiscal year 201213, TCS recruited a total of 69,728 new staff, of whom 59,276 were based in India and 10,452 were based in the rest of the world. In the same period, the rate of attrition was 10.6%. The average age of a TCS employee is 28 years. The employee utilisation rate, excluding trainees, for the FY 201213 was 82%. TCS was the fifth-largest United States visa recipient in 2008 (after Infosys, CTS, Wipro, and Mahindra Satyam). In 2012, the Tata Group companies, including TCS, were the second-largest recipient of H-1B visas. As of Jan 2020, TCS has over 4,00,000+ employees. It is the world's third-largest IT employer behind IBM and HP. Subramaniam Ramadorai, the former CEO of TCS, has authored an autobiographical book about his experiences in the company called The TCS Story...and Beyond. Financials Sponsorships TCS is the title sponsor for Toronto Marathon, London Marathon, Amsterdam Marathon, Mumbai Marathon, and New York City Marathon and one of the sponsors of City2Surf, Australian Running Festival, Berlin Marathon, Chicago Marathon, and Boston Marathon. In India, it is the title sponsor of World 10K held in Bangalore every year. TCS is a sponsor of the Indian Premier League team Rajasthan Royals since 2009. Besides, TCS provides Rajasthan Royals with technology to help in the analysis of player performance, simulation, and use of RFID tags for tracking the players’ fitness levels and for security purposes in the stadiums. TCS sponsors an annual IT quiz for high school students called TCS IT Wiz. The TCS IT Wiz is the largest quiz in India, attracting students who are studying between class 8 and class 12. This quiz is hosted by Giri Balasubramanium. TCS also currently sponsors Jaguar Racing in Formula E. 
Controversies Class action lawsuit On 14 February 2006, US law firm Lieff Cabraser filed a nationwide class action lawsuit against Tata. In July 2013, judge Claudia Wilken of the US District Court, Northern District of California in Oakland, California, granted final approval to the settlement of the lawsuit on behalf of all non-US citizens employed by TCS within the state of California from 14 February 2002 to 30 June 2005. The workers claimed that they were forced to sign over their federal and state tax refunds to their employer, and that their Indian salaries were wrongfully deducted from their US pay. On 22 February 2013, the company agreed to settle this class-action suit, which had been filed in a United States court over payments to employees on deputation. A US grand jury slapped two companies of India's Tata Group, Tata Consultancy Services and Tata America International Corp, with a US$940 million fine in a trade secret lawsuit filed against them. Epic Systems had accused TCS and Tata America International Corp, in a lawsuit filed in October 2014 in US District Court in Madison, which was amended in January and December 2015, of "brazenly stealing the trade secrets, confidential information, documents and data" belonging to Epic. This Epic Systems lawsuit against TCS was closed and settled with an undisclosed out-of-court agreement that was completed in January 2018. A federal class-action lawsuit accusing Tata Consultancy Services Ltd. of bias against US-born workers was heard by the court during a jury trial in November 2018. The US federal court case in the Northern District of California (Buchanan v. Tata Consultancy Servs., Ltd., 4:15-cv-01696) is one of seven asserting that several large offshore and national IT staffing companies with operations in the US prefer foreign workers from South Asia over qualified Americans. All the companies being sued are heavy users of H-1B guestworker visas, which go to skilled professionals in "specialty occupations". Case 15-cv-01696-YGR in the US Federal District Court consolidated these lawsuits against TCS and several other IT staffing companies, and the court allowed the consolidated case to proceed (though not yet certified as a class action) to a November 2018 jury trial. Accusations of discrimination In May 2013, the Canadian Broadcasting Corporation (CBC), in its extensive coverage of the hiring of temporary foreign workers in Canada and the unemployment issues faced by Canadians, reported that TCS rarely hires skilled, experienced Canadians at its Toronto offices while advertising open positions in Canada. TCS responded that the company hired more than 125 Canadian workers in 2013, who make up less than 1.2% of the 10,452 workers the company has outside of India. In April 2015, a class-action lawsuit against TCS was filed in a San Francisco federal court by a US information technology worker and ex-employee, who accused the company of discriminating against American workers by favouring South Asians in hiring and promotion. The lawsuit claimed that South Asians comprise 95% of the company's 14,000-person US workforce and that TCS engaged in discriminatory practices by sourcing most of its workforce through the H-1B visa programme, by focusing its US-based hiring disproportionately on South Asians, and by favouring South Asian employees in its human resources practices. 
In response, TCS refuted the plaintiff's claims, assuring that it is an equal opportunity employer and bases its employment practices on non-discriminatory reasons. A spokesperson said that in 2014 alone the company had recruited over 2,600 US hires. That same month, electric utility Southern California Edison (SCE) laid off about 400 IT employees, with an additional 100 IT workers leaving voluntarily. Meanwhile, the utility company hired immigrants from TCS and competing company Infosys. In October 2015, the United States Department of Labor concluded Infosys did not abuse the H1-B programme in the dealings with SCE, while TCS's case remained open. See also List of IT consulting firms List of public listed software companies of India Software industry in Telangana Information technology in India List of Indian IT companies References External links BSE SENSEX NIFTY 50 Companies based in Mumbai Technology companies established in 1968 Information technology consulting firms of India International information technology consulting firms MES software Outsourcing companies Software companies of India Information technology companies of Bhubaneswar Information technology companies of India 1968 establishments in Maharashtra Consulting firms established in 1968 Software companies based in Mumbai Companies listed on the National Stock Exchange of India Companies listed on the Bombay Stock Exchange
52095526
https://en.wikipedia.org/wiki/Vibrante
Vibrante
Vibrante is a Linux distribution created by Nvidia and used for at least its Drive PX 2 platform series. The name is listed as a registered trademark of NVIDIA. The name first appeared around 2010, when it referred to a general-purpose multimedia engine covering audio, video and 3D building display, developed in close cooperation with Audi. At an NVIDIA TechDay in December 2015 the distribution was reported with version number 3.0 for the Jetson TK1 Pro (also known as X3) and Drive CX, and with version 4.0 for the Drive CX and PX platforms. The Jetson TK1 is mentioned as running the Linux4Tegra package instead. Companies such as Toradex have built and published sample application code on top of it. Abbreviations of Vibrante Linux such as V3L, V3Le or V4L (with the number indicating the version), along with terms like L4T (Linux4Tegra, a presumed sibling), appear assigned to particular devices in some history and release documents, for example those for NVIDIA VisionWorks. NVIDIA's VisionWorks Toolkit can run on top of Vibrante. Vibrante is also one of the targets that OpenCV4Tegra (OpenCV for Tegra, an OpenCV derivative with Tegra-specific optimizations) can run on. In addition, the NVIDIA PerfKit package supports Vibrante. Spin Offs Vibrante Linux appears to have been used to derive a QNX-based version, probably using the QNX kernel as the base system along with several higher-level software components (e.g. from GNU and other sources) already present in the Vibrante Linux environment. In particular, the package qnx-V3Q-23.16.01 is reported to run on the Nvidia Jetson TK1 Pro board. Usage At CES 2010 it was announced that NVIDIA would be introducing its GPUs to all 2010 Audi vehicles to power the third-generation MMI. 3G MMI used Vibrante as its multimedia engine to deliver in-car entertainment. References Embedded Linux Nvidia software
40505955
https://en.wikipedia.org/wiki/Viral%20phenomenon
Viral phenomenon
Viral phenomena are objects or patterns that are able to replicate themselves or convert other objects into copies of themselves when these objects are exposed to them. Analogous to the way in which viruses propagate, the term viral pertains to a video, image, or written content spreading to numerous online users within a short time period. This concept has become a common way to describe how thoughts, information, and trends move into and through a human population. The popularity of viral media has been fueled by the rapid rise of social network sites, wherein audiences—who are metaphorically described as experiencing "infection" and "contamination"—play as passive carriers rather than an active role to 'spread' content, making such content "go viral". The term viral media differs from spreadable media as the latter refers to the potential of content to become viral. Memes are one known example of informational viral patterns. History Terminology Meme The word meme was coined by Richard Dawkins in his 1976 book The Selfish Gene as an attempt to explain memetics; or, how ideas replicate, mutate, and evolve. When asked to assess this comparison, Lauren Ancel Meyers, a biology professor at the University of Texas, stated that "memes spread through online social networks similarly to the way diseases do through offline populations." This dispersion of cultural movements is shown through the spread of memes online, especially when seemingly innocuous or trivial trends spread and die in rapid fashion. Viral The term viral pertains to a video, image, or written content spreading to numerous online users within a short time period. If something goes viral, many people discuss it. Accordingly, Tony D. Sampson defines viral phenomena as spreadable accumulations of events, objects, and affects that are overall content built up by popular discourses surrounding network culture. There is also a relationship to the biological notion of disease spread and epidemiology. In this context, "going viral" is similar to an epidemic spread, which occurs if more than one person is infected by a disease for every person infected. Thus, if a piece of content is shared with more than one person every time it is seen, then this will result in viral growth. In Understanding Media (1964), philosopher Marshall McLuhan describes photography in particular, and technology in general, as having a potentially "virulent nature." In Jean Baudrillard's 1981 treatise Simulacra and Simulation, the philosopher describes An American Family, arguably the first "reality" television series, as a marker of a new age in which the medium of television has a "viral, endemic, chronic, alarming presence." Another formulation of the 'viral' concept includes the term media virus, or viral media, coined by Douglas Rushkoff, who defines it as a type of Trojan horse: "People are duped into passing a hidden agenda while circulating compelling content." Mosotho South-African media theorist Thomas Mofolo uses Rushkoff's idea to define viral as a type of virtual collective consciousness that primarily manifests via digital media networks and evolves into offline actions to produce a new social reality. Mofolo bases this definition on a study about how internet users involved in the Tunisian Arab Spring perceived the value of Facebook towards their revolution. Mofolo's understanding of the viral was first developed in a study on Global Citizen's #TogetherAtHome campaign and used to formulate a new theoretical framework called Hivemind Impact. 
Hivemind impact is a specific type of virality that is simulated via digital media networks with the goal of harnessing the virtual collective consciousness to take action on a social issue. For Mofolo, the viral eventually evolves into McLuhan's 'global village' when the virtual collective consciousness reaches a point of noogenesis that then becomes the noosphere. Content sharing Early history Before writing and while most people were illiterate, the dominant means of spreading memes was oral culture like folk tales, folk songs, and oral poetry, which mutated over time as each retelling presented an opportunity for change. The printing press provided an easy way to copy written texts instead of handwritten manuscripts. In particular, pamphlets could be published in only a day or two, unlike books which took longer. For example, Martin Luther's Ninety-five Theses took only two months to spread throughout Europe. A study of United States newspapers in the 1800s found human-interest, "news you can use" stories and list-focused articles circulated nationally as local papers mailed copies to each other and selected content for reprinting. Chain letters spread by postal mail throughout the 1900s. Urban legends also began as word-of-mouth memes. Like hoaxes, they are examples of falsehoods that people swallow, and, like them, often achieve broad public notoriety. CompuServe Beyond vocal sharing, the 20th century made huge strides in the World Wide Web and the ability to content share. In 1979, dial-up internet service provided by the company CompuServ was a key player in online communications and how information began spreading beyond the print. Those with access to a computer in the earliest of stages could not comprehend the full effect that public access to the internet could or would create. It is hard to remember the times of newspapers being delivered to households across the country in order to receive their news for the day, and it was when The Columbus Dispatch out of Columbus, Ohio broke barriers when it was first to publish in online format. The success that was predicted by CompuServe and the Associated Press led to some of the largest newspapers to become part of the movement to publish the news via online format. Content sharing in the journalism world brings new advances to viral aspects of how news is spread in a matter of seconds. Internet memes The creation of the Internet enabled users to select and share content with each other electronically, providing new, faster, and more decentralized controlled channels for spreading memes. Email forwards are essentially text memes, often including jokes, hoaxes, email scams, written versions of urban legends, political messages, and digital chain letters; if widely forwarded they might be called 'viral emails'. User-friendly consumer photo editing tools like Photoshop and image-editing websites have facilitated the creation of the genre of the image macro, where a popular image is overlaid with different humorous text phrases. These memes are typically created with Impact font. The growth of video-sharing websites like YouTube made viral videos possible. It is sometimes difficult to predict which images and videos will "go viral"; sometimes the creation of a new Internet celebrity is a sudden surprise. One of the first documented viral videos is "Numa Numa", a webcam video of then-19-year-old Gary Brolsma lip-syncing and dancing to the Romanian pop song "Dragostea Din Tei". 
The sharing of such text, images, videos, and links has been greatly facilitated by social media platforms such as Facebook and Twitter. Other mimicry memes carried by Internet media include hashtags, language variations such as intentional misspellings, and fads such as planking. The popularity and widespread distribution of Internet memes have attracted the attention of advertisers, creating the field of viral marketing. A person, group, or company seeking fast, cheap publicity might create a hashtag, image, or video designed to go viral; many such attempts are unsuccessful, but the few posts that do "go viral" generate a great deal of publicity.

Types of viral phenomena

Viral videos
Viral videos are among the most common types of viral phenomena. A viral video is any clip of animation or film that spreads rapidly through online sharing. Viral videos can receive millions of views as they are shared on social media sites, reposted to blogs, sent in emails, and so on. When a video goes viral it becomes very popular: its exposure on the Internet grows exponentially as more and more people discover it and share it with others. An article or an image can also become viral. The classification probably owes more to intensive activity and a rapid rate of growth among users within a relatively short period than to the raw number of hits something receives. Most viral videos contain humor and fall into broad categories:
Unintentional: Videos that the creators never intended to go viral. These videos may have been posted by the creator or shared with friends, who then spread the content.
Humorous: Videos that have been created specifically to entertain people. If a video is funny enough, it will spread.
Promotional: Videos designed to go viral with a marketing message to raise brand awareness; these fall under viral marketing practices. An example is the Extra Gum commercial that circulated widely online.
Charity: Videos created and spread in order to collect donations. For instance, the Ice Bucket Challenge was a hit on social networks in the summer of 2014.
Art performances: Videos created by artists to raise awareness of an issue, express ideas, and exercise creative freedom.
Political: Viral videos are powerful tools for politicians to boost their popularity. Barack Obama's campaign launched the "Yes We Can" slogan as a viral video on YouTube: "The Obama campaign posted almost 800 videos on YouTube, and the McCain campaign posted just over 100. The pro-Obama video 'Yes we can' went viral after being uploaded to YouTube in February 2008." Other political viral videos served not as promotion but as agents of support and unification. Social media was actively employed in the Arab Spring: "The Tunisian uprising had special resonance in Egypt because it was prompted by incidents of police corruption and viral social media condemnation of them."

YouTube effect
With the creation of YouTube, a video-sharing website, there has been a huge surge in the number of viral videos on the Internet. This is primarily due to the ease of accessing these videos and of sharing them via social media websites. The ease with which videos can be passed from one person to another means there are many cases of "overnight" viral videos. Videos hosted on "YouTube, which makes it easy to embed its content elsewhere", have "the freedom and mobility once ascribed to papyrus, enabling their rapid circulation across a range of social networks." YouTube has overtaken television in terms of audience size.
As one example, American Idol was the most-watched TV show in the U.S. in 2009, while "a video of Scottish woman Susan Boyle auditioning for Britain's Got Talent with her singing was viewed more than 77 million times on YouTube". The capacity to attract an enormous audience on a user-friendly platform is one of the leading reasons YouTube generates viral videos. YouTube contributes to the spreadability of viral phenomena because the platform itself is built on sharing and contribution: "Sites such as YouTube, eBay, Facebook, Flickr, Craigslist, and Wikipedia, only exist and have value because people use and contribute to them, and they are clearly better the more people are using and contributing to them. This is the essence of Web 2.0." One of the most prolific viral YouTube videos in the promotional category is Kony 2012: on March 5, 2012, the charity organization Invisible Children Inc. posted a short film about the atrocities committed in Uganda by Joseph Kony and his rebel army. Artists use YouTube as one of their main branding and communication platforms to spread videos and make them go viral. For instance, after time away from music, Adele released "Hello", which crossed 100 million views in just five days, making it the fastest video to reach that figure in 2015. Viral YouTube videos can also create stars; Justin Bieber, for example, was discovered after his YouTube video performing Chris Brown's song "With You" went viral. Since its launch in 2005, YouTube has become a hub for aspiring singers and musicians, and talent managers look to it to find budding pop stars. According to Visible Measures, the original "Kony 2012" documentary, together with the hundreds of excerpts and responses uploaded by audiences across the Web, collectively garnered 100 million views in a record six days. The speed with which the video spread underscores how YouTube acts as a catalyst for viral media. YouTube builds on "multiple existing forms of participatory culture", and that trend has proven useful for business. Critics argue that the discourse of Web 2.0 derives its power from "its erasure of this larger history of participatory practices, with companies acting as if they were 'bestowing' agency onto audiences, making their creative output meaningful by valuing it within the logics of commodity culture."

Viral marketing
Viral marketing is the phenomenon in which people actively assess media or content and decide to spread it to others, for example by making a word-of-mouth recommendation, passing content along through social media, or posting a video to YouTube. The term was first popularized in 1995, after Hotmail spread its service offer "Get your free web-based email at Hotmail." Viral marketing has become important to businesses in building brand recognition, with companies trying to get their customers and other audiences involved in circulating and sharing their content on social media, both voluntarily and involuntarily. Many brands undertake guerrilla marketing or buzz marketing to gain public attention, and some marketing campaigns seek to engage an audience to pass along the campaign message unwittingly. The practice has shifted from hoping that content will attract attention on its own toward deliberate attempts to engineer that attention: companies plan for how to make their content "go viral" and for how their customers' communication can circulate it widely. There has also been much discussion about the morality of viral marketing.
Iain Short (2010) points out that many applications on Twitter and Facebook generate automated marketing messages and post them to audiences' personal timelines without the users personally passing them along. Stacy Wood of North Carolina State University has found in her research that recommendations from everyday people can have a significant impact on brands. Consumers are bombarded with thousands of messages every day, which calls the authenticity and credibility of marketing messages into question; word of mouth from "everyday people" therefore becomes an incredibly important source of credible information. If companies accept that word of mouth from "the average person" is crucial to influencing others, many questions remain: "What implicit contracts exist between brands and those recommenders? What moral codes and guidelines should brands respect when encouraging, soliciting, or reacting to comments from those audiences they wish to reach? What types of compensation, if any, do audience members deserve for their promotional labor when they provide a testimonial?" An example of effective viral marketing is the unprecedented boost in sales of the Popeyes chicken sandwich. After the Twitter account for Chick-fil-A attempted to undercut Popeyes by suggesting that Popeyes' chicken sandwich was not the "original chicken sandwich", Popeyes responded with a tweet that went viral. After the response amassed 85,000 retweets and 300,000 likes, Popeyes restaurants began selling so many more sandwiches that many locations sold out of them entirely. This prompted other chicken chains to tweet about their own chicken sandwiches, but none of those efforts spread as widely as Popeyes' did.

Financial contagion
In macroeconomics, "financial contagion" is a proposed socially viral phenomenon in which disturbances spread quickly across global financial markets.

Evaluation by commentators
Some social commentators have a negative view of "viral" content, though others are neutral or celebrate the democratization of content compared with the gatekeepers of older media. According to the authors of Spreadable Media: Creating Value and Meaning in a Networked Culture, ideas are "transmitted, often without critical assessment, across a broad array of minds", and this uncoordinated flow of information is associated with "bad ideas" or "ruinous fads and foolish fashions." Science fiction sometimes treats "viral" content as well, "describing (generally bad) ideas that spread like germs." For example, the 1992 novel Snow Crash explores the implications of an ancient memetic meta-virus and its modern-day computer virus equivalent.

The spread of viral phenomena is also regarded as part of the cultural politics of network culture, or the virality of the age of networks. Network culture enables the audience to create and spread viral content: "Audiences play an active role in 'spreading' content rather than serving as passive carriers of viral media: their choices, investments, agendas, and actions determine what gets valued." Various authors have pointed to the intensification of connectivity brought about by network technologies as a possible trigger for increased chances of infection from wide-ranging social, cultural, political, and economic contagions. For example, the social scientist Jan van Dijk warns of new vulnerabilities that arise when network society encounters "too much connectivity."
The proliferation of global transport networks makes this model of society susceptible to the spreading of biological diseases. Digital networks become volatile under the destructive potential of computer viruses and worms. Enhanced by the rapidity and extensiveness of technological networks, the spread of social conformity, political rumor, fads, fashions, gossip, and hype threatens to destabilize established political order. Links between viral phenomena that spread on digital networks and the early sociological theories of Gabriel Tarde have been made in digital media theory by Tony D Sampson (2012; 2016). In this context, Tarde's social imitation thesis is used to argue against the biological deterministic theories of cultural contagion forwarded in memetics. In its place, Sampson proposes a Tarde-inspired somnambulist media theory of the viral. See also Viral video Viral marketing Viral license List of Internet phenomena Meme Fad Transitive relation Virality (thesis and book) Computer virus Network science Compartmental models in epidemiology References Further reading Internet terminology Social phenomena Information society Internet culture
12817977
https://en.wikipedia.org/wiki/Street%20Hacker
Street Hacker
Street Hacker is a hacking simulation game developed by VirtuWeb Interactive. Unlike other hacking simulation games, its in-game engine is made to resemble an actual computer system, similar to a Linux shell.

Plot
In Street Hacker, the player assumes the role of a hacker who is approached by a crude executive and entrepreneur, Demetrius Mordecai. Demetrius sees the player as someone with a strong analytical mind, keen intuition, and a desire for power, and, since the player has no money, he hires them to do his dirty work. Vince, his chief of operations, helps the player get started and accustomed to the game. Successful as he is, Demetrius has made quite a few enemies in the corporate world, which is evident from the first few missions he assigns: the player is required to disrupt rival corporations by sabotaging their servers. Once these missions are completed, Demetrius decides to "retire" the player by calling in an anonymous tip to the FBI about who was behind the attacks. The FBI quickly surrounds the player's apartment and finds enough evidence for a four-year sentence. Two years later, the player is released from prison to serve the remaining two years on parole. Vince, the player's mentor from the beginning, helps them get back on their feet with a laptop and some cash. He and many of the others who still work for Demetrius hope the player can find enough dirt on him to put him away for life; too afraid to betray him directly, they all put their support behind the player.

Style
Street Hacker focuses primarily on real forms of hacking (viruses, worms, etc.). The player is given a program through which to order these tools so that they can carry out the various missions assigned to them. The game also features infiltration, in which the player must enter a building in order to gain the internet access needed to accomplish certain missions. Similarly to games such as Uplink, Street Hacker is heavily mission-driven, with the entire main storyline propelled by the missions assigned. For the most part, the in-game dynamics mimic those of an actual operating system, down to the ability to execute programs and view folders. Hacking takes place in a DOS-style command prompt, using commands familiar to computer aficionados (such as WHOIS and CONNECT). The game also allows files to be downloaded from the servers the player must connect to for missions (and, in some cases, for software downloads).

See also
Hacking
Uplink
System 15000
Hacker
Neuromancer

References

External links
Street Hacker Official Website

2004 video games
Windows games
Windows-only games
Hacking video games
102677
https://en.wikipedia.org/wiki/Dassault%20Syst%C3%A8mes
Dassault Systèmes
Dassault Systèmes SE (abbreviated 3DS) is a French software corporation. It is among the largest software companies in the world and develops software for 3D product design, simulation, manufacturing, and more. A Dassault Group subsidiary spun off from Dassault Aviation in 1981, it is headquartered in Vélizy-Villacoublay, France, and has around 20,000 employees in 140 countries. A pioneer in computer-aided design with CATIA, 3DS has expanded into a diverse range of software categories and industries. Established companies and startups use the company's 3DEXPERIENCE platform to create, test and optimize consumer and business-to-business products and services.

History
3DS identifies five distinct periods in its history:
1981-1989 – 3D Design: Led by the launch of CATIA, parts and assemblies can be designed in 3D.
1989-1999 – 3D Digital Mock-Up (3D DMU): 3D parts can be assembled to simulate a complete virtual mock-up, enabling the replacement of physical prototypes. When Boeing adopted 3D DMU for its 777 project, it marked the first time that an entire aircraft was created in 3D, without a physical mock-up.
1999-2012 – 3D Product Lifecycle Management (3D PLM): 3D modeling of the entire lifecycle of a product's development, from design to retirement, enabling improved process management and product innovation.
2012-2020 – 3DEXPERIENCE
2020–present – 3DEXPERIENCE from Things to Life: With virtual twins for people, the human body can be modeled and tested as precisely, safely, and effectively as products in traditional industries such as cars, buildings, or airplanes.

Key dates
1981: Dassault Systèmes created; strategic partnership with IBM launched
1982: CATIA Version 1 (V1) introduced
1986: Version 3 (V3) architecture introduced (for 3D Design)
1994: Version 4 (V4) architecture introduced (for full Digital Mock-Up/"DMU")
1997: SOLIDWORKS acquired
1998: ENOVIA brand launched (for product data management)
1999: Version 5 (V5) architecture introduced (for both Windows NT and UNIX environments)
1999: Product Lifecycle Management (PLM) introduced (for 3D design, simulation analysis, digital manufacturing and product data management)
2000: DELMIA brand launched (for digital manufacturing)
2005: Abaqus acquired (realistic simulation software); relaunched as SIMULIA brand
2005: Virtools acquired
2006: MatrixOne acquired; integrated into the ENOVIA brand
2007: 3DVIA brand launched (extending 3D visualization to non-CAD users)
2008: Version 6 (V6) architecture introduced (laying the foundation for the 3DEXPERIENCE platform)
2010: EXALEAD acquired (for structured and unstructured search with data analytics)
2010: IBM's sales and support force for the PLM (Product Lifecycle Management) software business acquired (all support contracts and around 700 employees)
2012: NETVIBES acquired (for intelligent dashboarding and social media monitoring)
2012: Gemcom acquired (for geologic modeling and mining engineering); relaunched as GEOVIA brand
2012: 3DEXPERIENCE platform introduced
2013: Simpoe S.A.S. acquired (polymer injection and extrusion simulation software); integrated into SOLIDWORKS Plastics and 3DEXPERIENCE Injection Molding Engineering
2013: Apriso acquired (manufacturing operations management software); integrated into the DELMIA brand
2014: Realtime Technology acquired (3D visualization software for marketing); relaunched as 3DEXCITE brand
2014: Accelrys acquired (biological, chemical and materials modeling and simulation software); relaunched as BIOVIA brand
2014: Quintiq acquired (operations planning and optimization software); integrated into the DELMIA brand
2015: Dassault Systèmes changes legal status to European Company (societas Europaea)
2016: CST acquired (electromagnetic simulation software); integrated into the SIMULIA brand
2016: Majority stake acquired in Outscale (cloud computing infrastructure services)
2018: Majority stake acquired in Centric Software (fashion and retail software)
2018: IQMS acquired (mid-market ERP software); relaunched as DELMIA.WORKS
2019: Medidata Solutions acquired (clinical trial software)
2020: NuoDB acquired (developer of a cloud-native distributed SQL database management system)

Virtools
Dassault Systèmes also owned Virtools, a software developer and vendor created in 1993 and acquired by Dassault Systèmes in 2005. Virtools offered a development environment for creating real-time 3D applications and related services, targeted at system integrators, game studios and corporate end-users. Since 2006 the software has been called 3DVIA Virtools, as part of Dassault Systèmes' 3DVIA brand. It is no longer updated and was taken down in March 2009; the last release was Virtools 5 (5.9.9.15).

Products and brands
Before 2000, the company focused on 3D design software with CATIA and SolidWorks. Over the next decade, new brands were launched: ENOVIA for product data management and collaboration, DELMIA for manufacturing and global operations, and SIMULIA for realistic simulation. The company later added EXALEAD (for information intelligence and search), NETVIBES (for business analytics), 3DEXCITE (for marketing), GEOVIA (for modeling the planet), BIOVIA (for modeling the biosphere), and MEDIDATA (for managing clinical trials).

3DEXPERIENCE platform
In 2012, 3DS launched the 3DEXPERIENCE platform. Users navigate the platform via an interface called the Compass, which groups its brands or apps into four quadrants:
3D Modeling apps: create and visualize products and experiences
Simulation apps: simulate and test possible business solutions in the virtual world
Information Intelligence apps: gather and gain insight from data to optimize product development
Social & Collaborative apps: manage data and collaborate with stakeholders inside and outside the company
The 3DEXPERIENCE platform is available on premises and in the cloud. It functions as both a system of operations (to run a business) and a business model (to transform a business). As a system of operations, it enables businesses to design and test experiences across ideation, modeling and simulation, delivery, and usage. As a business model, it allows businesses to become "platform companies" by connecting stakeholders and removing intermediaries.

3DEXPERIENCE Marketplace
Part of the 3DEXPERIENCE platform, the 3DEXPERIENCE Marketplace is an online trading environment for 3D digital design, engineering and manufacturing transactions. Users can identify new manufacturing service providers (including 3D printing), search for and add specific components within their design environment, or manage their ecosystem of selected service providers and in-house resources. They can also access on-demand manufacturing and part sourcing services.
As of 2019, the 3DEXPERIENCE Marketplace features 50 digital manufacturers with more than 500 machines and 30 million component parts from 600 suppliers.

Additional products
The company has additional products and brands, including Centric PLM (for fashion and retail), 3DVIA/HomeByMe (for 3D home planning), and DraftSight (for 2D drafting).

People and culture
3DS has 20,000 employees across 140 countries: 39% are based in Europe, 30% in Asia-Oceania, and 31% in the Americas. Geographic headquarters are located in Paris (Vélizy-Villacoublay, France), Tokyo (Japan), and Boston (Waltham, Massachusetts). As of December 31, 2019, approximately 39% of employees worked in R&D. In recent years, 3DS has been named to Forbes' World's Most Innovative Companies, Corporate Knights' World's Most Sustainable Corporations, Forbes' World's Best Employers, and Fortune's Future 50 list of companies with the strongest long-term growth potential.

Corporate information

Holdings
The list below sets forth the company's main subsidiaries and indicates the percentage equity interest and voting rights directly or indirectly held by Dassault Systèmes SA.
Dassault Data Services SAS (France) – 95%
Dassault Systèmes Americas Corp. (US) – 100%
Dassault Systèmes Services LLC (US) – 100%
Dassault Systèmes Deutschland GmbH (Germany) – 100%
Dassault Systèmes SolidWorks Corp. (US) – 100%
Dassault Systèmes K.K. (Japan) – 100%
Dassault Systèmes Enovia Corp. (US) – 100%
SolidWorks Japan K.K. (Japan) – 100%
Dassault Systèmes Delmia Corp. (US) – 100%
Dassault Systèmes Korea Corp. (Korea) – 100%
Dassault Systèmes Simulia Corp. (US) – 100%

Company management
Chairman of the Board of Directors: Charles Edelstenne
Vice Chairman & Chief Executive Officer: Bernard Charlès
Chief Operating Officer: Pascal Daloz
Chief Financial Officer: Rouven Bergmann
President, Research & Development: Dominique Florack
General Secretary: Thibault de Tersant
Chief People and Information Officer: Laurence Barthès
Industry Solutions, Field Marketing, Global Affairs: Florence Verzelen
Research & Development: Florence Hu-Aubigny
3DS Global Brands: Philippe Laufer
Europe, Middle East, Africa, Russia: Olivier Ribet
Asia-Pacific: Samson Khaou
North America Managing Director: Erik Swedberg
Business Platform Experience: Elisa Prisner
Corporate Equity & Communications: Victoire de Margerie

Market data
American Depositary Shares of Dassault Systèmes S.A. trade over the counter (symbol: DASTY)
Shares of Dassault Systèmes S.A. trade over the counter (symbol: DASTF)
Shares listed on the NYSE Euronext exchange (symbol: DSY; the DSY ticker is also used for Destiny Media Technologies Inc. on other markets)
Member of the CAC Next 20 index
ISIN code: FR000130650
Nominal value: euro
Main shareholders (2019): Public 49.64%, GIMD (Groupe Industriel Marcel Dassault) 40.50%, Charles Edelstenne 5.99%, Bernard Charlès 1.5%, Treasury stock/indirect treasury stock 1.73%, Directors and senior management 0.63%.

References

External links
Business data for Dassault Systèmes: Google Finance, Yahoo! Finance, Reuters, SEC filings

Companies based in Paris-Saclay
Companies listed on Euronext Paris
CAC 40
Software companies of France
Dassault Group
Societates Europaeae
4108811
https://en.wikipedia.org/wiki/Information%20commons
Information commons
An information commons is an information system, such as a physical library or online community, that exists to produce, conserve, and preserve information for current and future generations. Wikipedia could be considered an information commons to the extent that it produces and preserves information through current versions of articles and their histories. Other examples of an information commons include Creative Commons.

Introduction
The concept of the "information commons" refers to the shared knowledge base and the processes that facilitate or hinder its use. It also refers to a physical space, usually in an academic library, where anyone can participate in the processes of information research, gathering, and production. The term commons refers to the land (or common grounds) that villagers shared for grazing purposes in earlier times. The issues that fall under this topic are varied and include:
Licenses written to access digital content
Copyright law and similar intellectual property
Freedom of information
International trade
Privacy
Open-source software
Open-access publishing
Academic libraries integrated with CIT facilities, particularly in the United States, but also including: the Information Commons, a learning and study space at the University of Sheffield; and the Business/SPEA Information Commons at Indiana University, created in August 2007.
Some believe that the increasing control and commodification of information restricts humanity's ability to encourage and foster positive developments in its cultural, academic, and economic growth.

The Internet
The internet, and the subsequent internet age, took the information commons to another level by empowering consumers to create and distribute information on a mass scale. The internet facilitated the decentralized production and distribution of information because it bypasses the control of some of the more traditional publishing methods. Information published online is neither regulated by managers nor coordinated by price signals in the market. This results in a commons-based production of knowledge that can be easily shared among individuals.

Software commons
The software commons consists of all computer software which is available at little or no cost and which can be reused with few restrictions. It includes open-source software, which can be modified with few restrictions. However, the commons also includes software outside these categories, for instance software which is in the public domain. Many innovative programmers have developed and released open-source applications to the public, without the restrictive licensing conditions of commercial software. A popular example is Linux, an open-source operating system; the server computers for Google Search run Linux.

History
Open-source programs started emerging in the 1960s. IBM was one of the first computer companies to offer its products to the public. Most of these computers came with free software that was universal among similar computers and could be altered by anyone who had the software. This changed in the 1970s, when IBM decided to take more control of its products, removing the source code and not allowing the redistribution of its software. In the 1980s and 1990s the software commons grew with the help of bulletin board servers, accessed with dial-up modems. This expanded in the late 1990s with the growth of the Internet, which facilitated international cooperation and allowed individuals and groups to share their products more freely.
The GNU Project was founded in 1983 to develop free software. In 1998 Netscape Communications Corporation announced that all future versions of their software would be free of charge and developed by an Open Source Community (Mozilla). This included Netscape Navigator, then the most popular web browser. Licensing commons Licensing is the process that copyright owners use to monitor reproduction, distribution, or other use of creative works. Many commercial licensing conditions are costly and restrictive. Licensing models used in information commons typically grant permission for a wide range of uses. The GNU General Public License (GPL), developed by Richard Stallman at MIT in the 1980s is one such license: "The GNU Free Documentation License is a form of copyleft intended for use on a manual, textbook or other document to assure everyone the effective freedom to copy and redistribute it, with or without modifications, either commercially or non-commercially." Scholarly commons “In the 1980s, many professional societies turned over their journal publishing to private firms as a way to contain membership fees and generate income.” Prices of scholarly journals rose dramatically and publishing corporations restricted access to these journals through expensive licenses. Research libraries had no other choice but to cut many of their journal subscriptions. European and American academic communities began to find alternate ways to distribute and manage scholarly information. The Scholarly Publishing and Academic Resources Coalition (SPARC) was founded in 1998. “It is an international alliance of academic and research libraries working to correct imbalances in the scholarly publishing system. Its pragmatic focus is to stimulate the emergence of new scholarly communication models that expand the dissemination of scholarly research and reduce financial pressures on libraries." Linking the information commons to learning Many institutions are renovating their libraries to become information commons or learning commons. Frequently, the information commons occupies one floor of a library facility, generally a main service floor, which often includes or replaces the library's reference area. Most information commons are currently in library spaces that have been renovated; a minority are in totally new buildings. A small number of information commons are in non library buildings. These renovated facilities have become enormously successful, if gate count statistics are used as a measure. At Indiana University, for example, the main library gate count almost doubled from the year prior to the opening of the information commons to the second full year of its existence. Although statistics such as gate counts illustrate the impact of an information commons, there is more to success than just getting students into the library's facilities. St. Thomas University's librarian explained:I see that one rationale for the Commons is to "get the students to the library." In our case, it has been very effective in attracting students…our gate count was 110 percent higher…so, it will attract students. But that begs the question?once they are in the building, what do we do with them? How do we engage them? The rationale for the learning commons, in my view, is that, properly designed, implemented, and operated, it will enhance student learning and scholarship. That is the real challenge, and the real goal, of the learning commons. 
Pervasive technology Information commons have drawn students by offering environments that address their needs, bringing together technology, content, and services in a physical space that results in an environment different from that of a typical library. Traditional libraries offer technology, content, and services, so what is new or different about the information commons? The technology in an information commons is intentionally more pervasive than in most traditional academic libraries. If not already a feature of the library, wireless access is added when the information commons is developed. In addition, increased hardwired Internet connections let students access large files, such as multimedia, or offer an alternative to wireless when the network becomes saturated at peak use times. Group spaces Another major difference between an information commons and traditional libraries is the way in which they accommodate groups. Traditional libraries have focused on providing quiet space for individual study. Occasionally, a few group study rooms are available, but they are considered a peripheral feature of the library. In an information commons, much of the space is configured for use by small groups of students, reflecting students' desire for collaborative learning and combining social interaction with work. Information commons frequently have furniture built to accommodate several people sharing a common computer and provide large tables where several students can use their laptops while working together, comfortable seating areas with upholstered furniture to encourage informal meetings, cafes with food and drink, and group study rooms, often with a computer and screen, so students can work together efficiently on projects. References Further reading Beagle, Donald Robert, with Donald Russell Bailey and Barbara Tierney (contributors). 2006. The Information Commons Handbook. Neal-Schuman Publishers. 247 p.  Collier, David. 2005. Brand Name Bullies: The Quest to Own and Control Culture. Wiley. Burrell, Robert and Alison Coleman. 2005. Copyright Exceptions: the Digital Impact. Cambridge University Press. 426 p.  Free Culture Griffith, Jonathan and Uma Suthersanen. 2005. ''Copyright and Free Speech: Comparative and International Analyses. Oxford University Press. 426 p. External links Internet Archive A 501(c)(3) non-profit founded to build an 'Internet library', with the purpose of offering permanent access for researchers, historians, and scholars to historical collections that exist in digital format. Information Commons Links by Howard Besser The Public Domain: Enclosing the Commons of the Mind by James Boyle Cultural concepts Public commons
954344
https://en.wikipedia.org/wiki/Cube%20%28video%20game%29
Cube (video game)
Cube is a free and open-source first-person shooter video game. It is often conflated with its zlib-licensed engine, the Cube Engine. Both the engine and the game were developed by Wouter van Oortmerssen. Cube runs on a variety of operating systems, including Microsoft Windows, Linux, FreeBSD, OS X, AmigaOS 4, AROS, iPhone, Wii, and Pocket PC devices with 3D acceleration such as the Dell Axim x50v. It uses OpenGL and SDL. Cube has both single-player and multiplayer gameplay, and the game contains a built-in level editor. The game was originally released in 2001; the first release with a single-player mode followed in January 2002, and the latest update of Cube was released on August 29, 2005. A popular spin-off called AssaultCube was released in November 2006. An official successor, Cube 2: Sauerbraten (also known simply as Sauerbraten), was released in 2004 and uses a different engine, the Cube 2 engine.

Gameplay
As of the August 29, 2005, release, there were 37 single-player maps and 65 deathmatch maps, for a total of 102 maps. Multiplayer uses networked game code and a thick-client/thin-server model.

Single-player
Single-player gameplay includes two modes: a plain single-player mode, in which items and monsters do not respawn and have fixed positions, and a deathmatch-style mode, in which there is a fixed number of monsters (ten per skill level) and items respawn.

Multiplayer
Multiplayer gameplay includes twelve modes, among them:
"Free For All": Often used for prewar and setting up teams.
"Coop Edit": Players can edit a map with others in real time.
"Free For All": Non-teamplay, meant for free-for-all and duel games.
"Teamplay": Like Free For All, but players are allied with those whose team variable is the same as their own.
"Instagib": All players spawn with full rifle ammo and 1 health. No items are available (non-teamplay and teamplay).
"Efficiency": All players have two ammo packs each, with 256 health (non-teamplay and teamplay).
"Insta Arena": When fragged (killed), players stay dead until only one player remains, and then a new round begins; each player has only the rifle and fist (non-teamplay and teamplay).
"Tactics Arena": Like Insta Arena, but each player randomly receives two of the four possible weapons (plus the fist) and two ammo packs (non-teamplay and teamplay).

Development

Game engine
The Cube engine was designed as an outdoor engine, i.e. it is intended for outdoor maps, in contrast to the Doom and Quake engines, which are optimized for indoor scenes. It utilizes a pseudo-3D world model similar to the Doom engine, based on a 2D height map. This imposes some limitations (e.g. no rooms above rooms), but it does permit slopes and 3D props, which in turn can be used to work around most limitations, for example to create bridges with a passage below. The engine follows a zero-precompilation philosophy: all map data is interpreted dynamically, with no need to precompute data such as shadow maps or BSP structures. This makes real-time, in-game map editing feasible, and Cube supports multi-user real-time map editing. The engine is compact and relies on simplicity and brute force instead of fine-tuned complexity.

Further development

Engine
The engine was reused for several other games, whose developers usually modified it to fit their needs. In some cases, however, it is unclear whether a given change belongs to the engine or to the game, mainly because there is no clear boundary between the engine code and the rest of a game's code.
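As a rough illustration of the height-map world model described under Game engine above, the sketch below stores a single floor and ceiling height per grid cell, which permits slopes but rules out stacking one room above another. It is a hypothetical toy example in Python, not code from the Cube source (which is written in C++), and the names and values are invented for illustration.

```python
# Toy height-map world: each (x, y) cell stores one floor and one ceiling height,
# so two separate rooms can never exist on top of each other in the same cell.
from dataclasses import dataclass

@dataclass
class Cell:
    floor: float          # height of the walkable floor
    ceiling: float        # height of the ceiling above it
    solid: bool = False   # a solid cell acts as a wall

def make_ramp(width: int, depth: int, rise: float) -> list[list[Cell]]:
    """Build a simple sloped area: the floor climbs steadily along the x axis."""
    return [
        [Cell(floor=x * rise / max(width - 1, 1), ceiling=16.0) for x in range(width)]
        for _ in range(depth)
    ]

def can_stand(cell: Cell, player_height: float = 2.0) -> bool:
    """A player fits in a cell only if its single floor-to-ceiling gap is tall enough."""
    return not cell.solid and (cell.ceiling - cell.floor) >= player_height

if __name__ == "__main__":
    world = make_ramp(width=8, depth=4, rise=3.0)
    print([round(c.floor, 2) for c in world[0]])          # rising floor heights along the ramp
    print(all(can_stand(c) for row in world for c in row))
```

Because each cell has only one floor/ceiling pair, a bridge with a passage underneath cannot be expressed in the grid itself and has to be approximated with 3D props, matching the limitation noted in the engine description above.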
This ambiguity between engine code and game code appears both in mods of Cube and in other games that use the Cube Engine. When the engine is modified, the changes tend to be minor, but there are also cases where they are significant, such as AssaultCube (released in November 2006) and its fork AssaultCube Reloaded (released in December 2010).

Ports
Intel ported the game to OpenGL ES to demonstrate a new mobile 3D chipset at the Game Developers Conference in 2005. A Cube port intended as a technology demo was submitted for the iPhone in 2008 and was released on the iPhone App Store on November 18 of that year. A version of AssaultCube for Android was released in May 2021.

Successors (other engines)

Cube 2 Engine
Cube 2: Sauerbraten, released in 2004, is the official successor of Cube. It uses a different engine, the Cube 2 Engine. Although the earliest revisions of the Cube 2 Engine were based on the Cube Engine, over time it became very different. There are many differences between the Cube Engine and the Cube 2 Engine, two of which stand out. The first is geometry: unlike the Cube Engine, which uses 2.5D geometry, the Cube 2 Engine has true 3D geometry. The second is that the source code is organized into folders: the engine code and the game-specific code live in two separate folders, and a third folder contains shared code that allows the game-specific code and the engine to communicate. This third folder is named shared and the one containing the engine is named engine; the folder containing the game-specific code is often, but not always, named after the game. In the Cube 2: Sauerbraten source code, the Sauerbraten folder is named fpsgame and the folder for Eisenstern (the RPG embryo that ships with most releases of Cube 2: Sauerbraten) is named rpggame. Alongside these folders there are others containing libraries used by the engine, the game(s), or both. This organization does not prevent engine modifications: Blood Frontier and later Red Eclipse (before Red Eclipse version 2.0) used modified versions of the Cube 2 Engine, and some features of these modified versions, such as the rain particle effect, were later added to the official engine.

Tesseract
Several forks of the Cube 2 Engine were made, but only one of them is its official successor, Tesseract. This engine shares its name with the game it was made for, unlike its predecessors (the Cube Engine was made for Cube and the Cube 2 Engine was made for Cube 2: Sauerbraten).

Others
Cube was used in a computer science course at Brown University in 2006.

Reception
Cube was reviewed positively by LinuxDevCenter in 2002 and received the "Happypenguin Award" for "Best Free 3D Action Game" from The Linux Game Tome in 2003. A 2005 O'Reilly article on "Open Source Mac Gaming" recommended Cube, and MacUpdate rated the game's latest release 4.5 stars out of 5. Between 2004 and May 2017, Cube was downloaded 2.7 million times from SourceForge.net alone.

See also
AssaultCube
Cube 2: Sauerbraten

References

External links
Official Cube home page
Cube project at SourceForge.net
Community Cube/Cube 2 mapping and modding site

2001 video games
AmigaOS 4 games
AROS software
Free game engines
First-person shooters
IOS games
Linux games
MacOS games
Multiplayer online games
Open-source video games
Windows games
Video games developed in the Netherlands
Multiplayer and single-player video games
Shooter video games
Video games with 2.5D graphics
Wii games
Windows Mobile games
14327053
https://en.wikipedia.org/wiki/Biblioscape
Biblioscape
Biblioscape is a commercial information and reference management software package sold by CG Information. The software runs only under Windows. Note: The Biblioscape support forum has had no responses from the developers since May 2016. There are questions that suggest this software may no longer be supported. History Biblioscape was first released in 1998. The current release, version 10.0.3, was made available in June 2015. Version 10 was in beta testing as of March 2013. Features Biblioscape is a reference management software. It has modules that allow the user to record and interlink references, notes, tasks, charts, and categories. Any of these can also be linked to web pages or other external resources. Records can be organized in folders or collections and tagged with categories. The references module includes filters to import references from online bibliographic databases (e.g. PubMed) and from journal webpages (e.g. Proceedings of the National Academy of Sciences). When writing a paper or thesis, the user can insert temporary citations and Biblioscape can then convert them into formatted citations and generate a bibliography. For bibliographies, the user can choose from over 2,000 output styles or create new ones. The notes module is integrated with the references module, so the user can create a separately filed note from inside the reference editor. Such notes can be either independent or linked to record being edited. Biblioscape installation comes with a web server to post Biblioscape databases on the Web, and there is also a means to work in one's database remotely, if it is on a server. See also Comparison of reference management software References External links Biblioscape website Bibliography formatting software: An Evaluation template Reference management software Windows-only software
718273
https://en.wikipedia.org/wiki/Design%20for%20Six%20Sigma
Design for Six Sigma
Design for Six Sigma (DFSS) is an engineering design and business process management method related to traditional Six Sigma. It is used in many industries, such as finance, marketing, basic engineering, process industries, waste management, and electronics. It is based on the use of statistical tools such as linear regression and enables empirical research similar to that performed in other fields, such as social science. Whereas the tools and sequence used in Six Sigma require a process that is already in place and functioning, DFSS aims to determine the needs of customers and the business and to drive those needs into the product solution being created; it is used for product or process design, in contrast with process improvement. Measurement is the most important part of most Six Sigma and DFSS tools, but whereas in Six Sigma measurements are made on an existing process, DFSS focuses on gaining a deep insight into customer needs and using this insight to inform every design decision and trade-off.

There are different options for implementing DFSS. Unlike Six Sigma, which is commonly driven via DMAIC (Define, Measure, Analyze, Improve, Control) projects, DFSS has spawned a number of stepwise processes, all in the style of the DMAIC procedure. DMADV (Define, Measure, Analyze, Design, Verify) is sometimes referred to synonymously as DFSS, although alternatives such as IDOV (Identify, Design, Optimize, Verify) are also used. The traditional DMAIC Six Sigma process, as usually practiced, focuses on evolutionary and continuous improvement of manufacturing or service processes and usually occurs after initial system or product design and development have been largely completed. DMAIC Six Sigma as practiced is usually concerned with solving existing manufacturing or service process problems and removing the defects and variation associated with them. Since manufacturing variation can clearly affect product reliability, a clear link should exist between reliability engineering and Six Sigma (quality). In contrast, DFSS (or DMADV and IDOV) strives to generate a new process where none existed, or where an existing process is deemed inadequate and in need of replacement. DFSS aims to create a process with the end in mind, optimally building the efficiencies of the Six Sigma methodology into the process before implementation, whereas traditional Six Sigma seeks continuous improvement after a process already exists.

DFSS as an approach to design
DFSS seeks to avoid manufacturing and service process problems by using advanced techniques to prevent them at the outset (i.e., fire prevention rather than fire fighting). Combined, these methods capture the true needs of the customer and derive engineering system parameter requirements that increase product and service effectiveness in the eyes of the customer and other stakeholders, yielding products and services that provide high customer satisfaction and increased market share. The techniques also include tools and processes to predict, model, and simulate the product delivery system (the processes and tools, personnel and organization, training, facilities, and logistics needed to produce the product or service). In this way, DFSS is closely related to operations research (for example, solving the knapsack problem) and workflow balancing.
DFSS is largely a design activity requiring tools including quality function deployment (QFD), axiomatic design, TRIZ, Design for X, design of experiments (DOE), Taguchi methods, tolerance design, robustification, and response surface methodology for single- or multiple-response optimization. While these tools are sometimes used in the classic DMAIC Six Sigma process, in DFSS they are used to analyze new and unprecedented products and processes, as a concurrent analysis that ties manufacturing optimization to the design itself.

Critics
Response surface methodology and other DFSS tools use statistical (often empirical) models, and practitioners therefore need to be aware that even the best statistical model is only an approximation to reality. In practice, both the models and the parameter values are unknown and subject to uncertainty on top of ignorance. An estimated optimum point need not be optimal in reality, because of estimation errors and the inadequacies of the model. Nonetheless, response surface methodology has an effective track record of helping researchers improve products and services; for example, George Box's original response-surface modeling enabled chemical engineers to improve a process that had been stuck at a saddle point for years.

Distinctions from DMAIC
Proponents of DMAIC, DDICA (Design, Develop, Initialize, Control and Allocate) and Lean techniques might claim that DFSS falls under the general rubric of Six Sigma or Lean Six Sigma (LSS). Both methodologies take meeting customer needs and business priorities as the starting point for analysis, but the tools used for DFSS often differ widely from those used for DMAIC Six Sigma. In particular, DMAIC and DDICA practitioners often start from new or existing mechanical drawings and manufacturing process instructions to perform their analysis, while DFSS practitioners often use simulations and parametric system design and analysis tools to predict both the cost and the performance of candidate system architectures. While the two processes can be claimed to be similar, in practice the working medium differs enough that DFSS requires different tool sets to perform its design tasks. DMAIC, IDOV, and Six Sigma may still be used during deep dives into system architecture analysis and for "back end" Six Sigma processes; DFSS provides the system design processes used in front-end complex system design, and combinations of the two are also used. Done well, this aims for 3.4 defects per million design opportunities.

The traditional Six Sigma methodology, DMAIC, has become a standard process optimization tool for the chemical process industries. However, it has become clear that the promise of Six Sigma, specifically 3.4 defects per million opportunities (DPMO), is simply unachievable after the fact. Consequently, there has been a growing movement to implement Six Sigma design, usually called Design for Six Sigma, using DFSS and DDICA tools. This methodology begins with defining customer needs and leads to the development of robust processes to deliver those needs. Design for Six Sigma emerged from the Six Sigma and Define-Measure-Analyze-Improve-Control (DMAIC) quality methodologies, which were originally developed by Motorola to systematically improve processes by eliminating defects.
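As a deliberately simplified illustration of the response surface methodology listed among the DFSS tools above and discussed in the Critics section, the sketch below fits a quadratic model to a small designed experiment and reads off its estimated optimum. The factor settings, responses, and resulting coefficients are hypothetical and chosen only for illustration; a real study would add replication, model diagnostics, and confirmation runs.

```python
# Minimal response-surface sketch: fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# to a small (made-up) central-composite-style data set, then locate the stationary point.
import numpy as np

# Coded factor settings (x1, x2) and a measured response y (all hypothetical).
X_raw = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [-1.5, 0], [1.5, 0], [0, -1.5], [0, 1.5], [0, 0]])
y = np.array([76.5, 78.0, 77.0, 79.5, 75.5, 78.5, 77.5, 78.5, 80.0])

def quad_terms(x: np.ndarray) -> np.ndarray:
    """Design matrix for a full second-order model in two factors."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones(len(x)), x1, x2, x1**2, x2**2, x1 * x2])

beta, *_ = np.linalg.lstsq(quad_terms(X_raw), y, rcond=None)   # least-squares fit
b0, b1, b2, b11, b22, b12 = beta

# Stationary point: set both partial derivatives of the fitted surface to zero.
A = np.array([[2 * b11, b12], [b12, 2 * b22]])
x_star = np.linalg.solve(A, -np.array([b1, b2]))

print("fitted coefficients:", np.round(beta, 3))
print("estimated stationary point (coded units):", np.round(x_star, 3))
```

The caveat from the Critics section applies directly here: the stationary point is only an estimate from a fitted approximation, so in practice it would be checked with confirmation experiments rather than taken at face value.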
Unlike its traditional Six Sigma/DMAIC predecessors, which are usually focused on solving existing manufacturing issues (i.e., "fire fighting"), DFSS aims to avoid manufacturing problems by taking a more proactive approach to problem solving and engaging the company's efforts at an early stage to reduce problems before they occur (i.e., "fire prevention"). The primary goal of DFSS is a significant reduction in the number of nonconforming units and in production variation. It starts from an understanding of customer expectations, needs, and Critical to Quality issues (CTQs) before a design can be completed. Typically in a DFSS program only a small portion of the CTQs are reliability-related (CTR), so reliability does not get center-stage attention in DFSS. DFSS rarely looks at the long-term (post-manufacturing) issues that might arise in the product, such as complex fatigue, electrical wear-out, chemical issues, cascade effects of failures, and system-level interactions.

Similarities with other methods
Arguments about what makes DFSS different from Six Sigma demonstrate the similarities between DFSS and other established engineering practices such as probabilistic design and design for quality. In general, Six Sigma with its DMAIC roadmap focuses on improving an existing process or processes, whereas DFSS focuses on the creation of new value with inputs from customers, suppliers, and business needs. While traditional Six Sigma may also use those inputs, its focus remains on improvement rather than on the design of a new product or system. This also reflects the engineering background of DFSS; however, like other methods developed in engineering, there is no theoretical reason why DFSS cannot be used in areas outside engineering.

Software engineering applications
Although the first successful Design for Six Sigma projects, in 1989 and 1991, predate the establishment of the DMAIC process improvement process, Design for Six Sigma (DFSS) became accepted in part because Six Sigma organisations found that they could not optimise products past three or four sigma without fundamentally redesigning the product, and because improving a process or product after launch is considered less efficient and effective than designing in quality: "Six Sigma" levels of performance have to be built in. DFSS for software is essentially a non-superficial modification of "classical DFSS", since the character and nature of software differ from those of other fields of engineering. The methodology describes the detailed process for successfully applying DFSS methods and tools throughout software product design, covering the overall software development life cycle: requirements, architecture, design, implementation, integration, optimization, verification and validation (RADIOV). It explains how to build predictive statistical models for software reliability and robustness and shows how simulation and analysis techniques can be combined with structural design and architecture methods to effectively produce software and information systems at Six Sigma levels. DFSS in software acts as a glue that blends the classical modelling techniques of software engineering, such as object-oriented design or Evolutionary Rapid Development, with statistical predictive models and simulation techniques.
The methodology provides software engineers with practical tools for measuring and predicting the quality attributes of the software product and also enables them to include software in system reliability models.

Data mining and predictive analytics application
Many tools used in DFSS consulting, such as response surface methodology, transfer functions via linear and non-linear modeling, axiomatic design, and simulation, have their origin in inferential statistics, and statistical modeling may overlap with data analytics and data mining. DFSS has nevertheless been used successfully as an end-to-end technical project framework for analytics and data-mining projects, in a role that domain experts have observed to be broadly similar to CRISP-DM (the data-mining application framework methodology associated with SPSS). DFSS is claimed to be better suited to encapsulating and effectively handling a larger number of uncertainties, including missing and uncertain data, both in terms of how sharply they are defined and in their absolute number, with respect to analytics and data-mining tasks; Six Sigma approaches to data mining of this kind are popularly described as DFSS over CRISP. Data-mining projects run with DFSS have been observed to have considerably shortened development life cycles. This is typically achieved by conducting data analysis against pre-designed template match tests via a techno-functional approach, using multilevel quality function deployment on the data set. Practitioners claim that progressively more complex KDD templates are created by multiple DOE runs on simulated complex multivariate data, and that the templates, along with their logs, are extensively documented via a decision-tree-based algorithm. DFSS uses quality function deployment and SIPOC for feature engineering of known independent variables, thereby aiding the techno-functional computation of derived attributes. Once the predictive model has been computed, DFSS studies can also be used to provide stronger probabilistic estimates of how a predictive model will rank in a real-world scenario. The DFSS framework has been applied successfully to predictive analytics in the field of HR analytics, an application area traditionally considered very challenging because of the peculiar complexities of predicting human behavior.

References

Further reading

Product design
Six Sigma
Product development
2358693
https://en.wikipedia.org/wiki/Request%20Tracker
Request Tracker
Request Tracker, commonly abbreviated to RT, is ticket-tracking software written in Perl and used to coordinate tasks and manage requests among an online community of users. RT's first release, in 1996, was written by Jesse Vincent, who later formed Best Practical Solutions LLC to distribute, develop, and support the package. RT is free and open-source software (FOSS), distributed under the GNU General Public License.

Request Tracker for Incident Response (RTIR) is a special distribution of RT that fulfills the specific needs of CERT teams. It was initially developed in cooperation with JANET-CERT, and in 2006 it was upgraded and expanded with joint funding from nine Computer Security Incident Response Teams (CSIRTs) in Europe.

Technology
RT is written in Perl and runs on the Apache and lighttpd web servers using mod_perl or FastCGI, with data stored in MySQL, PostgreSQL, Oracle, or SQLite. The RT interface can be extended using plug-ins written in Perl.

History
While enrolled at Wesleyan University in 1994, Jesse Vincent worked for Wesleyan's computing help desk and was responsible for improving the help desk and residential networking software infrastructure. This task included setting up a ticketing system for the help desk. Initially he set up a Linux server to run "req", but he later found that its command-line interface was limiting usage. Over the next two years he created and maintained WebReq, a web-based interface for req written in Perl. Eventually the req portions were removed and what was left became RT version 1.0. A complete rewrite followed for RT version 2.0 when Vincent started working on RT full-time in 2001 and founded Best Practical Solutions. RT was used by Perl's CPAN (as rt.cpan.org), but because of declining use, a sunset date of March 1, 2021 was announced at the Perl NOC on December 4, 2020.

Interface
RT has many interfaces for creating and updating tickets. A web interface is available for both logged-in users and guest users. It is easily tailored by granting or denying specific permissions to users, as well as by adding custom fields and data to tickets. Template callbacks allow the software's web pages to be modified without requiring extensive knowledge. Email is another primary interface to RT and is often the only interface many guest users see. The email system includes support for auto-responses, attachments, and full customization of the rules governing to whom and when email is sent. Emails are stored in RT as correspondence on a ticket, and the software can distinguish between public replies and private comments so as to display them appropriately. A basic REST-like API and a command-line tool are also provided as further ways to interact with RT.

Integration
RT integrates with Best Practical's knowledge base application, the RT FAQ Manager ("RTFM"); as of RT 4.0.0, RTFM's functionality was integrated into RT itself as Articles. RT also integrates with Best Practical's IT asset management application, "Assets"; as of RT 4.4.0, this functionality was likewise integrated into RT itself. RT supports the Networked Help Desk API.

See also
Comparison of help desk issue tracking software
Comparison of issue tracking systems

References
"RT Essentials", O'Reilly, 2005, 224 pages

Notes

External links

Free software programmed in Perl
Bug and issue tracking software
Perl software
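The REST-like API mentioned under Interface above can be exercised with any HTTP client. The sketch below is a hypothetical example using Python's requests library against an RT REST 1.0-style endpoint; the hostname, credentials, and ticket number are placeholders, and the exact paths, authentication method, and response format should be checked against the documentation for the RT version actually in use.

```python
# Hypothetical sketch of querying a ticket through an RT REST 1.0-style endpoint.
# Host, credentials, and ticket id are placeholders; verify paths against your RT version's docs.
import requests

RT_BASE = "https://rt.example.org/REST/1.0"          # placeholder RT instance
AUTH = {"user": "someuser", "pass": "somepassword"}  # placeholder credentials

def show_ticket(ticket_id: int) -> str:
    """Fetch one ticket's metadata as the plain-text payload the server returns."""
    resp = requests.get(f"{RT_BASE}/ticket/{ticket_id}/show", params=AUTH, timeout=10)
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    print(show_ticket(1))  # expected to include lines describing queue, status, and requestors
```

The same style of request can be used for creating and updating tickets, though, as described above, the email interface is often the primary channel for routine correspondence; the HTTP route is mainly useful when integrating RT with other systems.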
49327014
https://en.wikipedia.org/wiki/Joseph%20Fuller
Joseph Fuller
Joseph F. Fuller, Jr. AIA, NCARB (born 1960) is a New York area architect specializing in educational architecture and historic preservation. He is best known for his firm's designs for the 2009 Academy of Information Technology & Engineering in Stamford, CT, the 2010 restoration and expansion of Old Town Hall (Stamford, Connecticut), the expansion of Staples High School (Connecticut), and the restoration of the Gothic facades and campus of City College of New York. Career and company Joseph F. Fuller Jr. received his Bachelor of Architecture degree from the New York Institute of Technology. He is President and Managing Member of Fuller and D’Angelo, PC, Architects and Planners, the company founded by his late father Joseph F. Fuller Sr. AIA and Nicholas D’Angelo FARA, CSI in 1971 in Westchester, NY. The partners today are Fuller, Nicholas D’Angelo and his son, John D’Angelo, ARA, LEED AP. Said Zommorodian is the firm's chief designer. Fuller leads a staff of 24 including architects, draftspersons, 3D animators and technical staff. The firm states its mission as producing innovative, environmentally sound, high-quality architectural projects that provide long-lasting solutions to ongoing design challenges. Green building and smart building concepts are among those represented in the portfolio. Notable works Academy of Information Technology & Engineering, Stamford Fuller and D’Angelo's 2009 Academy of Information Technology & Engineering in Stamford, CT, has won awards both for design and education. "The school's global approach to learning all stems from its architecture and technology," says Paul Gross, head of AITE, in the Stamford Times. "This building is based on a Piazza," says a Fuller & D'Angelo designer. "It's a mix of transparency, natural light and dynamic space." "Pupils are able to learn the basic fundamentals of construction by observing the immediate interior and exterior environment. The engineering aspects of various areas were conceived to have the structural, mechanical, electrical, and information technology components exposed to view. This assists in learning because the building is also their guide." Old Town Hall, Stamford Vacant for 20 years, the 1905 Old Town Hall (Stamford, Connecticut) was restored between 2008 and 2010, and a small addition was added to allow for modern conveniences like elevators and air conditioning. Fuller and D’Angelo's historic preservation and restoration of the Beaux-Arts building was honored by the CT Trust for Historic Preservation. The rehabilitated interiors bring the Beaux-Arts architecture and the original scagliola finishes back to life: the overlying layers of paint were stripped, and the original colors and scagliola were revealed and restored. Staples High School, Westport Staples High School is known for high academic performance, being ranked #7 among CT high schools by U.S. News & World Report. Fuller and D’Angelo are responsible for the design of the building as it exists today, which was designed to accommodate 1,800 students. Due to increasing student enrollment and rapid advances in science and technology, new space is required to include a robotics lab, space for 3-D printing, laser cutting, and new areas for synthetic biology. Fuller and D’Angelo has presented studies of the further expansion, with supporting material, to Superintendent of Schools Dr. Elliott Landon and the Westport Board of Education. Discussions are ongoing. 
City College of New York Fuller and D'Angelo have extensive experience in the area of historic restoration and vintage building renovation, including buildings that have received landmark designations. Around the turn of the twentieth century, five Neo-Gothic buildings designed by architect George B. Post were built on the City College of New York (CCNY) campus, all featuring schist stone facades and terra cotta trims. These facades were "clad with the by-product of the City's transit system, Manhattan schist, excavated during the construction of the IRT Broadway subway." A major project at the City College campus was Fuller and D'Angelo's award-winning restoration of these buildings, which was noted in the New York Times. The project was to completely restore and preserve the facades. The terra cotta was fully replaced and the schist stone was replaced where deteriorated and fully re-pointed. In addition, major structural flaws were discovered within the walls, so the project was both a cosmetic restoration and a difficult and intricate structural renovation. Other projects include the Drake Hotel, New York City (renovation); the TWA World Headquarters, Mt. Kisco; and the Icahn Foundation Schools, Bronx. Awards • Schooldesigner High Tech Silver Academy of Info by Fuller D’Angelo • Schooldesigner Best Bronze Academy of Info by Fuller D’Angelo • Edutopia 2008 DesignShare Awards for Best Scholastic Architecture, Academy of Information Technology and Engineering by Fuller D’Angelo 4/1/2009 • WAN Awards, World Architecture News Academy of Information Technology and Engineering by Fuller D’Angelo Feb 20, 2009 • CT Preservation Awards, CT Trust for Historic Preservation, Award of Merit, OLD TOWN HALL STAMFORD • Society of American Registered Architects, Design Award of Special Recognition, Neo-Gothic Building Restoration of Townsend Harris Hall Personal background Fuller joined the Vintage Sports Car Club of America (established in 1957) owing to his interest in historic preservation. Fuller races a 1958 Morgan Plus 4 and a 1959 Lotus 7 Series 1 under the banner of ICE9RACING, named after ice-nine, the planet-killing substance in Kurt Vonnegut's fiction. He is also currently rebuilding a 1956 Austin-Healey 100. References External links Fuller and D'Angelo Architects from New York (state) 1960 births Living people New York Institute of Technology alumni
34176786
https://en.wikipedia.org/wiki/Nepal%20College%20of%20Information%20Technology
Nepal College of Information Technology
Nepal College of Information Technology, commonly known as NCIT, is an engineering college in Nepal. It was established in 2000. Its campus is located at Balkumari, Lalitpur. The college currently offers engineering and management courses at the bachelor's and master's levels. It is affiliated with Pokhara University. Location Nepal College of Information Technology (NCIT) is located at Balkumari, Lalitpur. Courses offered The college offers courses in: Undergraduate programs: Bachelor of Engineering in Civil Engineering Bachelor of Engineering in Information Technology Bachelor of Engineering in Electronics and Communication Engineering Bachelor of Engineering in Computer Engineering Bachelor of Engineering in Software Engineering Bachelor of Business Administration (BBA) Postgraduate programs: Master of Engineering in Computer Engineering Master of Science in Computer Science Master in Computer Information System Facilities The college is also an authorized training partner of Cisco, Red Hat, and Microsoft. It provides related training and conducts preparatory examinations leading to internationally recognized certifications from these companies. NCIT has MoUs with the Microsoft Innovation Center and a number of multinational and local companies for internships and training programs. References External links Official website Educational institutions established in 2001 Engineering universities and colleges in Nepal 2001 establishments in Nepal
44389196
https://en.wikipedia.org/wiki/Arcplan
Arcplan
Arcplan is business intelligence (BI) software for budgeting, planning and forecasting (BP&F), business analytics, and collaborative business intelligence. It is the successor to the enterprise software products inSight and dynaSight of the former German provider arcplan Information Services GmbH. The company was merged with Canada-based Longview Solutions in 2015. Version 8.7 is available in 32-bit and 64-bit versions. Before the merger the company was managed by CEO Roland Hölscher and COO Steffen Weissbarth. Introduction The software supports users primarily in the area of business intelligence (BI), which comprises processes to select, evaluate and present data electronically. Furthermore, enterprise planning and decision-making processes can be supported through data analysis. Arcplan is typically employed in midsized companies and large enterprises; public institutions use it as well. In Germany, arcplan is developed, distributed and maintained by arcplan Information Services GmbH (Langenfeld, North Rhine-Westphalia). History and development The arcplan software emerged from the client-server based software inSight, created in Düsseldorf, Germany in 1993. The founders were the brothers Udo Wollschläger and Frank Hagedorn (both physicists), the mathematician Dr. Hartmut Krins, and the chemist Dr. Bernd Fröhlich. Initially, the software development was done by the physicists Udo Wollschläger, Frank Hagedorn, and Svend Dunkhorst. Holding 40% of the capital, the German Lindner Hotel Group was the largest shareholder; the other shares were held by the founders. In 2005 the company was acquired by the Frankfurt (Main) based venture capital company Viewpoint Capital Partners GmbH. In 2015 arcplan was sold to Marlin Equity Partners and merged with their portfolio company Longview Solutions. inSight had an integrated development environment built around a flexible editor concept, which enabled users to create customized reports via drag and drop without any programming skills. The editor concept in general, and the drag-and-drop report development in particular, were innovations compared with the competition at the time. In the early 1990s arcplan was one of the first software programs to directly access SAP data for reporting and analysis. In 1996 inSight, as the integrated development environment, became part of the web-server based software dynaSight. dynaSight was one of the first software solutions to integrate corporate management functions, enabling access to business applications via a Java-capable browser. After dynaSight had been developed into a business intelligence platform, dynaSight (version 4.1) evolved into arcplan Enterprise (version 5.0) in March 2006. Since September 2013 the business intelligence platform has been available as arcplan 8, providing three modules for different scopes of application. With the release of version 8, HTML5 is supported; furthermore, access via mobile devices was simplified by implementing a responsive design approach. Before the merger, arcplan had offices in Germany, the United States, Switzerland and Singapore. Arcplan resellers were located in the People's Republic of China, Korea, Chile, Finland and Russia. The company employed around 100 people globally before the merger. Scope of applications The software meets the typical requirements of a business intelligence application and offers, depending on the package, several additional functionalities. 
The integrated platform for reporting and planning is regarded as a unique selling proposition by IT market research provider Gartner Inc. Scope of applications (selection) OLAP (Online Analytical Processing) Reporting Balanced Scorecards Budgeting Analytics Planning and Forecasting In its periodic market study The BI Survey, the European market research company BARC rated arcplan as software for midsized working groups and as leading in 8 peer groups. Data sources The software provides direct access to the following data sources: SAP R/3 SAP Business Information Warehouse (BW) SAP BW BAPI SAP BW-IP SAP HANA SAP Query - Data Extractor Oracle DBMS Oracle Essbase Oracle OLAP Server Oracle Enterprise Oracle Financial Management (formerly Hyperion Financial Management) IBM DB2 & Cubing Services IBM Cognos TM1 Infor (PM OLAP Server, former MIS ALEA) Kognitio (Virtual Cubes/Pablo) Longview 7 Microsoft (MS SQL Server - Analysis Services) MIK (MIK OLAP) Paris Technologies (PowerOLAP) Simba Technologies (MDX Provider for Oracle OLAP) Software AG (Adabas) Teradata (Teradata RDBMS, Teradata OLAP Connector) ODBC, OLE DB, XMLA, XML, or any Web service/SOA with an open API. Customers Arcplan is employed globally by about 3,200 companies. Additionally, the software is used in many public institutions such as Swiss Post, the municipal data processing centre of the city of Frechen (KDVZ Frechen) and the central bank of Chile (Banco Central de Chile). See also Business discovery References Business intelligence Software companies established in 1993 Business intelligence companies Online analytical processing
1269256
https://en.wikipedia.org/wiki/Computer-aided%20technologies
Computer-aided technologies
Computer-aided technologies (CAx) is the use of computer technology to aid in the design, analysis, and manufacture of products. Advanced CAx tools merge many different aspects of product lifecycle management (PLM), including design, finite element analysis (FEA), manufacturing, production planning, and product data management. Computer-aided design (CAD) Computer-aided architectural design (CAAD) Computer-aided engineering (CAE) Computer-aided fixture design (CAFD) Computer-aided innovation (CAI) Computer-aided industrial design (CAID) Computer-aided manufacturing (CAM) Computer-aided process planning (CAPP) Computer-aided requirements capture (CAR) Computer-aided rule definition (CARD) Computer-aided rule execution (CARE) Computer-aided software engineering (CASE) Computer-aided automation (CAA) Computer-assisted surgery (CAS) Computer-aided surgical simulation (CASS) Computational fluid dynamics (CFD) Component information system (CIS) Computer-integrated manufacturing (CIM) Computer numerical control (CNC) Electronic design automation (EDA) Enterprise resource planning (ERP) Finite element analysis (FEA) Knowledge-based engineering (KBE) Knowledge lifecycle management (KLM) Manufacturing process management (MPM) Manufacturing process planning (MPP) Material requirements planning (MRP) Manufacturing resource planning (MRP II) Product data management (PDM) Product lifecycle management (PLM) Virtual prototyping See also List of CAx companies References External links LearnCAx Online Education in CAx Technologies Computer-aided engineering
140643
https://en.wikipedia.org/wiki/JOSS
JOSS
JOSS (acronym for JOHNNIAC Open Shop System) was one of the first interactive, time-sharing programming languages. It pioneered many features that would become common in languages from the 1960s into the 1980s, including use of line numbers as both editing instructions and targets for branches, statements predicated by boolean decisions, and a built-in source-code editor that can perform instructions in direct or immediate mode, what they termed a conversational user interface. JOSS was initially implemented on the JOHNNIAC machine at RAND Corporation and put online in 1963. It proved very popular, and the users quickly bogged the machine down. By 1964, a replacement was sought with higher performance. JOHNNIAC was retired in 1966 and replaced by a PDP-6, which ultimately grew to support hundreds of computer terminals based on the IBM Selectric. The terminals used green ink for user input and black for the computer's response. Any command that was not understood elicited an error response. The system was highly influential, spawning a variety of ports and offshoots. Some remained similar to the original, like TELCOMP and STRINGCOMP, CAL, CITRAN, ISIS, PIL/I, JEAN (ICT 1900 series), Algebraic Interpretive Dialogue (AID, on PDP-10); while others, such as FOCAL and MUMPS, developed in distinctive directions. It also bears a strong resemblance to the BASIC interpreters found on microcomputers in the 1980s, differing mainly in syntax details. History Initial idea In 1959, Willis Ware wrote a RAND memo on the topic of computing in which he stated future computers would have "a multiplicity of personal input-output stations, so that many people can interact with the machine at the same time." The memo gained the interest of the US Air Force, Rand's primary sponsors, and in 1960, they formed the Information Processor Project to explore this concept, what would soon be known as time-sharing. The project was not specifically about time-sharing, but aimed to improve human-computer interaction overall. The idea at the time was that constant interaction between the user and the computer in a back-and-forth manner would make such interactions more natural. A formal proposal to develop what became JOSS on the JOHNNIAC computer was accepted in March 1961. JOSS-1 JOSS was implemented almost entirely by J. Clifford Shaw, a mathematician who worked in Rand's growing computing division. It was written in a symbolic assembly language called EasyFox (E and F in the US military's then phonetic alphabet), also developed by Shaw. The JOSS system was brought up formally for the first time in May 1963, supporting five consoles, one in the machine room and another four in offices around the building. The early consoles were based on the IBM Model 868 Transmitting Typewriter, as the Selectric had not yet been introduced to market when development began. The first schedule was published on 17 June, with JOSS running for three hours from 9 am to 12 every day. It was declared fully operational on eight terminals in January 1964. The final version was deployed in January 1965. By this time the JOHNNIAC was already over a decade old, and its tube-based logic was never highly reliable to begin with. Even when it was working well, the system became so popular it quickly bogged down. 
Users were enthusiastic about the system. JOSS-2 switchover In May 1964, the decision was made to look for a new machine to replace the JOHNNIAC and dedicate it entirely to running an expanded version of JOSS. The machine would also have to support new terminals made to Rand's specifications, and be delivered by 31 October 1965. A total of nine bids were received for the new machine. Digital Equipment Corporation (DEC) won the contest with their new PDP-6 system, and Air Force funding was released for the purchase. DEC also agreed to build thirty terminals based on the IBM Selectric typewriter modified with a special mechanism to advance to the next page in a fan-fold paper feed. Several other portions of the overall system were delivered from other companies. The PDP-6 arrived in late July 1965, and was initially tested using Teletype Model 33s as terminals. The new version of the code was developed by Charles L. Baker, Joseph W. Smith, Irwin D. Greenwald, and G. Edward Bryan. The system was first declared operational in October, although this included six hours of scheduled maintenance per week. The first prototype terminal arrived in November. In December, a terminal in Las Vegas was connected to the machine remotely for the first time. In February 1966, this was sent to McClellan Air Force Base, followed by one in August to the Air Force Academy and two in September to ARPA and an Air Force office in the Pentagon. The first permanent offsite teletype connection for a Teletype Model 35 was installed at Langley Air Force Base in February 1967. With the new machine up and running, JOHNNIAC was taken offline on 11 February 1966, and officially retired on 18 February. Its last running program was written in JOSS and counted down the seconds until it would be turned off. The machine was sent to the Los Angeles County Museum, and eventually ended up at the Computer History Museum outside San Francisco. Having been replaced by the JOSS-2 system, the original was retroactively known as JOSS-1. JOSS-2 use By the end of 1966 the new JOSS-2 system was fully functional and turned over to JOSS use 24/7. The new machine offered about 30 times the computational speed, five times the storage space per user, and many new features in the language itself. The new platform could ultimately support up to 100 terminals in simultaneous use. The Air Force remained the owner of the system, with Rand and others operating as consultants. CPU time was billed out to external users at a rate of about $1 per minute, although that was only during actual execution; time spent typing and printing was free. By 1970, there were between 500 and 600 users at Rand and various Air Force sites across the country. Many of these were occasional users, precisely what the system had been intended for. To support them, the custom terminals were equipped with special "JOSS plugs" so they could be wheeled from office to office and plugged into custom outlets. Two hundred of the plugs were installed around Rand, and the terminal chassis was designed to be narrow enough to fit through doors while still leaving room for a coffee on one side. Toggling the power switch on the terminal caused it to connect at the new location. The custom terminals had been designed for JOSS as the engineers felt that the Model 33's "telecommunications functions are a mystery to the untrained person" and would be too confusing for the occasional users to bother learning. 
In practice, the Selectric-based mechanisms proved too fragile for the constant use they saw and were frequently being repaired in the field by IBM servicemen. In 1970, a full third of the terminals were completely rebuilt, and the output from the computer was slowed to improve reliability. The Air Force, by this time, had decided to use the Model 35 instead, as it was both readily available and much less expensive. Users were enthusiastic, and to serve them Rand began publishing The JOSS Newsletter, edited by Shirley Marks. Ultimately 44 issues were published between November 1967 and June 1971. JOSS-3 In the early 1970s, programmers at one of IBM's west coast facilities built a JOSS-3 for the IBM 370/158 running OS/360. Little information about this system exists, with the exception that it was on this machine that JOSS finally retired. According to a note in a historical overview, Rand was hesitant to allow IBM to use the JOSS name, and as a result "the IBM version was never widely used." Implementation JOSS-II on the PDP-6 was divided into two parts, matching the internal memory layout of the machine. The machine was equipped with two banks of core memory, with 16k 36-bit words in each bank. The JOSS system itself, which included the operating system, user and file management code, terminal handler and the interpreter, used up much of the first of these banks. The second bank was used for user programs. Although this was a relatively large amount of memory for the era, the system was so heavily used that the 16k word user store was not enough, and it was backed up with a magnetic drum for paging support. The drum was driven by external hardware and did not require attention from the main processor. In order to support multiple user programs, the PDP-6 hardware was modified to examine bit-20 of any address reference. If this bit was set, the address was in "user space" and was modified so that address zero pointed to the base address of that user. In this way, user programs could be moved about in memory without causing problems for the system, and the operating system could switch from user to user simply by changing a single internal register. Permanent storage was provided by a Storage Products hard drive with 5.7 million words of storage. Like the drum, the actual input/output to and from the drive was handled externally, although in this case the movement was triggered by user actions to load and store their programs. An IBM-compatible tape drive was used to move data to and from the drive as needed, an operation that was also independent of the CPU. Two DECtape units were also available and worked in the same fashion as the IBM drive. Terminals were handled through a custom "concentrator" that consisted of a mechanical Strowger switch that could connect any of the 300 to 400 possible terminal plugs to any of 40 outputs. A further eight lines were dedicated to Teletype Model 33 inputs, as opposed to the JOSS-style Selectrics. Those 48 lines were then connected to an electronic multiplexer connected to the CPU. The multiplexer was interrupt driven, meaning idle terminals did not use up any cycles. The custom JOSS terminals were built by DEC. These consisted of a Selectric mechanism built into a custom chassis containing the communications system, power supply, and other components. A custom font ball was used to provide basic mathematical symbols like ≠ so two-character combinations like <> did not have to be used. 
The communications system was based on a 6-bit character code in an 8-bit packet with start and stop bits. The lines were driven at 120 bit/s to match the maximum 15 character per second speed of the Selectric mechanism. Shift-in and shift-out codes were used to move the ribbon from green to black and back. These codes were also noticed by the communications system and changed lights on the front panel to indicate whether the computer or the user had control of the terminal at that moment. Language Direct and indirect mode JOSS introduced the idea of a single command line editor that worked both as an interactive language and a program editor. Commands that were typed without a line number were executed immediately, in what JOSS referred to as "direct mode". If the same line was prefixed with a line number, it was instead copied into the program code storage area, which JOSS called "indirect mode". A typed line was added to the program if its line number was unique, replaced an extant line with the same number, or deleted that line if an extant line number was typed in with no code following it. In contrast to most BASICs, JOSS saved the entire user input to files, not just the program code. When loaded, JOSS essentially typed the lines back in. This meant that program files could contain both program statements and direct mode instructions. For instance, it was common to see programs that listed the statement lines and then had the file end with a direct-mode command to immediately run the program as soon as it completed loading. Some features could only be used in direct mode; these were input without line numbers but were still stored in the file and returned when the program was loaded. Direct and indirect instructions could be mixed freely in a workspace. This allowed comments to be inserted in source code by adding direct-mode lines beginning with an asterisk, or by placing one at the end of a line of code and then adding a comment after it. Blank lines were also ignored, allowing the program to be broken up for clarity. Program statements Every line in a JOSS program must start with a line number. Line numbers are fixed-point numbers consisting of two two-digit integers separated by a period. As in BASIC, line numbers are used both as labels to target from To and Do statements, and to support editing. Entering a line of code with a new line number inserts it into a program, while entering one with an extant line number replaces the prior version or deletes it if it is empty. The portion of the line number to the left of the period is termed the page or part, while the portion to the right is termed the line. Thus the line number 10.12 refers to page 10, line 12. Branches can target either a page, or a line within a page. When the latter format is used, the combined page and line is termed a step. Pages are used to define subroutines, which return when the next line is on a different page. For instance, if a subroutine for calculating the square root of a number is in page 3, one might have three lines of code 3.1, 3.2 and 3.3, and it would be called using Do part 3. The code would return to the statement after the Do when it reaches the next line on a different page, for instance, 4.1. No need exists for the equivalent of a BASIC RETURN at the end, although if an early return is required, Done accomplishes this. Every line must start with a command keyword following the line number. No concept of a default command exists as is the case in BASIC with its optional LET statement. Multiple statements can be placed on one line, separated by colons or semicolons. 
Every line must end with a period. Loops and conditions JOSS uses a suffix notation to indicate conditional evaluation, "do this if this is true", in contrast to most languages which place the condition in front in prefix notation, "if this is true, do this". Like BASIC, but unlike FORTRAN or FOCAL, any statement can be conditionally evaluated in this fashion. For example, to print a string only if a condition is met, one can combine the if with a Type: 1.1 Type "Hello, World!" if X=5. JOSS supported six infix comparisons, =, ≠, ≥, ≤, > and <, and the boolean operators and, or and not. Loops were handled in a similar fashion, using the for command and a somewhat obscure start(step)limit format for specifying the loop bounds and step value. For instance, to step from 1 to 10 by 2, the format is 1(2)10. Like if, for could be applied to any other statement: 1.2 Type "Hello, Wikipedia!" for i=1(2)10. Note that the for applies only to a single statement; if one wants to run multiple statements in a loop, they would be separated into another part and called using do: 1.3 Do part 5 for i=1(1)100. 5.1 Type "Hello, Wikipedia!". 5.2 Type "This is JOSS.". As in BASIC, any of the inputs to the for loop could be constants, variables or other expressions. Expressions and propositions A unique feature of JOSS was its handling of logical expressions. Most computer languages offer some way to form a multi-part mathematical expression and assign its result to a variable, for instance setting the variable x to the value 9. JOSS expanded on this concept by clearly defining the concept of the "proposition", an expression that returns a logical value, true or false, instead of a numeric one. They were mostly seen in if clauses, as in the examples above, but the boolean value could also be stored in a variable directly, or one could convert true to 1 and false to 0 using the truth-value function. In addition to propositions, JOSS also had the concept of "conditional expressions". These consisted of strings of propositions along with code that would run if that proposition was true. This allowed multi-step decision trees to be written in a single line. They serve a purpose similar to the ternary operator found in modern languages like C or Java, where they are used to return a value from a compact structure implementing if-then-else. JOSS' version has any number of conditionals, not just three, so it is more of a compact switch statement than a compact if-then. This example recreates the sign function: Let s(x)=[x=0:0; x>0:1; x<0:-1]. This defines a function "s" which takes a single parameter, "x", and makes three consecutive tests against it. Whichever test succeeds first returns the corresponding value after the colon. Ranges Another advanced feature of JOSS was that it had the concept of a range as a built-in type that could be stored and manipulated. Ranges are normally found as the inputs to for loops, where they are part of the syntax of the loop itself. For instance, in BASIC one writes a for loop using syntax along the lines of FOR I = 1 TO 5 STEP 2. This will perform a loop that assigns I the values 1, 3, 5 and then exits. JOSS used a somewhat obscure format to define the limits of the loop. The equivalent code in JOSS would be: I=1(2)5 Note that this code does not include a for. That is because in JOSS, ranges are first-class citizens of the language, not something that is part of a loop as in BASIC. Ranges can be defined and used in many contexts outside loops. For example, here is a conditional expression that evaluates the factorial of a parameter x: Let f(x)=[x=0:1 ; fp(x)=0:prod(i=1(1)x:i)]. 
In this example, there are two main propositions, separated by the semicolon. The first, on the left, states that if the parameter x is 0, the expression should immediately return 1. If that proposition fails, it moves to the second test, on the right. This one checks if the fractional part of x is zero (i.e., it is an integer), and if so, it calls the prod function to multiply a range of values. The prod function's parameter is also a proposition, but in this case, the test is replaced by a loop iterator that runs from 1 to x stepping by 1. If that proposition is true, which it is for all values of the loop, it returns the index i. This causes a series of values from 1 to x to be sent into prod. The combination of these features allows for complex functions to be written in a few commands. This sort of functionality has only become common in much more modern languages, which typically use iterators or a map function to provide the same outcomes. JOSS's capability to combine decisions and loops in a compact form is unknown in other languages of the era, including offshoots like FOCAL. Commands Set The command assigns the result of an expression to the specified variable. Equivalent to BASIC's LET. 01.30 Set p=3.14156. 01.60 Set i=l*(r/100)*t. Set was optional when used in direct mode, where one could type an assignment without the Set command. This was not allowed in indirect mode, in contrast to BASIC. Let was used to define user-defined functions, the equivalent of BASIC's DEF FN. Let t(x)=sin(x)/cos(x). Set j=t(1.1). Type j. Let can also be used to set the value of a variable using a formula consisting of a constant: Let x=5. From that point, it can be used identically to one created using Set. There is a subtle difference, however: when this x is referenced in code, the value will be calculated by evaluating the right-hand side, whereas a Set is only evaluated once, so it is much faster. The system generally suggested using Let only in direct mode, saving the definitions out for use in a program by inserting them at the top or bottom of the file. This avoided the Let being called multiple times during execution, as it would only be called once during the loading process. Demand The command takes a list of variables and stores the user input in those variables. The optional as qualifier added a custom prompt. Equivalent to BASIC's INPUT. 01.01 Type "What is your age?". 01.02 Demand A. 01.03 Type "You are", A. 01.04 Demand H as "What is your height?". 01.05 Type H,"? That tall?". Type The command outputs one or more items separated by commas. In its basic form it is equivalent to BASIC's PRINT. However, it includes a number of optional forms that make it highly overloaded, performing a range of unrelated output tasks. When used to print values, the parameters can be variables, literal strings surrounded by double-quotes, and a special character that produces a line feed. Type also supports formatted output using format strings, as can be seen in the sample program below. Type is also used as the equivalent of BASIC's LIST statement, writing out the program: one form prints out a single line of code, while others print out an entire part or the entire program. Further, it can also be used to print lists of internal values, such as a list of all variables and their values, or the size of the program. Page Page triggers a page feed on the special JOSS terminals. JOSS would normally send a page feed when the terminal reached line 54 on the paper, so if one wanted to ensure a block of data would not be split in half, one could: 1.10 Page if $>44. 
$ is a pseudo-variable that returns the current line number. Line triggers a line feed on the special JOSS terminals. To The command jumps program execution to the specified part or step number, using To part or To step respectively. It is the equivalent of BASIC's GOTO. Contrast with Go, a direct command used from the command line that starts programs, the equivalent of BASIC's RUN. 01.01 Demand A as "TYPE A NUMBER". 01.05 To step 1.01. 01.10 To part 1. Do Do is similar to To, but branches to a subroutine. As with To, you can Do part or Do step. If a step is provided, that single line is run and then returns to the statement after the Do. If a part is provided, execution starts at the first line of the block and continues until the end of the block is reached or a Done statement is encountered. 01.15 Do step 7.24. 01.16 Do part 8. Do had one special short form for looping, in keeping with it being very common in most programs. This used the times modifier instead of a normal for in the case of simple loops, so the following lines are equivalent: Do part 1 for i=1(1)5. Do part 1, 5 times. JOSS maintains a pointer to the currently executing line, which a Do would change. However, it included a special "parenthetic do" that could be used in direct mode to test certain sections of the code without changing the main pointer. For instance, if the program stopped due to an error and it was not clear which section caused the problem, one might test a particular subroutine with: (Do part 2.) Done The command returns from a subroutine call. As subroutines return automatically when the end of the part is reached, Done is only required for returning early, and is often used with a conditional. Equivalent to BASIC's RETURN. *Routine to ask the user for a positive value and repeat until it gets one 01.10 Demand X as "Enter a positive value greater than zero". 01.20 Done if X>0. 01.30 To step 1.1 Stop The command terminates execution of the program and returns control to the editing environment. Equivalent to BASIC's END or STOP, although BASIC's STOP is intended to allow CONT to pick up execution at the same location, a feature that has no direct equivalent in JOSS's workspace-oriented system. 01.10 Type X. 01.20 Stop. Go Available in direct mode only, Go is the equivalent of BASIC's RUN and CONT, depending on whether a program is currently stopped due to an error or a Stop command being encountered. Cancel Another direct-mode-only command, Cancel is used when the program has stopped for an error and the user wants to reset the program, which it does by clearing the program counter. A Go would pick up at the last location, but issuing a Cancel makes Go start at the top again. If the current breakpoint was due to a parenthetical Do, one can issue a parenthetical cancel to stop just that sub-execution and allow a Go to continue at the last non-parenthetical line. Math JOSS stored all numbers as radix-10 floating point. JOSS contained six mathematical operators: + for addition, - for subtraction, · for multiplication (the interpunct, not the period), / for division, an operator for exponents, and paired vertical bars for absolute value, with an expression in the middle. Mathematical expressions could use () and [] interchangeably in matched pairs to establish precedence. For instance: 1.30 Set A=|-10*[5+1]*(1+5)|. Would produce 360. 
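The propositions, conditional expressions and ranges described in this section map fairly directly onto constructs in modern languages. The following Python sketch is an illustrative translation of the sign and factorial definitions given earlier in the article; it is not JOSS code, and the error raised for non-integer arguments is an assumption, since the original definition simply leaves that case undefined.

```python
# Illustrative Python translation of the JOSS examples above; a modern rendering,
# not JOSS syntax or semantics.
from math import prod

def s(x):
    # JOSS: Let s(x)=[x=0:0; x>0:1; x<0:-1].
    # A conditional expression: the first true proposition supplies the value.
    return 0 if x == 0 else (1 if x > 0 else -1)

def f(x):
    # JOSS: Let f(x)=[x=0:1 ; fp(x)=0:prod(i=1(1)x:i)].
    # fp(x)=0 tests that x has no fractional part; prod multiplies over the
    # range i = 1(1)x, i.e. 1, 2, ..., x.
    if x == 0:
        return 1
    if float(x).is_integer():
        return prod(range(1, int(x) + 1))
    # The JOSS original leaves this case undefined; raising here is an assumption.
    raise ValueError("f is defined only for non-negative integers")

print(s(-7), f(5))   # -> -1 120
```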
Functions The language contained a set of built-in mathematical functions: the sine of an angle given in radians; the cosine; a function that takes the X and Y values of a point and returns the angle between that point and the x-axis; the Naperian (natural) logarithm; the natural base raised to the power of the argument; the square root; the sign of the argument (0 returns 0, negative values -1, positive values +1); the integer part of the argument (for 22026.4658, this returns 22026); the fractional part (.4658); the digit part (2.20264658); and the exponent part (4, the location of the decimal). A boolean truth-value function returns 1 if the expression is true and 0 otherwise. Looping functions JOSS also defined a number of functions that performed looping internally, in order to avoid the programmer having to write a loop to perform simple tasks like summing a series of numbers. The parameters could be either a list of simple expressions separated by commas, or a range construct whose command could be any statement or function call. These functions returned the maximum value of the provided expressions, the minimum, the sum, the product (prod, as used in the factorial example above), or the first item in the list that matches a condition (returning that result). File management The JOSS system used a hard disk to store user programs in an allocated space. Within that space were a number of files that were referred to using a multi-part filename consisting of an integer and a five-letter name in parentheses. Files were saved and loaded with dedicated commands, and one could also read or save only certain parts of the workspace, for instance to save a single routine to a file. Additionally, one could set a default file to which all file references were assumed to refer, and files could be deleted. Sample program 1.1 Demand p,q. 1.2 Stop if q<0 or r(q,2)=0. 1.3 Set a=1. 1.4 Do part 2 while q>1 and a≠0. 1.5 Type a in form 3. 1.6 Stop. 2.1 Do part 20. 2.1 Do part 11 if p<0. 2.2 Do part 12 if p≥q. 2.3 Do part 13 if p=0. 2.4 Done if a=0. 2.5 Set p=p/4 while r(p,4)=0. 2.6 Do part 14 if r(p,2)=0. 2.7 Do part 20. 2.8 Set a=-a if r(p,4)=r(q,4)=3. 2.9 Set s=p, p=q, q=s. 2.95 Do part 20. 11.1 Set a=-a if r(q,4)=3. 11.2 Set p=|p|. 11.3 Do part 20. 12.1 Set p=r(p,q). 12.2 Do part 20. 13.1 Set a=0, p=1, q=1. 14.1 Set a=-a if r(q,8)=3 or r(q,8)=5. 14.2 Set p=p/2. 20.1 Type p, q in form 1 if a=1. 20.2 Type p, q in form 2 if a=-1. Form 1: " L(%.0f,%.0f) =" Form 2: " -L(%.0f,%.0f) =" Form 3: " %.0f\n" Note that this is not an original sample, but rather one from a modern simulator; there are some syntactic differences from the original JOSS language. Notes References Citations Sources Further reading "JOSS Users' Reference Manual", R.L. Clark, Report R-1555/9, RAND Corp (Jan 1975) Oral history interview with Keith W. Uncapher, Charles Babbage Institute, University of Minnesota. Review of projects at RAND Corporation when Keith Uncapher was hired in 1950 through the early 1970s, such as JOHNNIAC, JOSS, a survivable national network, and some work related to the ARPANET. Programming languages created in 1963
162049
https://en.wikipedia.org/wiki/ACL2
ACL2
ACL2 ("A Computational Logic for Applicative Common Lisp") is a software system consisting of a programming language, an extensible theory in a first-order logic, and an automated theorem prover. ACL2 is designed to support automated reasoning in inductive logical theories, mostly for the purpose of software and hardware verification. The input language and implementation of ACL2 are written in Common Lisp. ACL2 is free and open-source software. Overview The ACL2 programming language is an applicative (side-effect free) variant of Common Lisp. ACL2 is untyped. All ACL2 functions are total — that is, every function maps each object in the ACL2 universe to another object in its universe. ACL2's base theory axiomatizes the semantics of its programming language and its built-in functions. User definitions in the programming language that satisfy a definitional principle extend the theory in a way that maintains the theory's logical consistency. The core of ACL2's theorem prover is based on term rewriting, and this core is extensible in that user-discovered theorems can be used as ad hoc proof techniques for subsequent conjectures. ACL2 is intended to be an "industrial strength" version of the Boyer–Moore theorem prover, NQTHM. Toward this goal, ACL2 has many features to support clean engineering of interesting mathematical and computational theories. ACL2 also derives efficiency from being built on Common Lisp; for example, the same specification that is the basis for inductive verification can be compiled and run natively. In 2005, the authors of the Boyer-Moore family of provers, which includes ACL2, received the ACM Software System Award "for pioneering and engineering a most effective theorem prover (...) as a formal methods tool for verifying safety-critical hardware and software." Proofs ACL2 has had numerous industrial applications. In 1995, J Strother Moore, Matt Kaufmann and Tom Lynch used ACL2 to prove the correctness of the floating point division operation of the AMD K5 microprocessor in the wake of the Pentium FDIV bug. The interesting applications page of the ACL2 documentation has a summary of some uses of the system. Industrial users of ACL2 include AMD, Arm, Centaur Technology, IBM, Intel, Oracle, and Collins Aerospace. References External links ACL2 website ACL2s - ACL2 Sedan - An Eclipse-based interface developed by Peter Dillinger and Pete Manolios that includes powerful features to provide users with more automation and support for specifying conjectures and proving theorems with ACL2. Lisp (programming language) Common Lisp (programming language) software Proof assistants Free theorem provers Lisp programming language family Software using the BSD license
2638324
https://en.wikipedia.org/wiki/Fatal%20exception%20error
Fatal exception error
In computing, a fatal exception error or fatal error is an error that causes a program to abort and may therefore return the user to the operating system. When this happens, data that the program was processing may be lost. A fatal error is usually distinguished from a fatal system error (colloquially referred to in the MS Windows operating systems by the error message it produces as a "blue screen of death"). A fatal error occurs typically in any of the following cases: An illegal instruction has been attempted Invalid data or code has been accessed An operation is not allowed in the current ring or CPU mode A program attempts to divide by zero (only for integers; with the IEEE floating point standard, this creates an infinity instead). In some systems, such as macOS and Microsoft Windows, a fatal error causes the operating system to create a log entry or to save an image (core dump) of the process. References External links What Are Fatal Exception Errors – Microsoft Knowledge Base Computer errors
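The distinction drawn above between integer division by zero and IEEE 754 floating-point division by zero can be illustrated with a short, hedged Python sketch. NumPy is used for the floating-point case because it follows IEEE 754 semantics and yields an infinity; plain Python raises an exception for both operand types, so the library choice is an assumption made for the illustration.

```python
# Illustrative sketch of the integer vs IEEE-754 floating-point behaviour described
# above. Plain Python raises ZeroDivisionError in both cases, so NumPy (which
# implements IEEE 754 semantics) is used to show the "infinity" result.
import numpy as np

try:
    _ = 1 // 0                                   # integer division by zero: an error
except ZeroDivisionError as exc:
    print("integer division by zero raises:", exc)

with np.errstate(divide="ignore"):               # suppress the runtime warning
    result = np.float64(1.0) / np.float64(0.0)   # IEEE 754: returns +inf, no error
print("IEEE-754 float division by zero yields:", result)
```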
22894063
https://en.wikipedia.org/wiki/Lubuntu
Lubuntu
Lubuntu is a lightweight Linux distribution based on Ubuntu that uses the LXQt desktop environment in place of Ubuntu's GNOME desktop. Lubuntu was originally touted as being "lighter, less resource hungry and more energy-efficient", but now aims to be "a functional yet modular distribution focused on getting out of the way and letting users use their computer". Lubuntu originally used the LXDE desktop, but moved to the LXQt desktop with the release of Lubuntu 18.10 in October 2018, owing to the slow development of LXDE, the waning support for GTK 2, and the more active and stable development of LXQt, which avoids GNOME dependencies. The name Lubuntu is a portmanteau of LXQt and Ubuntu. The LXQt name derives from the merger of the LXDE and Razor-qt projects, while the word Ubuntu means "humanity towards others" in the Zulu and Xhosa languages. Lubuntu received official recognition as a formal member of the Ubuntu family on 11 May 2011, commencing with Lubuntu 11.10, which was released on 13 October 2011. History The LXDE desktop was first made available for Ubuntu in October 2008, with the release of Ubuntu 8.10 Intrepid Ibex. These early versions of Lubuntu, including 8.10, 9.04 and 9.10, were not available as separate ISO image downloads, and could only be installed on Ubuntu as separate lubuntu-desktop packages from the Ubuntu repositories. LXDE can also be retroactively installed in earlier Ubuntu versions. In February 2009, Mark Shuttleworth invited the LXDE project to become a self-maintained project within the Ubuntu community, with the aim of leading to a dedicated new official Ubuntu derivative to be called Lubuntu. In March 2009, the Lubuntu project was started on Launchpad by Mario Behling, including an early project logo. The project also established an official Ubuntu wiki project page, which includes listings of applications, packages, and components. In August 2009, the first test ISO was released as a Live CD, with no installation option. Initial testing in September 2009 by Linux Magazine reviewer Christopher Smart showed that Lubuntu's RAM usage was about half of that of Xubuntu and Ubuntu on a normal installation and desktop use, and two-thirds less on live CD use. In 2014, the project announced that the GTK+-based LXDE and Qt-based Razor-qt would be merging into the new Qt-based LXQt desktop and that Lubuntu would consequently be moving to LXQt. The transition was completed with the release of Lubuntu 18.10 in October 2018, the first regular release to employ the LXQt desktop. Lenny became Lubuntu's mascot in 2014. During the 2018 transition to becoming LXQt-based, the aim of Lubuntu was re-thought by the development team. It had previously been intended for users with older computers, typically ten years old or newer, but with the introduction of Windows Vista-era PCs, older computers gained faster processors and much more RAM, and by 2018 ten-year-old computers remained much more capable than had been the case five years earlier. As a result, the Lubuntu development team, under Simon Quigley, decided to change the focus to emphasize a well-documented distribution, based on LXQt "to give users a functional yet modular experience", that is lightweight by default and available in any language. The developers also decided to stop recommending minimum system requirements after the 18.04 LTS release. Developer Simon Quigley announced in August 2018 that Lubuntu 20.10 would switch to the Wayland display server protocol by default. 
In January 2019, the developers formed the Lubuntu Council, a new body to formalize their previous organization, with its own written constitution. Releases Lubuntu 10.04 On 30 December 2009 the first Alpha 1 "Preview" version ISO for Lubuntu 10.04 Lucid Lynx was made available for testing, with Alpha 2 following on 24 January 2010. The first Beta was released on 20 March 2010 and the stable version of Lubuntu 10.04 was released on 2 May 2010, four days behind the main Ubuntu release date of 28 April 2010. Lubuntu 10.04 was only released as a 32-bit ISO file, but users could install a 64-bit version through the 64-bit Mini ISO and then install the required packages. Lubuntu 10.04 was not intended to be a long-term support (LTS) release, unlike Ubuntu 10.04 Lucid Lynx, and was only going to be supported for 18 months. However, since the infrastructure of Ubuntu 10.10 Maverick Meerkat (and thus Lubuntu 10.10) dropped support for i586 processors, including VIA C3, AMD K6, and AMD Geode/National Semiconductor CPUs, the release of Lubuntu 10.10 prompted the community to extend support until April 2013 for Lubuntu 10.04, as if it were a long term support version. In reviewing Lubuntu 10.04 Alpha 1 in January 2010, Joey Sneddon of OMG Ubuntu wrote, "Not having had many preconceptions regarding LXDE/Lubuntu i found myself presently surprised. It was pleasant to look at, pleasant to use and although I doubt I would switch from GNOME to LXDE, it can give excellent performance to those who would benefit from doing so." In writing about the final 10.10 release, on 10 October 2010 Sneddon termed it "A nimble and easy-to-use desktop". Writing about Lubuntu 10.04 in May 2010 Damien Oh of Make Tech Easier said: "If you are looking for a lightweight alternative to install in your old PC or netbook, Lubuntu is a great choice. You won’t get any eye candy or special graphical effects, but what you get is fast speed at a low cost. It’s time to put your old PC back to work." Also reviewing Lubuntu 10.04 in May 2010 Robin Catling of Full Circle magazine said: "The first thing that impresses on running Lubuntu on my modest Compaq Evo laptop (Pentium-M, 512 MB RAM) is the small memory footprint... It beats Karmic on Gnome, and Xubuntu on Xfce, by a mile. The Evo used to take 60 seconds-plus to boot to the desktop, LXDE takes exactly 30. Yet you're not restricted; gtk2 applications are well supported, and Synaptic hooks up to the Ubuntu repositories for package management (so you can pull down Open Office to replace the default Abi-Word without crippling the machine)." Catling did note of the file manager, "The PCManFM file manager needs a little more maturity to compete with Thunar, but it's a competent and robust application that doesn't hog resources like Nautilus or Dolphin." In June 2010 Jim Lynch reviewed Lubuntu 10.04, saying, "One thing you’ll notice about using the Lubuntu desktop is that it’s fast. Very, very fast. Even on an underpowered machine, Lubuntu should perform very well. It’s one of the best things about this distro; it leaves behind the bloated eye candy that can sometimes bog down GNOME and KDE... I didn’t run into any noticeable problems with Lubuntu. It was very fast and stable, and I didn’t see noticeable bugs or problems. I hate it when this happens since it’s so much more interesting for my readers when I run into one nasty problem or another. Hopefully the next version of Lubuntu will be chock full of horrendous problems and bugs. Just kidding." 
In September 2010 lead developer Julien Lavergne announced that the Lubuntu project had not been granted official status as a derivative of Ubuntu as part of the Ubuntu 10.04 release cycle, but that work would continue on this goal for Ubuntu 10.10. Lavergne explained the reasons as "there is still a resource problem on Canonical /Ubuntu infrastructure, which was not resolved during this cycle. Also, they are writing a real process to integrate new member in the Ubuntu family, but it’s still not finished." Lubuntu 10.10 Lubuntu 10.10 was released on schedule on 10 October 2010, the same day as Ubuntu 10.10 Maverick Meerkat, but it was not built with the same underlying infrastructure as Ubuntu 10.10. Developer Julien Lavergne said about it, "Lubuntu is actually not part of the Ubuntu family, and not build with the current Ubuntu infrastructure. This release is considered as a "stable beta", a result that could be a final and stable release if we was included in the Ubuntu family." Version 10.10 introduced new artwork to the distribution, including new panel and menu backgrounds, a new Openbox theme, new Lubuntu menu logo, splash images and desktop wallpaper. Lubuntu 10.10 was not accepted as an official Ubuntu derivative at this release point due to "a lack of integration with the infrastructure Canonical and Ubuntu" but work is continuing towards that goal. Lubuntu 10.10 was only released as a 32-bit ISO file, but users could install a 64-bit version through the 64-bit Mini ISO and then install the required packages. Developer Julien Lavergne wrote that while 10.10 incorporated many changes over 10.04, not all of the changes were considered improvements. The improvements included a new theme designed by Rafael Laguna, the incorporation of xpad for note taking, Ace-of-Penguins games, LXTask the LXDE task manager in place of the Xfce application, replacing the epdfview PDF reader with Evince due to a memory leak problem and removing pyneighborhood. The minuses included a last-minute rewrite of the installer to integrate it properly, which resulted in some installation instability and the raising of the minimum installation RAM from 180 MB to 256 MB. The other issue was the incorporation of the Ubuntu Update Manager which increased RAM usage by 10 MB. Lubuntu 10.04 had no indication of updates being available, so this was deemed necessary. The minimum system requirements for Lubuntu 10.10 were described by Mario Behling as "comparable to Pentium II or Celeron systems with a 128 MB RAM configuration, which may yield a slow yet usable system with lubuntu." Chief developer Julien Lavergne stated that the minimum RAM to install Lubuntu 10.10 is 256 MB. In reviewing Lubuntu 10.10 right after its release in October 2010, Jim Lynch of Eye on Linux said "Lubuntu’s biggest appeal for me is its speed; and it’s no disappointment in that area. Applications load and open quickly, and my overall experience with Lubuntu was quite positive. I detected no stability problems, Lubuntu 10.10 was quite solid and reliable the entire time I used it." Lynch did fault the choice of Synaptic as the package manager: "One of the strange things about Lubuntu is that it only offers Synaptic as its package manager. Xubuntu 10.10, on the other hand, offers the Ubuntu Software Center as well as Synaptic. I’m not sure why the Ubuntu Software Center is missing from Lubuntu; it would make a lot of sense to include it since it is a much easier and more attractive way to manage software. 
Synaptic gets the job done, but it’s less friendly to new users and can’t match the Ubuntu Software Center in terms of usability and comfort." By mid-December 2010, Lubuntu had risen to 11th place on DistroWatch's six-month list of most popular Linux distributions out of 319 distributions, right behind Puppy Linux and well ahead of Xubuntu, which was in 36th place. In reviewing Linux distribution rankings for DistroWatch in early January 2011 for the year 2010 versus 2009, Ladislav Bodnár noted, "Looking through the tables, an interesting thing is the rise of distributions that use the lightweight, but full-featured LXDE desktop or the Openbox window manager. As an example, Lubuntu now comfortably beats Kubuntu in terms of page hits..." Lubuntu 11.04 The project announced the development schedule in November 2010 and Lubuntu 11.04 was released on time on 28 April 2011. Lubuntu 11.04 was only released as a 32-bit ISO file, but users could install a 64-bit version through the 64-bit Mini ISO and then install the required packages. An unofficial 64-bit ISO of 11.04 was also released by Kendall Weaver of Peppermint OS. Improvements in Lubuntu 11.04 included replacing Aqualung with Audacious as the default music player, elimination of the hardware abstraction layer, introducing movable desktop icons, the Ubuntu font being used by default, improved menu translations and reorganized menus. The release also introduced a new default theme and artwork designed by Raphael Laguna, known as Ozone, which is partly based on Xubuntu’s default Bluebird theme. Lubuntu 11.04 can be run with as little as 128 MB of RAM, but requires 256 MB of RAM to install using the graphical installer. While Lubuntu 11.04 had not completed the process for official status as a member of the Ubuntu family, Mario Behling stated: "The next goals of the project are clear. Apart from constantly improving the distribution, the lubuntu project aims to become an official flavour of Ubuntu." Mark Shuttleworth remarked to the Lubuntu developers upon the release of 11.04: In reviewing Lubuntu 11.04 just after its release, Joey Sneddon of OMG Ubuntu commented on its look: "Lubuntu’s 'traditional' interface will be of comfort to those agitated by the interface revolution heralded in GNOME 3 and Ubuntu Unity; it certainly won’t appeal to 'bling' fans! But that’s not to say attention hasn’t been paid to the appearance. The new default theme by Raphael Laguna and the use of the Ubuntu font helps to give the sometimes-basic-feeling OS a distinctly professional look." On the subject of official status Sneddon said, "Lubuntu has long sought official sanction from the Ubuntu Project family to be classed as an official 'derivative' of Ubuntu, earning a place alongside Kubuntu and Xubuntu. With such an accomplished release as Lubuntu 11.04 the hold out on acceptance remains disappointing if expected." In a review on 12 May 2011 Jim Lynch of Desktop Linux Reviews faulted 11.04 for not using the Ubuntu Software Center, the lack of alternative wallpapers and the use of AbiWord in place of LibreOffice. He did praise Lubuntu, saying: "speed is one of the nice things about Lubuntu; even on a slow or older system it’s usually quite fast. It’s amazing what you can achieve when you cut out the unnecessary eye-candy and bloat." Also on 12 May 2011, Koen Vervloesem writing in Linux User & Developer criticized the applications bundled with Lubuntu, saying "Some of the software choices are rather odd, however. 
For instance, Chromium is the default web browser, which is a sensible move for a distro aimed at low-end computers, but the developers also ship Firefox, so Lubuntu shows both web browsers in the Internet menu. Also, the default screenshot program is scrot, but this is a command-line program and it is not shown in the Accessories menu, so not everyone will find it. Another odd choice is that you install your applications with Synaptic: by default Lubuntu doesn’t have the Ubuntu Software Center, which has been the preferred software installation program in Ubuntu for a good few releases now. These are just minor inconveniences, though, since you get access to the full Ubuntu software repositories, meaning you can install your favourite applications in a blink of the eye." One month after its release, Lubuntu 11.04 had risen to ninth place on the DistroWatch 30-day list of most popular distributions. Lubuntu 11.10 Lubuntu 11.10 was the first version of Lubuntu with official sanction as a member of the Ubuntu family. As part of this status change Lubuntu 11.10 used the latest Ubuntu infrastructure and the ISO files were hosted by Ubuntu. The release did not include many new features as work focused on integration with Ubuntu instead. 11.10 was released on 13 October 2011, the same day that Ubuntu 11.10 was released. In September 2011 it was announced that work on a Lubuntu Software Center was progressing. The Ubuntu Software Center is too resource intensive for Lubuntu and so Lubuntu has been using the less user-friendly Synaptic package manager in recent releases. The development of a new lightweight application manager for Lubuntu is intended to rectify this problem, although users can, of course, install the Ubuntu Software Center using Synaptic. Changes in Lubuntu 11.10 include that it was built with the Ubuntu official build system using the current packages by default, alternative install and 64-bit ISOs were provided, use of xfce4-power-manager, a new microblog client, pidgin-microblog and a new theme by Rafael Laguna. Lubuntu 11.10 requires a minimum of 128 MB of RAM to run and 256 MB of RAM to install with the graphic installer. The recommended minimum RAM to run a live CD session is 384 MB. The Lubuntu 11.10 ISO file contains a known issue that causes it to fail to load a live CD session on some hardware, instead loading to a command prompt. Users are required to enter sudo start lxdm at the prompt to run the live CD session. In a review of Lubuntu 11.10 on PC Mech, writer Rich Menga described it as "simple, rock-solid, reliable, trustworthy". He added "Ubuntu at this point is suffering from major bloat on the interface side of things, and you can even say that about Xubuntu at this point – but not Lubuntu, as it gets back to what a great Linux distro should be." By the end of October 2011 Lubuntu had risen to seventh place on the DistroWatch one month popularity list. In a review in Linux User and Developer in November 2011, Russell Barnes praised Lubuntu 11.10 for its low system hardware requirements, for providing an alternative to GNOME and KDE, saying that its "aesthetic appeal and functionality is minimally compromised in its effort to be as sleek and light as possible". Barnes noted that Mark Shuttleworth may have been wise to offer full status to Lubuntu for this release given the "fuss and bluster surrounding Unity". Of the aesthetics he stated "the now trademark pale blue of the desktop is almost hypnotic. 
It’s incredibly clean, clear and logically laid out – a user experience a million miles away from that of Ubuntu 11.10’s Unity or GNOME Shell counterparts. In comparison there’s an almost cleansing nature about its simplicity." Barnes rated it as 4/5 and concluded "While it’s not as flexible or pretty as [GNOME 2], Lubuntu 11.10 has certainly got everything you need to keep your computer happy and your desktop clean and clutter-free" Igor Ljubuncic in Dedoimedo said about Lubuntu 11.10, "Lubuntu is meant to offer a valid alternative to the heavier KDE and Unity flavors. It tries bravely and fails heroically. The only advantage is the somewhat reduced system resource usage, but it is more than triply negatively compensated by the drawbacks of the desktop environment as well as the incomplete integration. Then, there you have Samba-related crashes, no laptop hotkeys, jumbled system tray icons, low battery life. If you want to be really mean, you could add the lack of customization, an average software arsenal, and a dozen other smaller things that get in the way... All in all, Lubuntu could work for you, but it's not exciting or spectacular in any way and packages a handsome bag of problems that you can easily avoid by using the main release... I would not recommend this edition... Grade: 6/10." Lubuntu 12.04 Lubuntu 12.04 was released on 26 April 2012. Planning for this release took place at the Ubuntu Developer Summit held in early November 2011. Changes planned at that time for the release included the use of LightDM as the X display manager and of Blueman instead of gnome-bluetooth for managing bluetooth devices. The Lubuntu Software Center was added with this release to provide a more user-friendly graphical interface for managing applications. Synaptic package manager is still installed by default and allows users to manage all packages in the system. GDebi allows the installation of downloaded .deb packages. Lubuntu 12.04 was released with the Linux v3.2.14 Linux kernel and also introduced a large number of bug fixes, particularly in the LX panel and in the PCManFM file manager. The Ubuntu Backports repository was enabled by default, meaning backport packages were not installed by default, but once installed were automatically upgraded to newer versions. Lubuntu 12.10 Lubuntu 12.10 was released on 18 October 2012 and includes a new version of the session manager, with more customization and integration options. It also includes a new version of the PCMan File Manager, with external thumbnail support. This version has new artwork, including a new wallpaper, a new icon set entitled Box and adjusted GTK themes. The notification-daemon has been replaced by xfce4-notifyd on the default installation. Previous versions of Lubuntu did not have a GUI search function and so the Catfish search utility was added to the default installation. This version of Lubuntu uses the Linux kernel 3.5.5, Python 3.2 and OpenJDK7 as the default Java implementation. The installation requires a CPU with Physical Address Extensions (PAE), which indicates an Intel Pentium Pro and newer CPU, except most 400 MHz-bus versions of the Pentium M. In the case of PowerPCs, it was tested on a PowerPC G4 running at 867 MHz with 640 MB RAM and will also run on all Intel-based Apple Macs. There is also a version that supports the ARM architecture, but the developers currently only provide installation instructions for one ARM-based device (the Toshiba AC100 netbook). 
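The PAE requirement noted above can be checked on an existing Linux system before attempting a Lubuntu 12.10 installation, since the kernel exposes CPU feature flags through /proc/cpuinfo. The following is a minimal sketch rather than an official Lubuntu tool; the has_pae helper name is illustrative.

```python
# Minimal sketch: report whether the CPU advertises the "pae" flag,
# which the standard Lubuntu 12.10 x86 kernel expects.
def has_pae(cpuinfo_path="/proc/cpuinfo"):
    """Return True if a CPU "flags" line lists Physical Address Extensions."""
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    return "pae" in line.split()
    except OSError:
        pass  # not a Linux system, or /proc is unavailable
    return False

if __name__ == "__main__":
    print("PAE supported" if has_pae() else "PAE not reported by this CPU")
```

A few processors, notably some Pentium M models, implement PAE without advertising the flag, so a negative result from a check like this is indicative rather than definitive.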
This release of Lubuntu does not support UEFI Secure Boot, unlike Ubuntu 12.10, which would have allowed it to run on hardware designed for Windows 8. Lubuntu 12.10 could be run on UEFI secure boot hardware by turning off the secure boot feature. Lubuntu 13.04 Lubuntu 13.04 was released on 25 April 2013. This version only incorporated some minor changes over Lubuntu 12.10, including a new version of the PCManFM file manager which incorporates a built-in search utility. Due to this particular file manager update, the Catfish search utility was no longer required and was deleted. Lubuntu 13.04 also introduced some artwork improvements, with new wallpaper offerings, new icons and a new installation slideshow. The minimum system requirements for Lubuntu 13.04 are a Pentium II or Celeron CPU with PAE support, 128 MB of RAM and at least 2 GB of hard-drive space. This release also still supports PowerPC architecture, requiring a G4 867 MHz processor and 640 MB of RAM minimum. Lubuntu 13.10 Julien Lavergne announced in June 2013 that Lubuntu 13.10 would ship with Firefox as its default browser in place of Chromium. This release also used LightDM for screen locking and included zRam. In reviewing the beta release in September 2013, Joey Sneddon of OMG Ubuntu said: "Lubuntu has never looked as good as it does in this latest beta." He noted that the new "box" icon theme had been expanded, progress bar colours softened and window controls enlarged along with a sharpened "start button". The final release incorporated only minor changes over 13.04. It included a new version of PCManFM that includes a file search function, which allowed the Catfish desktop search to be removed. There was also new artwork included and bug fixes for gnome-mplayer and the gpicview image viewer. In reviewing Lubuntu 13.10, Jim Lynch said "Sometimes less can be much, much more when it comes to Linux distributions. Lubuntu 13.10 offers some of the advantages of Ubuntu but in a much more minimalist package." Lubuntu 14.04 LTS Tentative plans were announced in April 2013 to make Lubuntu 14.04 a long term support release. In November 2013 it was confirmed that 14.04 would be the first Lubuntu LTS release with three years of support. This release also saw xscreensaver replaced by light-locker screen lock. Released on 17 April 2014, Lubuntu 14.04 included just minor updates over version 13.10, along with a more featured file manager. Download media for Lubuntu 14.04 is available in PC 32 bit, PC 64 bit, Mac Intel 64 bit and Mac PowerPC. For early Intel Macs with a 32 bit Core solo processor, a 32-bit PC image is available. In reviewing Lubuntu 14.04 LTS Silviu Stahie of Softpedia noted, "because it uses a similar layout with the one found on the old and defunct Windows XP, this OS is considered to be a very good and appropriate replacement for Microsoft's operating system." On 1 June 2014 Jim Lynch reviewed Lubuntu 14.04 LTS and concluded, "Lubuntu 14.04 LTS performed very well for me. It was fast and quite stable while I was using it. I had no problems running any applications and the system as a whole lived up to its reputation as a great choice for Ubuntu minimalists... The LXDE desktop environment is very different than Unity for Ubuntu or GNOME 3 in Ubuntu GNOME. It’s a traditional desktop which means it’s very quick and easy to learn how to use. And if you are someone that doesn’t like Unity or GNOME then LXDE in Lubuntu 14.04 LTS might be just what the doctor ordered. 
You’ll get all the benefits of Ubuntu, but without the discomfort of the Unity interface." Lubuntu 14.10 This release, on 23 October 2014, was originally intended to feature a version of LXDE based upon the Qt toolkit and called LXQt, but development of the latter was delayed and the feature was not implemented in time. Lubuntu 14.10 incorporated general bug fixes in preparation for the implementation of LXQt, updated LXDE components and new artwork, including more icons and a theme update. Silviu Stahie, writing for Softpedia, stated, "One of the main characteristics of Lubuntu is the fact that it's fast, even on older computers. Basically, Lubuntu is able to run on anything built in the last decade, and there are very few operating systems out there that can claim the same thing... Just like its Ubuntu base, Lubuntu 14.10 has seen very few important visual modifications, although many packages have been updated under the hood. The theme and the icons have been updated, but the developers are preparing to make the switch to LXQt, a project that is still in the works." Igor Ljubuncic in Dedoimedo said about Lubuntu 14.10, "There's nothing functionally wrong with Lubuntu. It's not bad. It's simply not interesting. It's meat without flavor, it's a hybrid car, it's accounting lessons at the local evening school, it's morning news, it's a visit to Pompei while blindfolded. There's no excitement... I liked this desktop environment in the past, but it's stagnated. It hasn't evolved at all, and its competitors have left it far behind. And that reflects poorly on Lubuntu, which, despite a calm and stable record of spartan behavior, has left with me an absolute zero of emotional attachment toward it." Lubuntu 15.04 Released on 23 April 2015, Lubuntu 15.04 consisted primarily of bug fixes, as the project prepared for the planned switch to LXQt in Lubuntu 15.10. The Lubuntu Box theme was updated and merged into the Ubuntu Light theme to incorporate the most recent GTK+ features, including new header bars for GNOME native applications, plus improved artwork and icons. The minimum system requirements for this release included 512 MB of RAM, with 1 GB recommended, plus a Pentium 4 or Pentium M or AMD K8 processor. The release notes indicated about graphics cards: "Nvidia, AMD/ATI/Radeon and Intel work out of the box". Marius Nestor of Softpedia noted, "...the Lubuntu 15.04 operating system comes now with updated artwork, which includes an updated theme, more beautiful icons, and an updated GTK+ infrastructure for better compatibility with Qt applications." Lubuntu 15.10 Released on 22 October 2015, Lubuntu 15.10 was originally planned to move to LXQt and its Qt libraries in place of the GTK+ libraries used by LXDE, but in June 2015 this was delayed to a future release. The release ended up as a minor bug fix and application version update. Changes in this version included new artwork and the replacement of iBus with Fcitx, allowing fonts for Chinese, Japanese and Korean to be included. The lubuntu-extra-sessions package became optional instead of being installed by default. The minimum system requirements for this release stated, "For advanced internet services like Google+, Youtube, Google Docs and Facebook, your computer needs about 1 GB RAM. For local programs like Libre Office and simple browsing habits, your computer needs about 512 MB RAM ... The minimum specification for CPU is Pentium 4 or Pentium M or AMD K8. Older processors are too slow and AMD K7 has problems with flash video ... 
Nvidia, AMD/ATI/Radeon and Intel work out of the box, or the system can be tweaked to work fairly easily. Joey Sneddon of OMG Ubuntu humorously noted, "Lubuntu 15.10 is another highly minor bug fix release." Lubuntu 16.04 LTS Released on 21 April 2016, Lubuntu 16.04 is a Long Term Support (LTS) version, supported for three years until April 2019. It is the second Lubuntu LTS version, preceded by 14.04 in April 2014. This release retains the LXDE desktop and did not make the transition to LXQt, to allow LXQt to be better tested in later non-LTS releases. This release is too large a file to fit on a CD and requires a DVD or USB flash drive installation. Lubuntu 16.04 LTS is primarily a bug-fix release and includes few new features. It does have updated artwork, however. The system requirements include 512 MB of RAM (1 GB recommended) and a Pentium 4, Pentium M, AMD K8 or newer CPU. The first point release, 16.04.1, was released on 21 July 2016. The release of Lubuntu 16.04.2 was delayed a number of times, but it was eventually released on 17 February 2017. Lubuntu 16.04.3 was released on 3 August 2017. Lubuntu 16.04.4 was delayed from 15 February 2018 and was released on 1 March 2018. Lubuntu 16.04.5 was released on 2 August 2018. On 8 March 2017 a new version of Firefox, 52.0, arrived through the update process. This version removed ALSA audio support from Firefox in favour of PulseAudio, something initially not mentioned in the Mozilla release notes. Since Lubuntu 16.04 LTS shipped with only ALSA audio, this broke the default Lubuntu audio system in the default Lubuntu browser. In response to a bug filed, Mozilla developers declined to fix the issue. Lubuntu 16.10 Lubuntu 16.10 was released on 13 October 2016. It uses LXDE and not LXQt. The implementation of LXQt was delayed from this release until 17.04. The release also features just small bug fixes, updated LXDE components and updated artwork, particularly the wallpaper. The developers' recommended system requirements for this release were, "for advanced internet services like Google+, YouTube, Google Drive, and Facebook, your computer needs at least 1 GB of RAM. For local programs like LibreOffice and simple browsing habits, your computer needs at least 512 MB of RAM. The minimum specification for CPU is Pentium 4 or Pentium M or AMD K8. Older processors are too slow and the AMD K7 has problems with Flash video." Joey Sneddon of OMG Ubuntu noted that there are very few new features in Lubuntu 16.10, but that it no longer uses the Lubuntu Software Centre, having switched to GNOME Software, as Ubuntu also has. Sneddon wrote, "Lubuntu 16.10 is largely the same as Lubuntu 16.04 LTS as work on switching to the LXQt desktop – expected next release – continues." In a July 2016 article, Sneddon singled out the new wallpaper design for Lubuntu 16.10, saying, "the jaggedy geometric layout of the new backdrop stands out as one of the more visually distinct to ship in recent years". Marius Nestor of Softpedia wrote: "it appears that there are a lot of known issues for this release, so if you're using Lubuntu 16.04 LTS (Xenial Xerus), we don't recommend upgrading to Lubuntu 16.10, or at least read about them before attempting an upgrade operation." Lubuntu 17.04 Lubuntu 17.04 was released on 13 April 2017. Like previous releases it uses LXDE and not LXQt, as the implementation of LXQt in Lubuntu was delayed once again, this time until 17.10. This release incorporated Linux Kernel 4.10, updated LXDE components, general bug fixes and new artwork. 
The recommended system requirements included 1 GB of RAM (512 MB minimum) and a minimum of a Pentium 4, Pentium M or AMD K8 processor. Joey Sneddon of OMG Ubuntu said of this release that it is "compromised mainly of bug fixes and core app and system updates rather than screenshot-able new features." Lubuntu 17.10 Lubuntu 17.10 was released on 19 October 2017. This release was a general bug fix release as the project prepared for the implementation of LXQt. Also included were new versions of the LXDE components and new artwork. The minimum system requirements for this release remained 512 MB of RAM (with 1 GB recommended) and at least a Pentium 4, Pentium M or AMD K8 processor. An alternate version entitled Lubuntu Next 17.10 was provided with the LXQt 0.11.1 desktop. "While this release is available to install... we do NOT recommend that people use it in production unless they are aware of the somewhat critical bugs associated (which are more than 10 at the point of writing this). It also wouldn’t be a bad idea to be in contact with us as well," wrote Lubuntu developer Simon Quigley. Lubuntu 18.04 LTS Lubuntu 18.04 is a long term support version that was released on 26 April 2018. Like all past releases, it used the LXDE desktop, although work continued to move towards deployment of the LXQt desktop, referred to as Lubuntu Next. 18.04 was the last release of Lubuntu to use the LXDE desktop as 18.10 moved to using LXQt. This release included new artwork, including a new star field wallpaper. System requirements for Lubuntu 18.04 LTS included a minimum of 1 GB of RAM, although 2 GB was recommended for better performance, plus a Pentium 4, Pentium M, or AMD K8 CPU or newer. The RAM requirements increased from Lubuntu 17.10. Point releases include 18.04.1 on 26 July 2018 and 18.04.2 on 14 February 2019. Lubuntu 18.10 In a 14 May 2018 announcement the project developers confirmed that Lubuntu would transition to the LXQt desktop for Lubuntu 18.10. It was released on 18 October 2018 and included LXQt. This transition was planned for after the release of Lubuntu 18.04 LTS, to allow testing and development over three regular releases before the first long term support version, Lubuntu 20.04 LTS, was released with LXQt. The project also changed its logo in early April 2018, in anticipation of this move. In transitioning to LXQt, this release used LXQt 0.13.0, based upon Qt 5.11.1. Applications included the LibreOffice 6.1.1 office suite, the VLC 3.0.4 media player, Discover Software Center 5.13.5 and the FeatherPad 0.9.0 text editor. KDE's Falkon 3.0.1 had been beta tested as the default web browser, but was found to lack stability and was replaced with Firefox 63.0. The installer for 18.10 was the Calamares system installer, in place of the previous Ubiquity installer. Starting with this release the developers no longer make recommendations for minimum system requirements. In reviewing the beta version of 18.10 in May 2018, Marius Nestor of Softpedia wrote: "We took the first Lubuntu 18.10 daily build with LXQt for a test drive, and we have to say that we're impressed ... The layout is very simple, yet stylish with a sleek dark theme by default and a single panel at the bottom of the screen from where you can access everything you need ... we give it a five-star rating." 
Writing after the official release on 20 October 2018, Marius Nestor of Softpedia noted: "After many trials and tribulations, and a lot of hard work, the Lubuntu team finally managed to ship a release with the LXQt desktop environment by default instead of LXDE (Lightweight X11 Desktop Environment), which was used by default on all Lubuntu releases from the beginning of the project. We also believe LXQt is the future of the LXDE desktop environment, which uses old and soon deprecated technologies, so we welcome Lubuntu 18.10 (Cosmic Cuttlefish) with its shiny LXQt 0.13.0 desktop environment by default, built against the latest Qt 5.11.1 libraries and patched with upstream's improvements." In reviewing Lubuntu 18.10, DistroWatch's Jesse Smith wrote: "I have mixed feelings about this release of Lubuntu. On the one hand most of the features worked well. The distribution was easy to install, I liked the theme, and the operating system is pretty easy to use. There were a few aspects I didn't like, usually programs or settings modules I felt were overly complex or confusing compared to their counterparts on other distributions. For the most part though, Lubuntu does a nice job of being a capable, relatively lightweight distribution ... On the whole, I think the transition from LXDE to LXQt has gone smoothly. There are a few choices I didn't like, and a few I did, but mostly the changes were minor. I think most people will be able to make the leap between the two desktops fairly easily. I think a few settings modules still need polish and I'd like to see Discover replaced with just about any other modern software manager, but otherwise this felt like a graceful (and mostly positive) move from 18.04 to 18.10 and from LXDE to LXQt." In a detailed review of Lubuntu 18.10, Mahmudin Asharin, writing in Ubuntu Buzz, found only a few faults in the release, in particular the network manager. He concluded, "For most users, I recommend Lubuntu 18.04 LTS instead for the sake of usability and support duration. For first timer who installed/want to install 18.10 LXQt, go ahead and you will get beautiful user interface and very nice experience, but I recommend you to use Wicd instead of default network manager. For LXQt desktop pursuer, Lubuntu 18.10 is a great example of LXQt system. Try it first." A review in Full Circle magazine noted, "Overall LXQt, as seen in Lubuntu 18.10, is ready for day-to-day use, while there is also still room for ongoing refinement. Introducing LXQt in Lubuntu 18.10 was a careful choice by the Lubuntu developers. Coming right after Lubuntu 18.04 LTS, the final LXDE release, it gives developers three "standard" releases to continue to polish LXQt before the first LTS release..." Lubuntu 19.04 This standard release was made on schedule on 18 April 2019. This release marked the first Lubuntu version without 32-bit support. Lubuntu developer Simon Quigley wrote in December 2018: This release featured LXQt 0.14.1, based upon Qt 5.12.2. It included working full-disk encryption, easier customization of the Calamares installer configurations by employing XDG configuration variables, and Austrian keymapping. Minimum installation RAM was reduced to 500 MB. Other changes included Trash, Home, Computer, and Network icons added to the desktop, split view in PCManFM-Qt, Exif data display in the LXImage-Qt image viewer, and touchpad settings fixed compared to 18.10. 
In a review Softpedia writer Marius Nestor described the use of LXQt 0.14.1, employed in Lubuntu 19.04, as "a much-improved and richer LXQt experience". A review in Full Circle magazine concluded, "Lubuntu 18.10 wasn’t ready for prime time, but 19.04 is. LXQt looks fresh and new, and everything works right from the installation; it even runs fine from a DVD live session. I didn’t find anything that needs fixing in 19.04. If not for the nine-month support period for this regular release, it could have been a long term support release, at least for the quality of the user experience and the lack of bugs." A review in FOSS Post by M. Hanny Subbagh in September 2019, entitled Lubuntu, A Once Great Distro, Is Falling Behind concluded "Most of the criticism you have seen in this article is coming from the LXQt desktop environment. It’s understandable that any new piece of software will have bugs/issues in the first few years of its life cycle, but the LXQt desktop still needs a long round of polished updated to make it match the other desktops such as GNOME, Cinnamon, XFCE and MATE. Meanwhile, if you are interested in trying Lubuntu, we recommend that you stick to the 18.04LTS version, which comes with LXDE." A review by Igor Ljubuncic in Dedoimedo concluded, "Lubuntu 19.04 Disco Dingo feels ... raw. Unfinished. Half-baked. It has some perfectly decent functionality, like networking, media and phone support, but then it also comes with rudimentary package management, a jumbled arsenal of programs, a desktop that is too difficult to manage and tame, plus identity crisis ... I struggled with the overall purpose, though. As impressive as the speed and lightness are, they are only small improvements over what Plasma offers. But then, Plasma is much easier to customize and tweak, it offers a coherent, consistent experience, and it feels modern and relevant. With Lubuntu, I had no connection, and using the distro felt like a chore. I had to fight the weird defaults to try to create an efficient setup, and I wasn't able to do achieve that. So I always go back to the question of investment versus benefit. Lubuntu feels too pricey for what it gives .. With Lubuntu, there needs to be more order, more consistency in how it works. At the moment, it's just a collection of ideas mashed together. While perfectly functional, it's not really fun. 6/10 ..." Lubuntu 19.10 This standard release was the last one before the 20.04 LTS release and arrived on 17 October 2019. This release brought new artwork, including new wallpaper. It uses LXQt 0.14.1, based upon Qt 5.12.4. A review in the February 2020 issue of Full Circle magazine, concluded, "Lubuntu 19.10 builds well upon the success of 19.04. The developers seem to be fixing things at a good clip and polishing it up for the next key release, the first LXQt LTS version, due out on 23 April 2020. The 19.10 release is bug-free enough to have been an LTS release itself and this bodes really well for the expected quality of the upcoming LTS." Lubuntu 20.04 LTS This release is the first Lubuntu long term support release that uses LXQt and was released on 23 April 2020. Lubuntu 20.04.1 LTS was released on 6 August 2020. Lubuntu 20.04 LTS used LXQt 0.14.1, based upon Qt 5.12.8 LTS. This release did not introduce many changes. It included a new software update notifier application. Called Update Notifier, it was developed by Hans Möller. The release included new wallpaper artwork as a result of a community contest held for the release. 
In a 27 April 2020 review in It's FOSS, Dimitrios Savvopoulos noted, "LXQt is not only for users with an older hardware but also for those who are seeking a simple and classic experience at their new machine." He added, "in daily use, Lubuntu 20.04 has proven to me completely trouble-free as every Ubuntu flavour in fact ... [the] Lubuntu team has successfully made the transition to a modern, still lightweight and minimal desktop environment. LXDE looks like [it has been] abandoned and it is a good thing to move away [from it] to an active project. A 29 May 2020 review in Full Circle magazine concluded, "Lubuntu 20.04 LTS completes the two-year development cycle, consisting of three standard releases leading to this LTS release. Overall this represents the culmination of a development project that commenced in 2014 to create a new Qt-based desktop for Lubuntu: LXQt. The process has taken much longer than was forecast six years ago, but was worth the wait. This first LTS release is stable, smooth, elegant, and a real joy to use. This is the best Lubuntu release so far." Lubuntu 20.10 This standard release was made available on 22 October 2020. On 16 August 2018, the Lubuntu development team announced plans to port Openbox to Mir in time for Lubuntu 20.10 to allow Lubuntu to move away from the X display server to an implementation of Wayland instead. This release used LXQt 0.15.0, based on Qt 5.14.2. Improvements include adding a tree view to show pending updates in the Lubuntu update notifier and an updated plymouth theme, with the default wallpaper an alternate one from the 20.04 wallpaper competition. In a rundown on the Ubuntu flavors, DeBugPoint noted, "Lubuntu 20.10 based on Groovy Gorilla ... is perfect for low-end hardware and lightweight systems while being stable [due to being based upon] Ubuntu". A review published on Christmas Day 2020 in Full Circle magazine concluded, "Lubuntu 20.10 introduces very little that is new over 20.04 LTS. I actually think this is a good sign, as 20.04 LTS is a superb operating system and doesn’t really need much improvement. If this development cycle leads to the next Lubuntu LTS version having just a few minor improvements over 20.04, then, personally, I will be very happy with the results. An updated version of FeatherPad would be nice by then, however." Lubuntu 21.04 Lubuntu 21.04 is a standard release, made on 22 April 2021. This version introduced LXQt 0.16.0 based upon Qt 5.15.2. A new application, LXQt Archiver 0.3.0, based on Engrampa, was included. There was also a new version of the Lubuntu update notifier that includes a tree view with packages and versions for upgrade listed, plus new artwork. A review in Full Circle magazine stated, "this second standard release in the development cycle leading to the next LTS includes just small, cautious changes. This is really how operating system development should be approached, particularly when you have a loyal user base who are happy with how everything works and are generally not demanding big changes." Lubuntu 21.10 Lubuntu 21.10 is a standard release that was released on 14 October 2021. This release uses the LXQt 0.17.0 desktop, based on Qt 5.15.2. Unlike Ubuntu 21.10, which moved to a snap package, Lubuntu retained the .deb package for the Firefox browser for this release. 
Applications Lubuntu LXDE The LXDE versions of Lubuntu (18.04 LTS and earlier) included the following applications: User applications: Abiword – word processor Audacious – music player Evince – PDF reader File-roller – archiver Firefox – web browser Galculator – calculator GDebi – package installer GNOME Software – package manager Gnumeric – spreadsheet Guvcview – webcam LightDM – log-in manager Light-Locker – screen locker MPlayer – video player mtPaint – graphics painting Pidgin – instant messenger and microblogging scrot – screenshot tool Simple Scan – scanning Sylpheed – email client Synaptic and Lubuntu Software Center – package managers Transmission – bittorrent client Update Manager Startup Disk Creator – USB ISO writer Wget – command line webpage downloader XChat – IRC Xfburn – CD burner Xpad – notetaking From LXDE: GPicView – image viewer Leafpad – text editor LXAppearance LXDE Common LXDM LXLauncher LXPanel LXRandr LXSession LXSession Edit LXShortCut LXTask LXTerminal Menu-Cache Openbox – window manager PCManFM – file manager Up to and including 18.04 LTS, Lubuntu also had access to the Ubuntu software repositories through the Lubuntu Software Center, the Synaptic package manager and APT allowing the installation of any applications available to Ubuntu. Lubuntu LXQt The LXQt versions of Lubuntu (18.10 and later) include the following applications: Internet Applications Firefox – web browser Qtransmission – bit torrent client Quassel – internet relay chat Bluedevil – bluetooth connector Trojitá – email client Office Applications LibreOffice – office suite LibreOffice Calc – spread sheet LibreOffice Impress – presentation LibreOffice Math – math formula writer LibreOffice Writer – word processor qpdfview – PDF viewer Graphics Applications ImageMagick – image manipulation lximage – image viewer ScreenGrab – screenshot tool Skanlite – scanning application Accessories 2048-qt – a lightweight implementation of the 2048 video game LXQt Archiver – archive utility Discover Software Center FeatherPad – lightweight text editor Kcalc – scientific calculator KDE partition manager Muon package manager Noblenote – note taking application PCManFM-Qt – file manager Qlipper – clipboard manager Sound and Video K3b – CD and DVD burning PulseAudio Volume Control VLC media player From 18.10, Lubuntu also has access to the Ubuntu software repositories through the Discover Software Center, the Synaptic package manager and APT allowing the installation of any applications available to Ubuntu. Table of releases Timeline See also Similar Linux distributions Manjaro Linux - a similar project based on Arch Linux with various desktops to choose from, including LXDE. Peppermint Linux OS - based on Lubuntu with Linux Mint's utilities. Other links Computer technology for developing areas Free-culture movement Linux user group List of Ubuntu-based distributions Open-source software Ubuntu Professional Certification References External links Lubuntu Documentation Lubuntu - Ubuntu Wiki 2010 software LXQt LXDE Operating system distributions bootable from read-only media Ubuntu derivatives Linux distributions
15901056
https://en.wikipedia.org/wiki/On-Device%20Portal
On-Device Portal
On-Device Portals (ODPs) allow mobile phone users to easily browse, purchase and use mobile content and services. An ODP platform enables operators to provide a consistent and branded on-device experience across their broadening portfolio of services and typically provides on-device catalogs of content for purchase, deep links to WAP portals, customer care functionality and rich media services such as full track music, TV and video. A key benefit of on-device portals is the ability to target the user rather than the device itself. This personalized service delivery, coupled with the ability to be updated on a frequent basis by the operator – refining services or adding new content – makes ODPs an attractive proposition that has already proven successful in encouraging users to access mobile content. By enhancing the user experience, ODPs improve customer satisfaction, reduce churn and in turn, drive higher data revenues. On-Device Portals generally offer the following applications: Offline portal, store front, home-screen replacement and reporting function: Offline Portal - An application that allows the user to browse content categories and preview content free of charge through pre-cached content that does not require a WAP connection and creates a mobile environment that blurs offline and online experiences. Store Front - A client-server application enabling the user to discover, preview and select cached content offline, before accessing it via a WAP link. Home-Screen Replacement - A dashboard function of the ODP client that replaces the handset's home screen to provide zero click access to customized mobile content and services. Reporting Function - User analytics that allow operators to measure, refine and improve the delivery of data services by providing visibility into the way subscribers use and interact with their mobile devices. On-Device Portals can be targeted to a specific mobile operating system or provide multi-platform support allowing operators to achieve a consistent, branded experience across of all devices whether they run on Symbian (Symbian OS), Microsoft (Windows Mobile), BREW (Binary Runtime Environment for Wireless), Java or Research in Motion (RIM) platforms. Some vendors have server-side components to their On-Device Portal offerings which are tied into operator billing systems that deliver features such as usage tracking and lifecycle management. On-Device Portals are not intended to replace the operating system (OS) or other aspects of the phone's interface. The term On-Device Portal was coined by the London-based ARCchart Research in 2001. In 2005, ARCchart estimated 10 million ODP-enabled units were in the marketplace. History of On-Device Portals On-Device Portal technology emerged in 2000, as operating systems and Java were beginning to be deployed on mobile handsets. One.Tel launched smart on device applications using a combination of Java and SIMtoolkit technology in May 2000. In 2002, four vendors — ActiveSky, SurfKitchen, Action Engine and Trigenix - began developing ODP offerings. In March 2003 at CeBIT O2 demonstrated an ActiveSky powered O2 branded On-Device portal that was both user and operator (remotely) reconfigurable, that same year KDDI deployed an ActiveSky based solution. In 2003, operators including O2, Sonofon and ONE began to deploy ODP solutions. 
From 2001 to 2005, ODP technology was central to many MVNO solutions as a more cost-effective and customer-centric alternative to handset software variants or SIM toolkits; Virtuser developed many trials of these, including a pre-BlackBerry email service that used Bluetooth in buildings. Since 2003, On-Device Portal technology has been embraced by mobile operators across the world to complement WAP-based mobile data services and help operators deliver on the promise of mobile data services. Carriers expected the launch of High-Speed Downlink Packet Access (HSDPA), in which they had invested heavily, to drive robust average revenue per user (ARPU) growth. But despite this investment, the mobile data services market still only represented 19% of global mobile service revenue in 2007. Several factors have been cited for the slow uptake in data services, but poor user experience, based on older browser-based WAP technology, is widely acknowledged as one of the leading reasons. The browser-based mobile experience is considered difficult and complex, often requiring a user to click multiple times in order to navigate from browsing to downloading or purchasing content. Separately, WAP's structure, which differs from HTML, has made it difficult for content developers to build efficient content channels. The on-device portal is also not without complications: supporting a wide range of handsets is much harder to manage than mobile web content, and updating major components is not just a case of re-publishing, but of redistributing applications. At the same time, while it is generally easy to test web content on a handful of browsers, portals need to be tested on nearly every handset which will be expected to use the on-device portal. Mobile User Experience for On-Device Portals As the market for mobile services emerged, it focused on technology until early 2008. But handset vendors and operators are moving quickly from technology to focus on the user experience as the best way to improve market penetration. "When done properly, these experiences understand the task, the environment, and the audience." Data is still being collected as companies analyze how effective greater emphasis on user experience proves to be for operators. Orange Group, for example, doubled its download data ARPU after deploying a portal on its Signature devices. On-Device Portal User Categories On-Device Portals target five main user categories: Consumers, who demand an improved interface for their mobile experience Operators, who want to improve ARPU with richer media content Content and media providers who want to monetize their offerings in the mobile world Handset manufacturers, who want technology frameworks to help customize handsets for their customers Consumers who want to extend their web-based social networking experience onto the mobile device (Facebook, Bebo, MySpace etc.) or share/back up content from their mobile device to the web (Nokia Ovi, Flickr mobile etc.) The Future for On-Device Portals ARCchart expects the number of On-Device Portal licenses to grow to 1.1 billion by 2009. As the ODP market matures it is expected to move beyond mass market services such as mobile TV, music and content catalogs to provide an experience where the subscriber can choose from a broad range of services and content, personalized to their needs. This change is being driven by operators’ desires to emulate the success of Web 2.0 by being able to offer a broader and more personalized range of services to the user. 
This shift in operators' online portal strategies increases the emphasis on operators being able to quickly launch, measure and refine services, while offering users an intuitive way of browsing and searching a large catalogue of services. Some vendors have started to incorporate the kind of functionality needed to support mobile Web 2.0, and as the market matures, idle screen access, mobile widget platforms and the ability to support a broad range of operator business models (subscription, transaction and advertising driven) will become imperative for all ODP solutions. See also 3G Average Revenue Per User Advanced Mobile Phone System GSM Handheld Device Markup Language High-Speed Downlink Packet Access High-Speed Uplink Packet Access Java programming language Location-based service Mobile advertising Mobile device Mobile development Mobile network operator Mobile phone Mobile TV Personal digital assistant Qualcomm Brew (Binary Runtime Environment for Wireless - BREW) Short message service Smartphone Widget WiMAX Wireless Application Protocol Wireless internet service provider References Mobile software
35059114
https://en.wikipedia.org/wiki/ISM%20Canada
ISM Canada
ISM is an information technology service company based in Regina, Saskatchewan. The main company office is currently located in the ISM Building, which was built by the University of Regina in 1991 to suit ISM Canada's needs. The company is a wholly owned subsidiary of Kyndryl Canada Limited that serves both public and private sector organizations. ISM is headquartered in Regina, Saskatchewan and has offices in Saskatoon, Saskatchewan, and in Burnaby and Victoria, British Columbia, but provides services to customers all across Canada. ISM is one of Canada's Top 100 Employers (2019), Saskatchewan's top 100 employers (2019), and Canada's top family-friendly employers (2019). History 1973 The Saskatchewan Computer Utility Corporation (SaskCOMP) was established in April 1973 under General Manager L.T. Holmes. SaskCOMP was formed through the acquisition of the Systems Centre Branch of the Department of Government Services and Saskatchewan Power Corporation Computer Services. The corporation's initial objective was to consolidate and combine computer services for provincially-funded institutions. 1974 Further expansion of SaskCOMP came with the acquisition of the University of Saskatchewan, Saskatoon Campus Computation Centre and the University of Saskatchewan, Regina Campus Computer Centre. SaskCOMP's major new customers were the City of Prince Albert and the Saskatchewan Government Insurance office. 1975–1980 The central theme for SaskCOMP was improved ease of access to computers. Revenue grew by 16% overall and software associated with new services increased four-fold and was expected to double again in 1976. In 1976, Norm E. Glassel took over as General Manager of SaskCOMP and a comprehensive mandate was formed for the corporation. The computers at the three main centres in Prince Albert, Saskatoon and Regina were linked via telecommunications lines to enable SaskCOMP customers at any one centre to use the services of another. SaskCOMP created a Minicomputer Division to provide minicomputer services to customers as readily as large-scale computers. In 1977, SaskCOMP installed the IBM System/370 Model 168 computer to replace a Model 158 computer. The Model 158 computer was sold for $640,000. In 1978, SaskCOMP had grown to become the twelfth largest service bureau in Canada. In 1979, SaskCOMP was part of a newly formed committee with the Department of Education and the Saskatchewan Teachers' Federation to promote the effective use of - and guidelines for the installation of - microcomputers in primary and secondary schools. In 1980, SaskCOMP moved into the new Galleria building at Innovation Place Research Park in Saskatoon. 1981–1990 The personal computer revolution began. SaskCOMP was leasing IBM's largest system and became the 8th largest computer operation in Canada. SaskCOMP rates were the lowest in the Canadian service bureau industry. In 1983, SaskCOMP celebrated its 10th anniversary. Efforts continued to focus on data security with new security policies, dedicated security employees and secure storage. SaskCOMP increased the CPU capacity by 40% at the Regina Regional Centre by upgrading equipment through a contract with IBM. Affirmative action was introduced to attract more target group employees to SaskCOMP. In 1986, SaskCOMP deployed the first Disaster Recovery Service in Western Canada. Gerald Thom was named the President of SaskCOMP. Remote control centres were established to provide access over high-speed lines. 
A new corporate registry was established for use by the legal profession to search for information on registered and non-profit companies. SaskCOMP upgraded technology with a new IBM 3090-200E processor installation. In 1988, SaskCOMP became Westbridge Group. Four provincially owned companies merged and were privatized. A new direction in data storage was implemented with "tomorrow's technology in mind." In 1989, Westbridge shifted to integrated technology services. Len McCurdy was named president. Westbridge was listed as a penny stock on the Toronto Stock Exchange and every employee received 100 shares. At this time, the corporation nearly became insolvent due to investment in Mr. McCurdy's previous computer leasing company, just as computer leasing was becoming unprofitable. 1991–2000 ISM Information Systems Management Corporation was formed through the merger of Westbridge Computer Corp and STM Systems Corp. The new head office at 1 Research Drive, Regina, was officially opened. IBM assumed a controlling financial interest in ISM Canada. IBM outsourced its internal computer operations (host and client-server systems) to ISM. ISM ranked as one of the top 500 Canadian companies, moving up almost 300 places in the survey by Commerce magazine. In 1995, ISM Information Systems Management Corporation became 100% owned by IBM. ISM sponsored the Grey Cup 1995 in Regina: "Huddle Up in Saskatchewan." In 1997, major organizational changes occurred with the creation of IBM Global Services. IBM Canada formed Regina-based ISM Canada (ISM Information Systems Management Canada Corporation) in 2000. 2021 ISM became a subsidiary of Kyndryl Canada Limited on September 1, 2021. References 1973 establishments in Saskatchewan Companies based in Regina, Saskatchewan IBM acquisitions IBM subsidiaries Canadian companies established in 1973 Software companies established in 1973 Software companies of Canada 1995 mergers and acquisitions Canadian subsidiaries of foreign companies
54498705
https://en.wikipedia.org/wiki/Ashar%20Aziz
Ashar Aziz
Ashar Aziz (; born 1959) is a Pakistani-American electrical engineer, business executive and philanthropist. He is best known as the founder of Silicon Valley-based cybersecurity company FireEye. A former billionaire, Aziz had an estimated net worth of over $233 million as of 2015. Early life and education Aziz was born in Karachi in 1959 and grew up in Islamabad, Pakistan. He arrived in the United States as a student, having gained admission into the Massachusetts Institute of Technology School of Engineering (MIT). Prior to entering MIT, he completed two years of foundation studies at the Middle East Technical University (METU) in Turkey during the mid-1970s. He graduated from MIT with a Bachelor of Science in electrical engineering in 1981, followed by a Master of Science in computer science from the University of California, Berkeley, where he was also granted a Regents' Fellowship. Career Aziz worked as an engineer at Sun Microsystems for twelve years, specialising in networking, network security and system design. He also served as chief technology officer of the N1 program at Sun. In 1999, he founded his first startup company Terraspring Inc., which focused on data center automization and virtualization. The business was bought by Sun Microsystems in 2002, following the stock market crash of 2001. At around the same time, Aziz began studying the risks stemming from highly stealthy and self-propagating malware systems, and the threats they posed to future cybersecurity infrastructures – a problem he first discovered in the U.S. Department of Defense archives. In 2004, he founded FireEye, having become convinced that he could provide the solutions to these emerging cybersecurity challenges. He initially worked out of his home for 80 to 100 hours a week, with only $4,000 in his personal savings. Initially, most of his technical team comprised engineers and technology professionals he knew from his previous startup. Based in Milpitas, California, the company today provides both software and cloud-based anti-malware products. According to the MIT Department of Electrical Engineering and Computer Science, "the core of the FireEye platform is a virtual execution engine, complemented by dynamic threat intelligence, to identify and block cyber attacks in real time and across the different stages of an attack life cycle." Describing FireEye in 2012, Aziz stated that the company's objective was to strengthen the "security and key infrastructure that is pervasive across financial, government and credit card infrastructure to protect from three very important threats: crime, espionage, and warfare." Aziz served as FireEye's chief executive officer from 2004 to 2012, as well as remaining its chief strategy officer, chief technology officer and board vice chairman. In December 2012, Aziz stepped down as CEO and former McAfee CEO David DeWalt was appointed to the position. DeWalt was recruited in order to prepare the company for an initial public offering (IPO). The following year, FireEye raised an additional $50 million in venture capital, bringing its total funding to $85 million. In September 2013, FireEye went public, raising $300 million. Aziz's shares in FireEye skyrocketed as the company acquired an asset valuation of over US$1 billion. According to Forbes, he owned 11 million shares in the company and his personal net worth had increased to just over US$1 billion. 
However, Business Insider explained that the appreciation in Aziz's net worth was temporary due to high fluctuations in share prices, with the value of his stake falling to under $1 billion once prices dropped. According to The Wall Street Journal, FireEye was the second-best performing firm in the U.S. that went public in 2013. Aziz was included in Forbes' list of billionaires for 2014. As the inventor of FireEye's malware protection system, Aziz currently holds over 80 patents relating to various cybersecurity technologies, including in the fields of network security, cryptography, and data center virtualization. He is recognised as the technical visionary and pioneer behind FireEye's core technology. In August 2016, it was announced that Aziz had tendered his resignation from FireEye in order to spend more time with his family and focus on solar energy distribution projects in developing countries. A partner of FireEye described the news as "disconcerting" for FireEye's future and its workforce, and predicted his exit would accelerate discussions on a potential sale of the company. However, a FireEye source dismissed this notion, stating the company would operate under its new CEO and continue to provide solutions using its multi-vector virtual execution (MVX) technology. The source also clarified that Aziz felt he was leaving the company in "good hands", and that he was still a shareholder in FireEye. Philanthropy Aziz is the founding CEO of SkyElectric Inc., a company which seeks to provide affordable and sustainable solar energy solutions. In February 2017, he pitched solar energy as the solution to Pakistan's electricity crisis and launched his company's operations in the country. He is a member of the board of governors of Namal Institute, founded by Imran Khan. In January 2020, he inaugurated the Nisar Aziz Agri-Tech Center at Namal Institute in Namal. Named after his mother, the center would assist in the development of a "robust agricultural economy through technology" by promoting low-cost agricultural practices and providing agricultural education to farmers in Pakistan. Personal life Aziz's father, Asghar Butt, was a journalist, writer and playwright who served as the deputy editor of the Pakistani broadsheet The Nation. His father died aged 91 in his hometown of Lahore, Punjab, on 13 November 2012. Aziz's mother, Nisar Aziz, was an eminent Urdu novelist and literary figure who belonged to a Pashtun family from Mardan. She died aged 93 on 7 February 2020. The economist and politician Sartaj Aziz is his maternal uncle. The Nation's editor-in-chief, Majid Nizami, was his father's brother-in-law and thus his uncle. Ashar has one brother, Ahmar Aziz Butt, who is a doctor. Aziz is a recipient of the Ernst & Young Entrepreneur of the Year Award. In 2018, he was presented with the civil award Sitara-i-Imtiaz by the government of Pakistan for his contributions to the IT industry. 
References 1959 births Living people American billionaires American businesspeople of Pakistani descent American chief technology officers American computer scientists American electrical engineers American patent holders American people of Kashmiri descent American people of Pashtun descent American people of Punjabi descent American philanthropists American technology chief executives American technology company founders Computer engineers Former billionaires MIT School of Engineering alumni Middle East Technical University alumni Pakistani billionaires Pakistani chief executives Pakistani computer scientists Pakistani electrical engineers Pakistani emigrants to the United States Pakistani expatriates in Turkey Pakistani philanthropists Pakistani people of Kashmiri descent People associated with solar power People from Islamabad People with acquired American citizenship Recipients of Sitara-i-Imtiaz Sun Microsystems people UC Berkeley College of Engineering alumni
57791000
https://en.wikipedia.org/wiki/CR2%20%28company%29
CR2 (company)
CR2 is an Irish-owned banking software company which provides mobile, internet and ATM financial service technology to more than 100 retail banks across Africa, the Middle East and Asia. Headquartered in Dublin, Ireland, the company has offices in Dubai, London, Cairo, Amman, Bengaluru, Lagos, Johannesburg, Singapore and Perth. Its customers include ANZ, Barclays, Standard Chartered, Botswana Savings Bank, Jordanian Bank al Etihad, pan-African bank Orabank, and Nigeria's Access Bank plc and Diamond Bank. History Founded in 1997 by two former Kindle Banking Systems directors, and part of the fintech portfolio of Ireland's state economic development agency Enterprise Ireland, CR2 is an Irish software company that provides banking products and services to financial institutions in emerging markets. By 2000, after a series of venture capital cash injections and investment from directors, the company had acquired London-based Interlink for £10m and was valued at more than $27m. With investors backing CR2 to the value of €34.5 million since its foundation in 1997, the company emerged from the Dot-com bubble crash of late 2000/early 2001 to report an increase in revenue, posting a turnover of €16.4m in 2002. An additional R&D funding round to the value of €10m was completed in 2005. By 2012, the company was placed on the market with a value of more than €60m. BankWorld CR2 is best known for its digital banking platform, BankWorld. It features mobile and internet technology for retail and corporate customers; integrated ATM technology with switch, card management, production, monitoring, screen design and support tools, as well as POS (point of sale) acquiring and payment systems; SMS push and pull and USSD; and APIs that enable third-party channel integration and innovation. Awards and recognition Ovum Ltd, the independent analyst and consultancy firm, recommended CR2's digital product offering in a report published in August 2016. CR2 was recognised for delivering 'Excellence in Channel Banking Software Solutions' at the New Age Banking Awards 2018. References Software companies of Ireland Financial software companies Software companies established in 1997 Banking software companies Companies of the Republic of Ireland
22815649
https://en.wikipedia.org/wiki/Nalini%20Venkatasubramanian
Nalini Venkatasubramanian
Nalini Venkatasubramanian is a Professor of Computer Science in the Donald Bren School of Information and Computer Sciences at the University of California, Irvine. She is known for her work on the effective management and utilization of resources in the evolving global information infrastructure. Her research interests include multimedia computing, networked and distributed systems, Internet technologies and applications, ubiquitous computing and urban crisis response. Her research also addresses the problem of composing resource management services in distributed systems. Born and raised in Bangalore, she received her Ph.D. in Computer Science from the University of Illinois, Urbana-Champaign in 1998. From 1991 to 1998, she was a member of technical staff and software design engineer at Hewlett-Packard. In 1998, she joined UC Irvine as an Assistant Professor of Computer Science. Awards NSF (National Science Foundation) CAREER Award, Networking Research Program, 1999. Teaching Excellence Award, Division of Undergraduate Education, 2002, University of California, Irvine Best Paper Award, IEEE Consumer Communications and Networking (CCN) Conference, Las Vegas, Jan 8–10, 2006. Dean's Award for Undergraduate Teaching, University of California, Irvine, 2014 References External links American computer scientists Year of birth missing (living people) Living people Indian emigrants to the United States University of California, Irvine faculty University of Illinois alumni People from Irvine, California Kannada people American women computer scientists 21st-century American women
1995536
https://en.wikipedia.org/wiki/Food%20Force
Food Force
Food Force is an educational game published by the United Nations World Food Programme (WFP) in 2005. Due to its content, it is considered a serious game (a game with an educational purpose). Players take on missions to distribute food in a famine-affected country and to help it to recover and become self-sufficient again. At the same time they learn about hunger in the real world and the WFP's work to prevent it. In November 2011, a new, socially interactive game was launched by the World Food Programme in partnership with Konami. The game, which takes place entirely on Facebook, is amongst the first games ever to feature a real-world impact through virtual goods. Gameplay The game takes place on the fictional island of Sheylan in the Indian Ocean, which is suffering from both drought and civil war. The player's character is a rookie who has joined a team of UN experts, including a nutritionist, a logistics officer, a pilot, an appeals officer, and the director of food purchasing. The game contains six missions: Air surveillance: Locate hungry citizens from a helicopter Energy pacs: Produce a balanced diet of rice, cooking oil, beans, sugar and salt within a budget of only 30 cents per day Food drop: Drop food supplies in a target zone, while compensating for wind direction Locate and dispatch: Co-ordinate supplies of purchased and donated food from around the world Food run: Lead a food convoy to a depot, dealing with hazards like landmines and roadblocks Future farming: Use food aid to help a village develop over 10 years, by investing in it carefully with nutrition training, schooling, "food for work" and HIV/AIDS treatment Most of the gameplay is arcade-oriented, with time-limited sequences. All six missions could be played through in under an hour, though players might replay the individual missions, as their high scores could be uploaded online for worldwide comparison with other players, until another game under the same name was made by the WFP and Konami. Availability The game was freely downloadable, but with the release of a game under the same name by the WFP, the game's website was shut down along with all of the official download links. The game is still mirrored on various websites, but there is no official download link. It had versions for Windows and Macintosh. Linux users could use Wine to install the Windows version of the game and the QuickTime bundled with the package. The Macintosh version of the game is PowerPC-only and, as such, could only be run on PowerPC Macs with Mac OS X Jaguar or higher, or on Intel-powered Macs running Mac OS X Tiger to Mac OS X Snow Leopard. Users were encouraged to burn the game onto CDs and distribute it to friends. Developed by the British studio Playerthree and the Italian company Deepend, the program uses Macromedia technology for gameplay and QuickTime for video. The Food Force website also had lesson plans for teachers, reports on recent WFP work, a high score table and other features. Food Force 2 Food Force 2, based on Food Force, has been developed as free software under the terms of version 3 of the GNU General Public License. It is cross-platform as it is written in the Python programming language, and runs on platforms including the One Laptop Per Child XO and the Sugar desktop environment. Food Force 2 is available for download on Linux, Windows and Mac OS X platforms. 
References External links Food Force Facebook official site Food Force official website Food Force official website (Latinamerican Edition) Worldview A teacher's resource on the Food Force website Food Force - Bowl 2006 Food Force on OLPC FoodForce2 site Friends of the World Food Program United States agency helping raise funds and awareness about global hunger World Food Program UN Agency Pictures for truth A game for Amnesty International released in 2008 memellon Developers of the Latinamerican Edition released in October 2008 Helicopter video games Humanitarian video games Hunger Advergames 2005 video games Windows games Classic Mac OS games Linux games Video games about food and drink Video games developed in Italy Video games developed in the United Kingdom Freeware games
26086851
https://en.wikipedia.org/wiki/Tape%20label
Tape label
Tape labels are identifiers given to volumes of magnetic tape. There are two kinds of tape labels. The first is a label applied to the exterior of a tape cartridge or reel. The second is data recorded on the tape itself. Visual labels Visual labels are human readable. The labels have evolved to carry barcodes that can be read by tape libraries. Reading the barcode label is often much faster than mounting the tape volume and reading the identification information written on the media. To read the barcode, the tape library need only position the volume in front of the barcode reader. Magnetic labels Originally, 7- and 9-track data tapes only had human readable labels on them (i.e. as far as the operating system was concerned they were unlabeled). Somebody wishing to use a particular tape would ask the operator to mount that tape; the operator would look at the human readable label, mount it on a tape drive, and then tell the operating system which drive contained the tape of interest. That had some drawbacks: the operator might mount the wrong tape by mistake, or might type in the wrong identification. A solution was to record some tape identification information on the tape itself in a standard format. This metadata allowed the operating system to quickly recognize a volume and assign it to the program that wanted to use it. The operating system would notice that a tape drive had come online, so it would try to read the first block of information on the tape. If that was a volume label, the operating system could determine what to do with it (a small worked example of such a label record appears at the end of this article). Some computer systems used similar labels on other serial media, for example punched card decks and sometimes line printer output. IBM tape labels IBM standard tape labels consist of VOL/HDR/EOV/EOF records. IBM tape labels on 9-track tapes use EBCDIC character encoding; 7-track tapes (now obsolete) used BCD encoding. ANSI tape labels ANSI/ISO/ECMA tape labels are similar to IBM tape labels but use the ASCII character set on 9-track tape. When originally defined in the mid-1960s, they used BCD on 7-track tape. Burroughs tape labels The Burroughs MCP running on the B5000 was one of the earliest systems to automatically read tape labels. When designed in 1961 it used a proprietary format coded in BCD (strictly, Burroughs Interchange Code or BIC), but was later able to read standard 7-track ANSI (then styled USASI) labels. RFID tags Some tapes (e.g., later versions of Linear Tape-Open and Advanced Intelligent Tape) use RFID tags. Often these RFID tags include tape metadata such as data locations, number of tape errors encountered, number of times the entire tape was read or written, etc. See also Microsoft Tape Format References External links ECMA-13, File Structure and Labelling of Magnetic Tapes for Information Interchange, 4th ed, December 1985. http://publib.boulder.ibm.com/infocenter/iseries/v5r4/index.jsp?topic=/rzatb/vdefn.htm http://techpubs.sgi.com/library/tpl/cgi-bin/getdoc.cgi?coll=linux&db=bks&fname=/SGI_EndUser/TMF_UG/ch02.html https://it-dep-fio-ds.web.cern.ch/it-dep-fio-ds/Documentation/tapedrive/labels.html Computer storage tape media
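As a concrete illustration of what such a recorded label looks like, the following is a minimal sketch using ordinary shell tools. It assumes the common ANSI/IBM convention of an 80-character label record whose first four characters are the label identifier (e.g. "VOL1") and whose next six characters are the volume serial number; the serial number used here is made up, and on IBM-labelled tapes the same fields would be EBCDIC-encoded rather than ASCII.
printf 'VOL1123456%70s' '' > vol1.rec   # build a minimal 80-byte VOL1-style record (made-up serial 123456)
wc -c vol1.rec                          # confirms the record is 80 bytes long
cut -c1-4 vol1.rec                      # label identifier field  -> VOL1
cut -c5-10 vol1.rec                     # volume serial number    -> 123456
An operating system reading a freshly mounted tape performs essentially the same check on the first block: if it begins with a recognised label identifier, the volume is treated as labelled and the serial number is used to identify it to the requesting program.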
1936321
https://en.wikipedia.org/wiki/IRIX%20Interactive%20Desktop
IRIX Interactive Desktop
IRIX Interactive Desktop (formerly called Indigo Magic Desktop) is a desktop environment normally used as the default desktop on Silicon Graphics workstations running IRIX. The IRIX Interactive Desktop uses the Motif widget toolkit on top of the X Window System found on most Unix systems. The default window manager on the IRIX Interactive Desktop is 4Dwm. History IRIS Workspace was SGI's first desktop environment, used in IRIX versions 3.0 through 5.0. It was succeeded by Indigo Magic Desktop, introduced in 1993 with IRIX 5.1 on the Indy multimedia workstation. These were among the first default desktop environments to come as standard on a UNIX computer system. The Indigo Magic Desktop was renamed to IRIX Interactive Desktop with the release of IRIX 6.5 in 1998. Eric Masson worked through a license with SGI to recreate the IRIX Interactive Desktop for Linux, in a project called MaXX Interactive Desktop. Features Indigo Magic Desktop is intended to promote easy access to multimedia functionality and provide user-customizable environments. The environment is based on audio cues for the start and finish of an operation, intelligent windows to track recently visited directories, and multiple virtual desktops grouped by function. IRIX Interactive Desktop has two primary components: the System Manager and the Toolchest. The System Manager is the main utility for desktop and system configuration. The Toolchest is a menu (normally located on the desktop) that shows which applications are installed on a particular Silicon Graphics workstation. When not in use, program windows minimize onto the desktop in a small rectangular shape, while the file manager that also displays the desktop uses distinct vector graphic icons (utilizing the IconSmith .fti file format). Reception UNIX Review magazine in March 1994 gave Indigo Magic Desktop a mixed review, with an overall rating of "superior". See also 4Sight IRIS Workspace MEX Xsgi References External links Toastytech IRIX Interactive Desktop screenshots SGIstuff : Software : IRIX Versions MaXX Interactive Desktop (Old Website) Indigo Magic User Interface Guidelines Desktop environments IRIX software Software that uses Motif (software)
30827957
https://en.wikipedia.org/wiki/Jeffrey%20Carr
Jeffrey Carr
Jeffrey Carr is a cybersecurity author, researcher, entrepreneur and consultant, who focuses on cyber warfare. Career In 2008, Carr founded Project Grey Goose, a crowd-sourced open-source intelligence effort to attribute major cyber attacks. The Project solicited the expertise of vetted volunteers, while seeking to filter out non-experts and cyber criminals. The Project's first area of research was the campaign of cyberattacks during the Russo-Georgian War. In 2011, Carr created the Suits and Spooks conference series, which offered a private forum for intelligence veterans to meet with technologists, academics, hackers, and business executives. The forum was acquired by Wired Business Media in 2014. Carr founded the now-defunct cybersecurity firms Taia Global Ltd and GreyLogic. He later wrote about the lessons he learned from their failures. Carr has lectured on cybersecurity issues at the Defense Intelligence Agency, U.S. Army War College, Air Force Institute of Technology, NATO’s CCDCOE Conference on Cyber Conflict, and DEF CON. Writing and research He is the author of Inside Cyber Warfare: Mapping the Cyber Underworld (O'Reilly Media 2009, 2011), which analyzed cyber conflicts from 2002 until 2009. In the book, Carr argued that international cyber attacks are predominantly deployed by non-state actors, who are sometimes encouraged and tolerated by state entities. He later told The Christian Science Monitor that it is private IT infrastructure, rather than government policy, that lies at the heart of US vulnerability to international cyber attacks. Carr has blogged about cyber security and warfare at Intelfusion.net (until September 1, 2010) and Forbes' The Firewall. Carr said he had quit The Firewall in protest, after his post on Yuri Milner's relationship to the Russian FSB was taken down by Forbes at the request of Milner's lawyer. In March 2017, Carr stated there was growing doubt in the computer security industry regarding the narrative of Russian state sponsorship of hacks associated with the 2016 US elections. He said that, because the FBI relied on forensic investigations by the global cybersecurity consultancy CrowdStrike, "everyone else is relying on information they provided." Several notable competitors of CrowdStrike, including Symantec and FireEye, examined the underlying data and endorsed CrowdStrike's conclusions. Professional publications “The Classification of Valuable Data in an Assumption of Breach Paradigm”, Georgetown Journal of International Affairs, March 2014 “The Misunderstood Acronym: Why Cyber Weapons aren’t WMD”, Bulletin of the Atomic Scientists, Sept 1, 2013, Vol. 69, No. 5, p. 3237 “Intelligence Preparation of the Information and Communications Environment”, Air & Space Power Journal, 2012, Vol. 24, No. 3 References External links Jeffrey Carr's Digital Dao Blog Living people O'Reilly writers Year of birth missing (living people)
33231878
https://en.wikipedia.org/wiki/Virtual%20Computer
Virtual Computer
Founded by Alex Vasilevsky, Virtual Computer is a venture-backed software company in the Boston area that produces desktop virtualization products, which combine centralized management with local execution on a hypervisor running on PCs. Virtual Computer has developed a type-1 hypervisor that runs directly on end-user PCs, delivering native PC performance and mobility. By running the workload on the PC, Virtual Computer enables companies to have centralized management without the servers, storage, and networking required for server-hosted VDI. The technology supports shared image management, enabling an IT professional to manage thousands of desktops and laptops the same way that they would manage one. History Virtual Computer was founded in November 2007 by the founder of Virtual Iron, Alex Vasilevsky, and the former co-founder and General Manager of Guardent, Dan McCall. The company was backed by investors including Highland Capital Partners and Flybridge Capital Partners. Funding Virtual Computer has received over $20 million in funding from Highland Capital Partners, Flybridge Capital, and Citrix. Partnerships In March 2011, Virtual Computer announced its collaboration with Lenovo to optimize Virtual Computer's NxTop client virtualization and management solution on Lenovo's business-oriented systems. These include select models of Lenovo's ThinkCentre A and M series desktops and ThinkPad T and X series laptops. Virtual Computer's NxTop desktop virtualization solution is compatible with the previous three generations of the Intel Core vPro processor family, including the Intel Core vPro i3, i5, and i7 processors, as well as all platforms with Intel Virtualization Technology. Product Virtual Computer's product is NxTop Enterprise. It released the fourth version of its client hypervisor, NxTop 4 Enterprise, in August 2011. NxTop combines a centralized management system with an enhanced Xen-based client hypervisor to create a single platform for any combination of local desktops, remote VDI sessions, and server- and cloud-based applications. NxTop provides IT control of the end-point through advanced management capabilities in NxTop Center. References External links Virtual Computer Official Website Virtual Computer uses virtualisation to manage laptops Client-side virtualization goes bare metal Virtual Computer collaborates with Lenovo-NxTop the industry’s best enterprise-class type 1 client hypervisor? Software companies of the United States
24809612
https://en.wikipedia.org/wiki/Gihan%20Wikramanayake
Gihan Wikramanayake
Gihan Nilendra Wikramanayake, FBCS, CITP (17 January 1960 – 5 January 2018) was a Sri Lankan academic. He was a Senior Professor in Computer Science attached to the Department of Information Systems Engineering. He was the Director of the University of Colombo School of Computing (UCSC) from 24 May 2010 to 31 May 2016, and the Head of the Department of Information Systems Engineering from September 2002 to September 2005. He also served as the Deputy Director of the UCSC from 2003 to 2006. Education Educated at St. Aloysius' College (Galle) and Royal College Colombo, he graduated from the University of Colombo with a BSc in Statistics & Mathematics with First Class Honours in 1984 and went on to gain an MSc and a PhD in Computer Science from the University of Wales, College of Cardiff in 1989 and 1996 respectively. Academic career In 1984 he joined the Computer Centre of the University of Colombo as a trainee programmer; he then moved to the Department of Statistics and Computer Science as a researcher before becoming an Assistant Lecturer. In 1990 he became a Lecturer, and a Senior Lecturer in 1996. Moving to the new Department of Computer Science in 2000, he became its acting Head in 2002. In 2002 he became the Head of the Department of Information Systems Engineering, UCSC, and in 2004 Deputy Director of the UCSC. From 2007 to 2008 he was the acting Head of the Department of Communication and Media Technologies. He was also a visiting scholar at the University of New Mexico, Stockholm University and Umeå University. In 2010 he became the Director of the UCSC and a Professor of Computer Science. He served two terms as Director of the UCSC. A fellow of the British Computer Society, he served as the Chairman of the BCS Sri Lanka Section for five years: 2007/08, 2009/10, 2010/11, 2011/12 and 2014/15. He also served as the chairman of the UCSC-owned company Theekshana from 2010 to 2016, and as a board member of LEARN from 2010 to 2016. He had been a member of the Sri Lanka Informatics Olympiad since 1998 and was involved in organizing IOI training from that year. He took part in IOI 1999 in Turkey as Deputy Team Leader, and served as team leader at the IOI in 2001–2007, 2009–2010, 2014 and 2016. He served as a Sri Lankan judge at APICTA in 2003–2007, 2009–2014 and 2016, and was the chief judge at APICTA 2015. Death On 5 January 2018, Wikramanayake died at the age of 57 from motor neurone disease. References External links University of Colombo School of Computing BCS Sri Lanka Section Committee 1960 births 2018 deaths Sri Lankan academic administrators Sri Lankan computer scientists Information systems researchers Alumni of the University of Colombo Alumni of the University of Wales Alumni of Royal College, Colombo Fellows of the British Computer Society Alumni of St. Aloysius' College, Galle
2425358
https://en.wikipedia.org/wiki/University%20of%20Aizu
University of Aizu
The University of Aizu () in Aizuwakamatsu, Japan, is the first university dedicated to computer science engineering in Japan. It has about 1,100 students enrolled in its undergraduate and graduate programs. The university's motto is "to advance knowledge for humanity" and it carries out significant research in computer science. Description The University of Aizu is in Aizuwakamatsu city in Fukushima prefecture, Japan. The university specializes in computer science education, both hardware and software, at the undergraduate and postgraduate levels. It is known for its open access to computers; there is a 1:1 ratio of computers to students, and students have access to a computer 24 hours a day. Additionally, the computers are replaced every three years, so the available computer equipment is always recent technology. In addition to computer science, English language education is an important aspect of the University of Aizu. The university is officially bilingual and all official meetings and correspondence are interpreted and translated. Approximately 40% of professors come from overseas, including countries such as Vietnam, India, South Korea, Canada, United States, Russia and China. Not only do students enroll in English courses throughout their undergraduate programs, many of their computer science courses are taught in English. Students are required to write a graduation thesis in English. The university has international students at the masters and doctoral levels. In 2005, the University of Aizu was chosen by the Japanese government, along with 20 other universities, to be a national center responsible for the improvement of international education. In this role, the University of Aizu carries out computer science research in collaboration with foreign universities such as Shanghai University, China and Saint Petersburg State University, Russia. The University of Aizu has research ties with many universities overseas, has international staff, and accepts students from abroad. The president is MIYAZAKI Toshiaki. Top Global University In September 2014, the Ministry of Education, Culture, Sports, Science and Technology (MEXT) selected the University of Aizu as one among 37 universities for the Top Global University Project. Campus The university is located on a 20 hectare campus in Aizu-Wakamatsu, Fukushima. The university has a student dormitory, sports and swimming facilities, and numerous playing fields. Faculties and graduate schools Computer Science and Engineering Information Technology and Project Management Graduate fields of study Computer Devices Information Systems Computer Network Systems Recognition and the Human Interface Algorithms and Theoretical Computer Science Computer Organization and Parallel Processing Synthetic Worlds, Virtual Reality and Multimedia Knowledge Engineering, Cybernetics and Software Systems Software Engineering and Information Technology Research institutes, centers, and facilities University Business Innovation Center Center for Language Research References External links University of Aizu--- Official Website Educational institutions established in 1993 Public universities in Japan Universities and colleges in Fukushima Prefecture 1993 establishments in Japan Aizuwakamatsu
58331015
https://en.wikipedia.org/wiki/Misfortune%20Cookie%20%28computers%29
Misfortune Cookie (computers)
Misfortune Cookie is a computer software vulnerability in the firmware of a certain set of network routers which can be leveraged by an attacker to gain access remotely. The CVSS rating for this vulnerability is between 9.8 and 10 on a scale of 10. In this scenario the attacker sends a crafted HTTP cookie attribute to the vulnerable system's (network router's) web-management portal, where the attacker's content overwrites the device memory. The contents of the cookie act as a command to the router, which then abides by it, resulting in arbitrary code execution. The vulnerability was discovered in the early 2000s but did not emerge publicly until 2014, when security researchers from the Israeli security firm Check Point made a public disclosure. The vulnerability still persists in over 1 million devices accessible over the internet, and in a total of about 12 million devices, covering around 200 different router brands. In 2018, the vulnerability again gained attention when the vulnerable firmware was found to be used in medical equipment, where it could potentially enable life-threatening attacks via the Internet of Things (IoT). Its severity was subsequently highlighted by ICS-CERT in an advisory. References Firmware Computer security exploits
308288
https://en.wikipedia.org/wiki/Portage%20%28software%29
Portage (software)
Portage is a package management system originally created for and used by Gentoo Linux and also by Chrome OS, Calculate, Sabayon, and Funtoo Linux among others. Portage is based on the concept of ports collections. Gentoo is sometimes referred to as a meta-distribution due to the extreme flexibility of Portage, which makes it operating-system-independent. The Gentoo/Alt project is concerned with using Portage to manage other operating systems, such as BSDs, macOS and Solaris. The most notable of these implementations is the Gentoo/FreeBSD project. There is an ongoing effort called the Package Manager Specification project (PMS), which aims to standardise and document the behaviour of Portage, allowing the ebuild tree and Gentoo system packages to be used with alternative package managers such as Paludis and pkgcore. Its goal is to specify the exact set of features and behaviour of package managers and ebuilds, serving as an authoritative reference for Portage. Overview Accessing Portage Portage is similar to the BSD-style package management known as ports, and was originally designed with FreeBSD's ports in mind. Portage is written in the Python programming language, and is the main utility that defines Gentoo. Although the system itself is known as Portage, it consists of two main parts, the ebuild system and emerge. The ebuild system takes care of the actual work of building and installing packages, while emerge provides an interface to ebuild: managing an ebuild repository, resolving dependencies and similar issues. (These two therefore have roughly the same relation as rpm has with yum, or dpkg has with APT.) A GTK+-based GUI, Porthole, is available for working with Portage. There is also the Himerge GUI, which stands for "Haskell Interface for eMerge". Functions provided Portage is characterized by its main function: compiling from source code the packages the user wishes to install. In doing so it allows customization of compiler and target-application options to fit the system's specifications and the user's own wishes. Functionalities related to system management include: allowing parallel package-version installation, tracking cross-package dependencies, managing a database of installed packages, providing a local ebuild repository, and synchronizing of the local Portage tree with remote repositories. Functionalities related to individual package installation include: specifying compilation settings for the target machine and choosing package components. Portage distinguishes between three levels of stability in ebuilds: stable (e.g., the software works as intended with no known security issues at time of release), keyword masked (mainly for packages that have not been sufficiently tested on the target system architecture to be considered stable) and hard masked (broken or very insecure) packages. Features Emerge The emerge command-line tool is the heart of Portage. The command is customizable with many options and modifiers. The emerge tool is the most important utility for accessing the features of Portage from the command line. The program calculates and manages dependencies, executes ebuilds and maintains the local Portage tree and database of installed packages. The compilation settings used by ebuilds can be changed through the CFLAGS environment variable, based on the specifications of the individual computer and on the user's desire for optimization. The emerge utility executes ebuilds in a sandbox environment. 
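As a concrete illustration of this workflow, the following is a minimal sketch; the compiler options, USE flags and package name are arbitrary examples chosen for illustration, not recommendations. Global settings live in /etc/portage/make.conf, and day-to-day operations go through emerge:
# /etc/portage/make.conf (excerpt): compiler settings and globally enabled or disabled USE flags
CFLAGS="-O2 -pipe"
CXXFLAGS="${CFLAGS}"
USE="X dbus -gnome"
# Refresh the local ebuild repository, then build and install a package from source
emerge --sync
emerge --ask app-editors/vim
# Rebuild any installed package whose USE flags have changed, including dependencies
emerge --ask --update --deep --newuse @world
Each build started in this way is executed inside the sandbox mentioned above.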
This way the system is protected from software executed by the ebuild and resulting binaries are only merged after a successful build and sandboxed install. What emerge installs as dependencies is affected by the USE flag-settings. They decide which optional features will be included when installing or upgrading an application. The emerge command can also be used to download and install precompiled binary files. USE flags The Portage system offers the use of "USE flags", which allows users to indicate which software features they would like to include (and exclude) while building packages. For example, there is a USE flag to include DVD support, where available, in packages compiled with the flag enabled. The USE flags affect which dependencies are required, generally affecting which optional features will be built into a given program when it is compiled. For example, in packages which use a configure script, the USE flag feature would translate to ./configure --with-feature. The specification of USE flags is the usual way to configure programs on Gentoo. USE flags may be set manually, or via user-friendly tools such as 'ufed' (USE flag editor), which lists flags along with their description. A list of available USE flags is available at the Gentoo website's USE Flag Index. ebuild Gentoo does not, by default, use binary packages as other package management systems do (like pacman or apt), employing instead a format known as the ebuild. Whereas RPM binaries are precompiled binaries, ebuilds are shell scripts with variables and functions which contain a description of the software, and instructions on how to obtain, configure, compile, and install it, more closely akin to (but more powerful than) the .spec files distributed in SRPMs. There are over 19,000 ebuilds available, the majority of which are distributed by the Gentoo mirrors. New and updated ebuilds can be obtained by synchronizing the local ebuild repository with the mirrors. This is done by executing the command emerge --sync. Historically, Gentoo has provided pre-compiled binary packages for many common programs, especially those which are lengthy to compile, such as Mozilla Firefox and OpenOffice.org. These are still installed with emerge, just by appending a "-bin" to the package name to instead install the binary version. Example of ebuild for Terminator:# Copyright 1999-2020 Gentoo Authors # Distributed under the terms of the GNU General Public License v2 EAPI=7 PYTHON_COMPAT=( python3_{6,7,8} ) DISTUTILS_USE_SETUPTOOLS="no" inherit distutils-r1 virtualx xdg-utils DESCRIPTION="Multiple GNOME terminals in one window" HOMEPAGE="https://github.com/gnome-terminator/terminator" SRC_URI="https://github.com/gnome-terminator/terminator/releases/download/v${PV}/${P}.tar.gz" LICENSE="GPL-2" SLOT="0" KEYWORDS="amd64 ~ppc x86" IUSE="dbus +libnotify" RDEPEND=" >=dev-libs/glib-2.32:2 dev-libs/keybinder:3[introspection] dev-python/configobj[${PYTHON_USEDEP}] dev-python/psutil[${PYTHON_USEDEP}] dev-python/pycairo[${PYTHON_USEDEP}] dev-python/pygobject:3[${PYTHON_USEDEP}] >=x11-libs/gtk+-3.16:3 x11-libs/vte:2.91[introspection] dbus? ( dev-python/dbus-python[${PYTHON_USEDEP}] ) libnotify? 
( x11-libs/libnotify[introspection] ) " BDEPEND=" dev-util/intltool " distutils_enable_tests setup.py PATCHES=( "${FILESDIR}"/terminator-1.91-without-icon-cache.patch "${FILESDIR}"/terminator-1.91-desktop.patch "${FILESDIR}"/terminator-1.92-make-tests-fail.patch "${FILESDIR}"/terminator-1.92-metainfo.patch ) src_prepare() { xdg_environment_reset distutils-r1_src_prepare } src_test() { virtx distutils-r1_src_test } pkg_postinst() { xdg_desktop_database_update xdg_icon_cache_update } pkg_postrm() { xdg_desktop_database_update xdg_icon_cache_update } Binary packages Gentoo does have a binary packaging format, which is a .tbz2 file (tar with bzip2 compression) with additional metadata. This feature enables the building of binary packages on one system (using Portage's buildpkg or quickpkg) followed by quick installation on other, identical systems (with Portage's getbinpkg or emerge -K). See Portage Features in the Gentoo Linux Handbook for more information. Masking Masking is how Gentoo determines which packages are suitable for a system. Ebuilds designed for different architectures or experimental software are usually masked in a manner which prevents a stable system from installing them without user intervention. Packages that generally just require some testing but will often work fine are said to be keyword masked (i.e. they are available for systems with an ACCEPT_KEYWORDS make.conf entry starting with the character ~, such as ~x86, ~amd64, ~ppc). The standard way to unmask an individual keyword masked package is by adding a file with the full package name and keyword to /etc/portage/package.accept_keywords/. Users can make subdirectories here as well, allowing for custom organization. For example, if a masked package had multiple masked dependencies, the user could make a directory with the name of the original masked package, and put all the mask files for the package and its dependencies in that directory. This scheme replaces the older scheme of having /etc/portage/package.accept_keywords as a text file list. Packages with known problems or not considered mature enough to be candidates for stable are hard masked by one of the various package.mask files in /usr/portage/profiles, and such entries are generally accompanied by a comment from developers explaining the reason for the mask. Gentoo/Alt Gentoo/Alt is a project created to manage porting the portage framework and other features to other operating systems, such as Mac OS X and the free BSDs. Gentoo/Alt was set up by Pieter Van den Abeele and Daniel Robbins after Pieter Van den Abeele founded Gentoo for Mac OS X. Mac OS X Gentoo for Mac OS X was the first non-Linux project of Gentoo and focused on making the Gentoo experience available on Apple's operating system by introducing the Portage system as a separate entity. This was roughly similar to Fink and MacPorts, but it used Portage instead of a Debian-like or Ports-like system. Later on, Gentoo for Mac OS X was made a subproject of Gentoo/Alt. The project is no longer active, because its prime assumption of using and not modifying the host OS appeared not to be realistic and eventually broke most packages or made them hardly maintainable. Gentoo for Mac OS X has been superseded by Gentoo Prefix, which is currently what Gentoo offers to Mac OS X users. The Prefix project on Mac OS X is tested and supported on Mac OS X Tiger, Leopard and Snow Leopard on PowerPC, IA-32, and x86-64 architectures. 
FreeBSD Gentoo/FreeBSD brings the Gentoo Linux design, structure, and tools such as Portage and the Gentoo Linux base layout to the FreeBSD operating system. Gentoo's GNU toolchain is used instead of the original FreeBSD one. Although the project is no longer maintained by any active Gentoo developers, there are instructions that allow installation of a full Gentoo/FreeBSD system. FreeBSD system ebuilds are integrated into the main portage tree, but this port is far from being complete due to the amount of packages needing to be ported and the lack of a proper Live CD (FreeSBIE's Live CD or FreeBSD setup CD is used during installation). The current (semi) official logo for Gentoo/FreeBSD is a daemonized "g", derived from original Gentoo Linux logo and inspired by the BSD Daemon. It was designed by Marius Morawski, responding to an unofficial contest launched by Diego Elio Pettenò on his blog. NetBSD Gentoo/NetBSD is a project to provide a GNU userland managed by Portage with a NetBSD kernel. The project was started by Damian Florczyk. Only the x86 architecture is targeted and the system as a whole is in an incomplete state. OpenBSD Gentoo/OpenBSD is a Gentoo/*BSD subproject to port Gentoo features such as Portage to the OpenBSD operating system. It was originally started by Grant Goodyear and is currently on development hiatus. An ISO image based on OpenBSD 3.8 is currently maintained by Karol Pasternak and can be downloaded from the project's web site. DragonFlyBSD Gentoo/DragonFlyBSD is a currently unofficial port to the DragonFlyBSD kernel. The project is developed by Robert Sebastian Gerus. GNU Hurd An unofficial port to GNU Hurd was also being developed, but was abandoned in late 2006. Android Led by Benda Xu, "Gentoo RAP for Android Devices" installs a variant of Gentoo called Gentoo RAP in a directory prefix alongside Android. The Linux kernel is used by both Android and Gentoo RAP. Interix Gentoo/Interix (eprefix) is a port of Gentoo that runs atop the Interix Subsystem for Windows which is also known as Microsoft Windows Services for UNIX (SFU) or Subsystem for Unix-based Applications (SUA). A result of the Gentoo/Interix project is the ability to install and use the Portage system to emerge native Windows applications (requires Visual Studio, 2008 Express Edition will do too). However, this feature does not support the wide variety of packages supported by other platforms (including Interix). Portaris An unofficial port to the Solaris operating system, "Portaris", was announced around 2006, but never got into a public state. It was superseded by the Gentoo Prefix project. Plan 9 Started in 2011 as a Google Summer of Code project, Plan 9 For Gentoo takes the Gentoo base and overlays a userspace inspired by Plan 9. The stated objectives of the project are to have better hardware support, better software support and to create a "culture shock" for people that have previously used Linux systems. The project is maintained by Robert Seaton. 
See also pkgsrc References External links Official documentation Funtoo Portage Documentation Gentoo Portage Features Gentoo Portage Introduction Working with Portage Gentoo Packages, information about currently available ebuilds Unofficial documentation Gentoo-Portage, alternative presentation of current ebuild information Using Portage Installing portage on other Linux distros Similar software Emerde, portage-like project for Slackware Pkgcore, bsd re-implementation of portage Paludis, a package manager which can be used with ebuilds and other formats Arch Build System a ports-like system for Arch Linux Free package management systems Free software programmed in Python Gentoo Linux Linux package management-related software
3651356
https://en.wikipedia.org/wiki/Program%20temporary%20fix
Program temporary fix
In IBM terminology a Program temporary fix or Product temporary fix (PTF), the expansion varying with the date, is a single bug fix, or group of fixes, distributed in a form ready for customers to install. A PTF normally follows an APAR (Authorized Program Analysis Report), and where an "APAR fix" was issued, the PTF "is a tested APAR" or set of APAR fixes. However, if an APAR is resolved as "Fixed If Next" or "Permanent Restriction" then there may be no PTF fixing it, only a subsequent release. PTF installation Initially, installations had to install service via a semi-manual process. Over time, IBM started to provide service aids such as IMAPTFLE and utilities such as IEBEDIT to simplify the installation of batches of PTFs. For OS/360 and successors, this culminated in System Modification Program (SMP) and System Modification Program/Extended (SMP/E). For VM, this culminated in Virtual Machine Serviceability Enhancements Staged (VM/SP SES) and VMSES/E. For DOS/360 and successors, this culminated in Maintain System History Program (MSHP). PTF usage PTFs used to be distributed in a group on a so-called Program Update Tape (PUT) or Recommended Service Upgrade (RSU), approximately on a monthly basis. They can now be downloaded straight to the system through a direct connection to IBM support. In some instances IBM will release a "Cumulative PTF Pack", a large number of fixes which function best as a whole, and are sometimes codependent. When this happens, IBM issues compact discs containing the entire PTF pack, which can be loaded directly onto the system from its media drive. One reason for the use of physical media is size, and the related (default) size limits: "By default, the /home file system on VIOS (Virtual I/O Server) for System p is only 10GB in size", so if the "Cumulative PTF Pack" is larger than that default, "If you try (to) FTP 17GB of ISO images you will run out of space." The PTFs are processed using SMP/E (System Modification Program/Extended) in several stages over a period of weeks. Receive (load) the proper PTF to the HA (non-production) system. Apply Check to make sure the necessary prerequisite PTFs are installed, or are being installed, and that none have been marked "held" for errors or other problems. Apply the PTF to the HA system, and review the system's performance over multiple weeks to ensure it does not hinder productivity or functionality. Receive and apply the PTF to the production system and monitor its effects for several weeks. If the system is adversely affected by the PTF, a system administrator may sometimes reject (un-apply) the PTF and seek further support from IBM. However, if no problems are found after the PTF is applied, it can be permanently installed, accepted, or committed to the system. These repairs to IBM software are often in response to APARs submitted by customers and others and acted on by IBM, and are a common first step to resolving software errors. It is generally expected by the customer that the problem will be fully corrected in the next release (version) of the relevant product. Comparison to APAR At times IBM software has a bug. Once IBM has ascertained that the cause is not third-party hardware, non-IBM software or user-specified configuration errors, IBM support staff, if they suspect that a defect in a current release of an IBM program is the cause, will file a formal report confirming the existence of an issue. This is referred to as an Authorized Program Analysis Report (APAR). 
APARs also include information on known workarounds, on whether a formal fix is scheduled to be included in future releases, and on whether or not a Program Temporary Fix (PTF) is planned. Summary There are at least two levels of fix: the APAR may result in an "APAR fix", and then in a permanent correction called a PTF. The focus of the "APAR fix" is "to rectify the problem as soon as possible", whereas the PTF "is a tested APAR... The PTF 'closes' the APAR." Prior to that, an APAR is "a problem with an IBM program that is formally tracked until a solution is provided." Humor Customers sometimes explain the acronym in a tongue-in-cheek manner as "permanent temporary fix" or, more practically, "probably this fixes", because they have the option to make the PTF a permanent part of the operating system if the patch fixes the problem. See also Microsoft Update Catalog Patch (computing) Patch Tuesday SMP/E Software Updater Windows Server Update Services Windows Update References External links IBM Authorized Program Analysis Report IBM software Software maintenance
29734166
https://en.wikipedia.org/wiki/Venues%20of%20the%201984%20Summer%20Olympics
Venues of the 1984 Summer Olympics
For the 1984 Summer Olympics, a total of thirty-one venues were used. The Los Angeles Memorial Coliseum and the Rose Bowl, two venues previously used for the 1932 Summer Olympics, were used for the 1984 Games. Between the 1932 and the 1984 Summer Olympics, the expansion of professional sports teams assisted in the growth of the facilities that would be used for the 1984 events. Only two new permanent venues were constructed, both using corporate sponsorship, though neither were mentioned in the official Olympic report. Many other venues had temporary adjustments and returned to their normal usage once the 1984 Olympics were completed. Stanford Stadium in Palo Alto and the Rose Bowl later served as venues for the Super Bowl, the FIFA World Cup, and the FIFA Women's World Cup. Venues Los Angeles Elsewhere in Southern California Other venues Demonstration sports Before the Olympics When the Summer Olympics came to Los Angeles in 1932, two of the venues that hosted were the Los Angeles Memorial Coliseum and the Rose Bowl in Pasadena. The former hosted the athletics, equestrian eventing and jumping, field hockey, and gymnastics event while the latter was constructed into a velodrome for track cycling events. In 1973, the Coliseum played host to Super Bowl VII where the Miami Dolphins defeated the Washington Redskins 14-7 to go undefeated for the entire 1972 National Football League (NFL) season. In 1979, the Los Angeles Rams won their seventh straight NFL National Football Conference (NFC) Western Division title, and finally advanced to Super Bowl XIV where they lost to the Pittsburgh Steelers in the Rose Bowl. That season would also be the Rams' last season at the Coliseum where they lost their last game there against the New Orleans Saints 29-14. For the 1980 NFL season, the Rams moved to Anaheim Stadium (Angel Stadium of Anaheim since 2004) in Anaheim (southeast of Los Angeles), although they lost their first game there against the Detroit Lions. The Rams would remain at Anaheim Stadium until the end of the 1994 season when they moved to St. Louis, Missouri where they remained until 2015 before returning to Los Angeles in 2016. Their last game in Anaheim was against the Redskins, losing 24-21. Following the 1981 NFL season, the Coliseum became home for the Oakland Raiders NFL team, where they opened up their 1982 season results with a win over the defending Super Bowl champions San Francisco 49ers 23-17. The Coliseum since 1923 has continued to play host for the University of Southern California football team and still does as of 2010. The Trojans' cross-town rivals, the UCLA Bruins, shared the Coliseum with Trojans from 1928 to 1981. In 1982, the Bruins moved to the Rose Bowl where they have remained as of 2020. Besides Super Bowl XIV, the Rose Bowl hosted Super Bowl XVII where the Redskins avenged their Super Bowl loss to the Dolphins from ten years earlier with a 27-17 victory. Santa Anita Park opened in 1934. Normally used for Thoroughbred horse racing, the home stretch of the track was converted to house dressage, eventing, and jumping events for the 1984 Summer Games, including stands. Seven years after the 1932 Summer Games, a Southern California Committee for the Olympic Games (SCCOG) was created in an effort to bring the Summer Olympics back to Los Angeles. The first attempt was for the then-cancelled 1940 Summer Olympics in Tokyo. 
Following World War II, Los Angeles and SCCOG made bids for the 1952 and 1956 Summer Olympics, losing to eventual winners Helsinki and Melbourne, respectively. The United States Olympic Committee selected Detroit over Los Angeles for the United States's bid for the Summer Olympics between 1960 and 1972 without success. SCCOG did provide assistance to Squaw Valley in the northern part of the state near Lake Tahoe for the 1960 Winter Olympics. Los Angeles's first bid for the 1976 Summer Olympics came in 1967, though it had to beat its northern neighbor San Francisco to become the American representative in the International Olympic Committee (IOC) bidding in 1969. Montreal was awarded the 1976 Summer Olympics over Moscow and Los Angeles. New leadership in SCCOG in 1972, along with lessons learned, helped Los Angeles in bidding for the 1980 Summer Olympics, though this time the city would lose out to Moscow. Los Angeles bid for the 1984 Summer Olympics with a minimal amount of new construction costs and a reliance on corporate sponsorships (unlike Montreal 76 and Moscow 80, which were government funded with high construction costs). Bid studies were done in Los Angeles between 1975 and 1978. The city was awarded the 1984 Games in 1978 by the IOC since it was the sole bidder. In 1959, the Los Angeles Memorial Sports Arena was completed. The following year, it hosted the 1960 Democratic National Convention. Following the 1959-60 NBA season, the National Basketball Association (NBA) Lakers franchise would move from Minneapolis to Los Angeles, where they would use the Los Angeles Memorial Sports Arena from the 1960-61 to the 1966-67 season before they moved to The Forum in Inglewood, a Los Angeles suburb, for the 1967-68 season. From 1959 to 1964, the Sports Arena served as home for the UCLA Bruins men's basketball team until the Bruins moved to Pauley Pavilion on the UCLA campus. The USC Trojans men's basketball team also used the Sports Arena as its home. Lake Casitas was formed when the Ventura River was dammed in 1962. This was done for drinking water purposes in the Oak View area. In 1966, Los Angeles was awarded a National Hockey League (NHL) franchise, the Los Angeles Kings. They spent the first two months of their inaugural season at the Long Beach Arena before joining the NBA's Lakers at The Forum. Only two new permanent venues were constructed for the Games: the Olympic Velodrome on the California State University, Dominguez Hills campus and the Olympic Swim Stadium on the University of Southern California campus. The Velodrome was constructed between 1981 and 1982, while the Swim Stadium was constructed between 1980 and 1983. 7-Eleven convenience stores sponsored the Velodrome, while McDonald's sponsored the Swim Stadium, though neither corporate name was mentioned in the official Olympic report. Temporary venues were set up for El Dorado and Prado Parks. During the Olympics At the Coliseum, Carl Lewis of the United States matched the four gold medals won by Jesse Owens at the 1936 Summer Olympics in Berlin with four golds of his own in the men's 100 m, 200 m, 4 × 100 m relay, and long jump events. During the inaugural women's marathon event, Honduras's Leda Díaz de Cano fell 6.5 minutes behind the lead pack early in the race, trailed by 27.5 minutes later on, and eventually left the course. During the cycling men's individual road race, a crowd of 300,000 lined the route. 
After the Olympics Stanford Stadium, the host to some of the football preliminaries, played host to NFL's Super Bowl XIX in January 1985, where the 49ers defeated the Dolphins 38-16. The stadium is still home to the Stanford University football team, even after it was reconstructed in 2006, reducing its capacity to 50,000. For the 1994 FIFA World Cup, Stanford and the Rose Bowl both hosted matches. Stanford hosted the quarterfinal match between Sweden and Romania while the Rose Bowl hosted the final match between Brazil and Italy, both matches were decided by penalty kicks. Five years later, the two venues would be used to host FIFA Women's World Cup matches. Stanford hosted the semifinal match between the United States and Brazil while the Rose Bowl hosted the final match between the United States and China, also decided in a shootout. Prior to the World Cups, the Rose Bowl also hosted Super Bowls XXI and XXVII. The Rose Bowl remains the venue for UCLA's football team while Pauley Pavilion remains the venue for UCLA's basketball teams. The Raiders remained at the Coliseum until the end of the 1994 NFL season. The last Raiders game played at the Coliseum was a 19-9 loss to the Kansas City Chiefs. Next year, the Raiders returned to Oakland where they remained until 2020 when they moved to their current home in Las Vegas, Nevada. In January 2016, the then-St. Louis Rams received approval from the NFL to return to Los Angeles, returning to the metropolitan area after a 21-year exodus; the Rams played most of their home schedule in the Coliseum form 2016 to 2019 while SoFi Stadium was under construction in Inglewood (The Rams and Los Angeles Chargers (formerly San Diego) moved into SoFi in 2020.). Both association football venues on the East Coast of the United States, Harvard Stadium and Navy–Marine Corps Memorial Stadium, remain in use to this day. Harvard Stadium, on the campus of Harvard University, is best known as home to the school's (American) football team, but is also home to Harvard's teams in men's and women's lacrosse, and has been used for several other sports. Navy–Marine Corps Memorial Stadium, on the grounds of the United States Naval Academy, hosts the Academy's football, men's lacrosse, and women's lacrosse teams. Since the 2013 edition, the Military Bowl, a college football bowl game, has been played at the stadium as well. The Los Angeles Memorial Sports Arena remained home to the University of Southern California's basketball team until the 2005-06 NCAA basketball season when the Trojans moved on campus. For the 1984-85 NBA season, the Clippers franchise relocated from San Diego to Los Angeles where they played at the Los Angeles Memorial Sports Arena. The Clippers shared its home between the Los Angeles Memorial Sports Arena and the Arrowhead Pond of Anaheim (Honda Center since 2006) from the 1994-95 to the 1997-98 NBA seasons. The Los Angeles Memorial Sports Arena was closed in March 2016, and demolished between September and October 2016. Banc of California Stadium, a soccer-specific stadium and home of Los Angeles FC of Major League Soccer, was constructed on the site of the Sports Arena and opened in April 2018. In late 1999, the Staples Center opened in downtown Los Angeles. For the 1999–2000 NBA and NHL seasons, the NBA's Lakers and Clippers, and the NHL's Kings all moved out of the Los Angeles Sports Arena and the Forum in Inglewood, and moved into the Staples Center, renamed Crypto.com Arena in early 2022. 
The Olympic Velodrome located on the California State University, Dominguez Hills campus was demolished in 2003. Meanwhile, the Olympic Swim Stadium remains in use on the University of Southern California's campus both for recreation and for competition. Many of the venues which were used during the 1984 Olympics will host events at the 2028 Summer Olympics when Los Angeles hosts the games for a third time. External links References 1984 Summer Olympic venues
17199167
https://en.wikipedia.org/wiki/Wind%20energy%20software
Wind energy software
Specialized wind energy software applications aid in the development and operation of wind farms. Pre-feasibility and feasibility analysis The RETScreen software wind power model is designed to evaluate energy production and savings, costs, emission reductions, financial viability and risk for central-grid, isolated-grid and off-grid wind energy projects, for multi-turbine and single-turbine hybrid systems. Developed by the Government of Canada, the software is free, multilingual, and includes links to wind energy resource maps. The Wind Data Generator (WDG) is a wind energy software tool capable of running the WRF (Weather Research and Forecasting) model to create a wind atlas and to generate wind data at resolutions of 3 km to 10 km. Turbine design Software also helps design wind turbines; there are several aero-elastic packages that are used in this design process. FOCUS6 aids in the design of wind turbines and turbine components such as rotor blades. It was developed by the Knowledge Centre Wind turbine Materials and Constructions (WMC) and the Energy Research Centre of the Netherlands (ECN). The National Wind Technology Center (NWTC), a division of the U.S. National Renewable Energy Laboratory (NREL), has developed many packages which are used by turbine manufacturers and researchers, including a suite of turbine design and performance prediction codes which rely on Blade Element Momentum (BEM) theory. WT_Perf uses steady BEM theory to model turbine performance. FAST is a comprehensive aero-elastic simulator which uses unsteady BEM theory to model a turbine as a collection of rigid and flexible bodies in a spatiotemporal field of turbulent flow. Germanischer Lloyd found FAST suitable for "the calculation of onshore wind turbine loads for design and certification." The open source software QBlade, developed by the wind energy research group of the Hermann Föttinger Institute at TU Berlin (Chair of Fluid Dynamics), is a BEM code coupled with the airfoil simulation code XFOIL. It allows the user to develop or import airfoil shapes, simulate them and use them for the design and simulation of wind turbine blades and rotors using steady-state BEM theory. The software is built with the Qt framework and thus includes a graphical user interface. The open source software Vortexje, developed by Baayen & Heinz GmbH in Berlin, is an unsteady 3D panel method implementation suitable for dynamic simulation of vertical and horizontal axis wind turbines. Easily coupled with other simulation environments such as Simulink and Dymola, it is suitable for aerodynamic optimization, fluid-structure interaction problems, and unsteady control system simulation. Ashes is a software package for analyzing aerodynamic and mechanical forces on onshore and offshore horizontal axis wind turbines. It is based on research done at the Norwegian University of Science and Technology in Trondheim, Norway. Flow modeling Wind flow modeling software predicts important wind characteristics at locations where measurements are not available. Furow offers both a linear flow model and a computational fluid dynamics model in the same package. WAsP was created at Denmark's Risø National Laboratory; it uses a potential flow model to predict how wind flows over terrain at a site. Meteodyn WT, Windie, WindSim, WindStation and the open-source code ZephyTOOLS use computational fluid dynamics instead, which is potentially more accurate but more computationally intensive. 
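The reason wind-speed prediction receives so much attention is the strongly non-linear relationship between wind speed and power: the power available to a turbine scales roughly as 0.5 × ρ × A × Cp × v³ (air density × swept rotor area × power coefficient × the cube of the wind speed), so a modest error in predicted wind speed produces a much larger error in predicted energy yield. The following back-of-envelope sketch, with purely illustrative numbers that are not drawn from any of the packages above, evaluates that estimate with a short awk program:
awk -v rho=1.225 -v r=45 -v cp=0.40 -v v=8 'BEGIN {
  area = 3.14159265 * r * r              # swept rotor area in m^2 for an assumed 45 m blade radius
  power = 0.5 * rho * area * cp * v^3    # power in watts at an assumed 8 m/s wind speed
  printf "%.0f kW\n", power / 1000       # prints roughly 798 kW for these assumed values
}'
Because of the cubic term, the same hypothetical turbine at 10 m/s would deliver roughly twice that output, which is why flow models and measurement campaigns concentrate on getting the wind-speed estimate right.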
Farm modeling This software simulates wind farm behavior, most importantly to calculate its energy output. The user can usually input wind data, height and roughness contour lines (topography), turbine specifications, background maps, and define environmental restrictions. Processing this information produces the design of a wind farm that maximizes energy production while accounting for restrictions and construction issues. Packages include Furow, Meteodyn WT, openWind, WindFarm, WindFarmer: Analyst, WindPRO, WindSim and WindStation. WakeBlaster is a specialised CFD service for modelling wind farm wake losses. Farm visualization Wind farm visualization software graphically presents a proposed wind farm, most importantly for the purpose of obtaining building permits. The primary techniques include photomontages, zone-of-visual-impact maps and three-dimensional visualization (perspective views of the landscape often incorporating aerial photography and including turbines and other objects). Farm monitoring Wind farm monitoring software shows operators whether turbines are performing as expected and flags emerging faults. Other functions of monitoring software include reporting, analysis of measurement data (such as power curves) and tools for monitoring environmental constraints (bat control, etc.). Prediction software For existing wind farms, several software systems exist which produce short- and medium-term forecasts of the generated power (for single farms or complete forecast regions) using existing numerical weather prediction (NWP) data and live (SCADA) farm data as input. Examples of numerical weather prediction models used for this purpose are the European HiRLAM (High Resolution Limited Area Model) and the GFS (Global Forecast System) from NOAA. References Wind power Computer-aided engineering software
2632722
https://en.wikipedia.org/wiki/Freesat
Freesat
Freesat is a British free-to-air satellite television service, first formed as a joint venture between the BBC and ITV plc. The venture was formalised in a memorandum in 2007 and the service has been marketed since 6 May 2008. Freesat offers a satellite alternative to the Freeview service on digital terrestrial television, with a broadly similar selection of channels available without subscription for users purchasing a receiver. The service also makes use of the additional capacity available on satellite broadcasting to offer a selection of 24 high-definition channels from broadcasters including BBC, ITV, Channel 5, Arirang TV, Bloomberg, Daystar, FreeSports, Discovery Networks, France 24, NHK, RT UK and TRT World. Freesat's main competitors are Freeview and Virgin Media, and a combination of IPTV and free-to-air services offered by BT, Plusnet and TalkTalk, using YouView. In February 2021, it was announced that, subject to regulatory approval, Freesat was to merge its operation with Digital UK, the joint venture of BBC, ITV and Channel 4 which manages the broadcast, streaming and EPG of Freeview. In July 2021 Digital UK acquired Freesat from its two shareholders, the BBC and ITV, and the merger was completed. History Background The BBC and ITV, the two biggest free-to-air broadcasters in the UK, make their services available digitally through three routes: free-to-air via digital terrestrial and digital satellite, and subscription-only via digital cable. On digital terrestrial, the channels have always been available free-to-air with the appropriate equipment. In 2007 Freeview was available to only 73% of the population. After analogue TV services were replaced in the digital switchover, this increased to 98.5% for the public service channels and 90% for the full 'Freeview' service. To provide more widespread coverage and a larger number of channels, a digital satellite alternative was felt necessary. Initially, both the BBC's and ITV's channels were encrypted, since the original Astra satellites used for Sky broadcast to most of Europe but the broadcasters' rights for premium content such as films and sports typically covered the UK only. The use of encryption meant that anyone wishing to view the channels had to purchase equipment from Sky and pay for a free-to-view viewing card to decrypt the channels. Similarly, to use the VideoGuard encryption, the broadcasters needed to pay a fee to NDS Group. Move to free-to-air In May 2003 the BBC moved most of its channels from the Astra 2A satellite to Astra 2D, which has a footprint that focuses more tightly on the UK. This move allowed the BBC to stop encrypting its broadcasts while continuing to meet its rights obligations. It dropped the encryption two months later. Two months later, ITV, whose channels had already been located on the Astra 2D satellite since launching on the Sky platform some years earlier, also made their channels free-to-air. On 18 November 2008, Channel 5 commenced broadcasting a single channel via Freesat, eventually adding its ancillary services 5USA and 5* (now known as 5STAR) three years later in December 2011. Channel 5 HD was later added to Freesat following the removal of BBC Three as a broadcast SD and HD channel (when it moved online to BBC iPlayer only). Viva moved from free-to-view to free-to-air on satellite on 19 March 2013, before launching on Freesat on 2 April 2013.
On 2 April 2013, all seven of Box Television's channels left Sky's subscription package, with six becoming free-to-air on satellite; on 15 April four of the channels – The Box, Kerrang! TV, Kiss TV and Smash Hits – were added to the Freesat EPG. This was followed by Heat and Magic on 29 April. The free-to-air channels can be received using any standard satellite (DVB-S) receiver, although those not licensed by Freesat will need to be re-tuned manually if/when channel frequencies are changed. (See next section.) Managed service The Freesat project aims to provide a managed service with an Electronic Programme Guide and interactive features similar to the Freeview service launched three years earlier. Unlike Freeview, however, these features are only available on approved receivers manufactured under licence from Freesat. The initial plan was to launch the service in early 2006. This was postponed to Autumn 2007 as approval from the BBC Trust was only received in April 2007. However, the service was further delayed and was officially launched on 6 May 2008. Launch channels The service launched officially on 6 May 2008. From the launch, Freesat advertised all national television channels from the BBC and ITV as being available on the platform (excluding ITV2 +1), as well as all national BBC radio networks. Channel 4 also managed to make most of its channels free-to-air in preparation for the launch. In addition some channels from other broadcasters such as Chello Zone, CSC Media Group, Al Jazeera English, Zee Live, Zee News, RIA Novosti and Euronews were included on the channel list. High-definition BBC HD was the only high-definition channel available on Freesat from launch day, with ITV HD added as a "red-button" interactive service from 7 June 2008. On 2 April 2010 ITV HD changed from an interactive service to a full-time channel called ITV1 HD, simulcasting the main ITV1 channel. The name was changed back to ITV HD on 14 January 2013. BBC One HD, a high-definition simulcast of BBC One, was made available on Freesat and other platforms on 3 November 2010. Channel 4 HD also became available on the platform on 19 April 2011, but was withdrawn on 22 February 2018. NHK World HD was added to Freesat on 9 May 2011; it shared its channel number with its standard definition counterpart and was therefore only listed on high definition receivers, which were unable to access the standard definition channel via the EPG. (The SD channel ceased transmission on 1 October 2011.) On 23 July 2012, the BBC added 24 temporary channels to cover the 2012 Summer Olympics, the channels share their EPG slot with their standard definition counterpart. On 29 August 2012, Channel 4 added three temporary channels covering the 2012 Summer Paralympics in high definition from the following day; the three channels also share their EPG slots. On 14 February 2013, RT HD was added to Freesat, sharing its channel number with its standard definition simulcast. On 26 March 2013, BBC HD was replaced by a high-definition simulcast of BBC Two. On 13 June 2013, an HD stream of the BBC Red Button was temporarily made available on the EPG. On 16 July 2013, the BBC announced they would be launching five new HD channels in early 2014. The five channels (HD simulcasts of BBC Three, BBC Four, CBBC, CBeebies and BBC News) were in fact launched early – on 10 December 2013. 
BBC Three and BBC Three HD ceased broadcasting as linear channels and are now available only through BBC iPlayer, which can be viewed in HD on televisions connected to the internet (directly via Ethernet cable or via Wi-Fi) or on a computer; a television licence is required to legally receive BBC channels, including via iPlayer, on any device. Channel 4 pulled its 4HD service from Freesat on 22 February 2018, along with its All 4 catch-up on-demand service, claiming that Freesat had significantly increased Channel 4's fee for the Freesat platform. Freesat, however, stated that it had not increased its fee for the All 4 catch-up service. Channel 5 HD and S4C HD continued to broadcast on Freesat, along with all the Channel 4 SD (standard definition) channels including Channel 4, Film4, E4, 4seven and the +1 channels. Channel 4 HD returned to Freesat on 8 December 2021. Regional variations Some channels (notably BBC One and ITV) are transmitted in regional variations and the appropriate services are selected by the Freesat receiver from the user's postcode. In March 2010, ITV altered several of their regions from free-to-air transmission to free-to-view (because they were moved to a satellite from which transmission covers a much larger area than just the UK, and content licensing means that they had to be encrypted). As a result, a few Freesat viewers (who cannot receive free-to-view, encrypted content) were moved to regional variations not corresponding to their actual location. Other regions may be chosen by entering a postcode from the preferred television region into the receiver. BBC English regional content is only available in SD, whereas Wales, Scotland and Northern Ireland programmes are in HD on BBC One only. Future channels and recent launches On 11 October 2021, That's TV Gold launched on Freesat channel 178, broadcasting classic TV entertainment, music and films from the 1960s to today, 24 hours per day. On 8 December 2021, the high-definition version of Channel 4 re-joined the Freesat line-up (it had left in February 2018) and Channel 4's music channels, Box Hits, The Box, Kiss, Magic and Kerrang!, also started broadcasting again on the platform. Video on demand The BBC began to roll out a beta version of BBC iPlayer for Freesat devices in early 2010. BBC iPlayer is an internet-based service with around 2500 hours of television available on demand. ITV Hub is available on Humax devices (where it was extended from the original 7-day to a 30-day catch-up service, albeit with a larger ITV ident/logo in the top left of the screen), Manhattan devices, and some Sagemcom devices; however, it is not available on 2nd-generation Humax devices to viewers using DG and TD postcodes, and it is not clear why this is the case. In 2010, Freesat also indicated an intent to launch a receiver featuring the YouView service (then called 'Project Canvas') and said that the video-on-demand services 4oD (now All 4) and Demand 5 were under consideration. On 28 July 2011, the BBC Trust approved proposals to introduce the listing of pay content delivered on-demand via broadband. The Trust allowed the BBC to continue to play a part in Freesat, as the plans did not represent a significant change to the approval previously given in 2007. There was no need for a Public Value Test or for further regulatory process. Under the plans some pay content, such as films, would be added to the Freesat EPG alongside the existing free-to-air content.
However, there would be no adult material or live-streamed sports coverage. Freesat itself would not supply any of the on-demand content, but would allow third parties to do so through its EPG. Some content would also be made available through existing channels using an on-screen prompt that would take viewers to an on-demand environment. Pay-TV sales would be handled by a third party, with Freesat operating the conditional access system that would underpin it. The plan was to use the then-upcoming launch of G2-spec receivers to add support for Digital Rights Management and, where technically possible, on existing receivers. On 29 November 2011, a beta trial for the subscription-based on-demand movie service BoxOffice365 was added to the Freesat EPG. On 11 March 2013, BoxOffice365 withdrew from Freesat. The Freetime guide also features a backwards EPG and a Showcase section offering recommendations. HTML versions of BBC iPlayer and ITV Hub were also to launch; both services use MHEG-5 on first-generation devices. YouTube launched on Freetime receivers on 7 March 2013, the first deployment of YouTube's HTML app in a Western European TV service. 4oD launched on Freesat's Freetime receivers on 27 June 2013, making Freesat the first UK TV platform to host the HTML5 version of 4oD. Demand 5 arrived on Freesat on 6 August 2013. In December 2015, Freesat announced the availability of an application for the Netflix subscription TV streaming service on the Humax HDR-1000S, HDR-1010S, HDR-1100S and HB-1000S Freetime receivers, with Netflix expected to be available on other Freetime receivers in the future. In January 2016, Saorview, the Irish free TV service, announced it would be launching an online catch-up and on-demand TV service using the commercial version of the Freetime software, including EPG roll-back and remote recording, with Freesat providing a fully managed service. In September 2016 the BBC closed the MHEG version of the BBC iPlayer (V2) used on some older connected TVs and receivers (manufactured between 2008 and 2014). Freesat Freetime equipment using the HbbTV version was unaffected. The BBC recommended that consumers should purchase replacement receivers to continue to receive iPlayer. In October 2016 STV Player was made available to Freesat viewers; the online live streaming and catch-up service operated by STV shows ITV programmes and content from its own archive. All 4 (formerly 4oD) was added to the Humax Freesat service soon after its launch online; however, due to fee increases at the start of 2018, Channel 4 withdrew the All 4 on-demand service from all boxes that carry the Freesat Freetime service from 22 February 2018, as well as removing Channel 4 HD from all HD-capable Freesat devices. In September 2018, the catch-up service UKTV Play joined the Freesat on-demand services, offering catch-up programming from UKTV's channels including Dave, Yesterday, Really and Drama. In March 2019, Channel 5's Demand 5 app, providing catch-up for Channel 5, 5 STAR, 5 USA, 5 SELECT, 5 Spike and Paramount Network, was replaced on Freesat by My5, with a change in user interface, a selection of programmes from the partner channels BLAZE, BET, PBS America, Real Stories, Spark and Together, and some programmes shown online exclusively or ahead of broadcast. Reception equipment Receivers At the launch of the service, there were two types of Freesat receivers available: standard definition-only receivers and high definition-capable receivers.
As of July 2010 there were eleven companies licensed to produce Freesat boxes and televisions. Humax launched a Freesat recorder, Freesat+, which became available to the public in November 2008. On 17 October 2012, Humax released the first Freetime receiver, the Humax HDR-1000S. In September 2018, Freesat announced that Arris International would produce Freesat's 3rd-generation set-top box. The new boxes, which support 4K, were eventually released in February 2020 by CommScope, who purchased Arris International in 2019. Televisions Following the initial launch, Panasonic introduced three plasma televisions with integrated HD Freesat receivers. At the end of October 2008, Panasonic brought out two more sizes, 32" and 37". In April 2009 LG launched four LCD TVs with built-in Freesat receivers. The LG series is the LF7700 (discontinued mid-2010), with screen sizes of 32", 37", 42" and 47". Sony have released two televisions with Freesat receivers, the W5810 and Z5800 series, available in sizes from 32" up to 52" and in 100 Hz and 200 Hz alternatives. Satellite dish The service makes use of the same group of Astra satellites at 28.2°E as the Sky subscription satellite service. This means that a satellite dish which is positioned to receive these services will be capable of receiving Freesat, with the addition of a suitable receiver (or television with a built-in receiver). Provided that the LNB has sufficient outputs, a single dish may be used to receive multiple services (i.e. Sky and Freesat). However, a dish with an LNB for the Sky Q service cannot be used with a Freesat receiver because Sky Q uses a wideband LNB which is incompatible with Freesat receivers. A "Hybrid" Sky Q LNB can be used as it has both wideband outputs for Sky Q and conventional universal LNB outputs for non-Sky Q receivers, including Freesat. In 2020 a new generation of Freesat boxes launched; these work with regular LNBs and also with Sky Q LNBs. For users who do not currently have a satellite dish, Freesat offers an installation service which is made available through retailers and which is advertised in a leaflet included with Freesat receivers. A suitable dish may also be installed by the user or a non-Freesat-affiliated installer. Ireland While Freesat systems work in Ireland, officially branded Freesat receivers are not widely or officially marketed in the country. On 23 October 2008, several Irish retail chains, including Maplin, Tesco Ireland and PowerCity, began offering modified Freesat systems from Grundig and Alba that had their Freesat branding replaced with a Sat4free brand name. The requirement to enter a UK postcode was removed; the systems were instead set to use the version of the electronic programme guide appropriate for Northern Ireland through a hard-coded Belfast postcode. Sat4free was permanently closed down shortly afterwards. Outside the UK and Ireland Although not intended for reception outside of the UK, it is possible to receive Freesat outside the UK and Ireland, but a larger dish is required as the UK beams of Astra 2E, Astra 2F and Astra 2G (which carry the majority of channels from the BBC, ITV, Channel 4 and Channel 5) have footprints tightly focused on the UK and Ireland. Also, Freesat receivers, without which the full Freesat EPG is not accessible, are not widely or easily available outside the UK. Technical details Freesat broadcasts from the same fleet of satellites (Astra 28.2°E) as Sky. Channels are broadcast using DVB-S.
Freesat's role is not broadcasting or availability of channels (although the BBC and ITV are substantial broadcasters in their own right) but instead providing a platform for receiving the channels and the EPG. All of the standard definition channels on Freesat are broadcast using DVB-S; ITV HD, NHK World HD and RT HD also use DVB-S. BBC One HD and BBC HD used DVB-S until 6 June 2011 when the satellite transponder carrying them was upgraded to DVB-S2. Channel 4 HD had launched using DVB-S2 but the transponder was downgraded to DVB-S on 28 March 2012. Standard definition channels are broadcast using MPEG-2, while high definition channels are broadcast using MPEG-4. Interactive television is done using MHEG-5 rather than the proprietary OpenTV platform used by Sky. Channel 4 HD was unavailable on Freesat from 22 February 2018, leaving Channel 4 available only in SD, as it had been before 19 April 2011, until the HD channel returned on 8 December 2021. Since the channels are broadcast in the clear, they can also be received by non-Freesat receivers, including Sky Digiboxes. The specification for Freesat boxes includes having an Ethernet port. This is to allow on-demand programming from services such as BBC iPlayer or ITV Hub to be viewed directly on the customer's television. Open standards and technologies form the basis of Freesat's second-generation Freetime receivers, including those from the Open IPTV Forum (OIPF), the Hybrid Broadcast Broadband TV (HbbTV) project and HTML5 browser technology, with the majority of the Freetime user interface built using the latter. The Freetime spec also includes features such as: DiSEqC 1.2 support; MoCA (Multimedia over Coax Alliance) support including single cable routing; HTML, JavaScript and CSS internet technologies for broadband-delivered interactive services; DRM for online content; and payment mechanisms for broadband services like LoveFilm. James Strickland, Freesat's director of product and technology development, explained that Freetime is a hybrid between HbbTV and MHEG-5. Adoption The table below shows the quarterly and cumulative sales of Freesat equipment in the first years of the service. In the Q4 2009 report, Ofcom reported that Freesat had announced the 1 million mark had been hit by the end of February 2010. Note: Ofcom stopped reporting cumulative Freesat penetration with the Q3 2010 report, and stopped quarterly reports at the end of 2010. Indications are that Freesat penetration largely plateaued in late 2010. In May 2017, Freesat announced that 904,000 devices capable of receiving the platform were sold in 2016, with the installed base increasing by 58,000 year-on-year and passing 2 million during the course of the year. See also Freesat from Sky Freeview Saorsat – a free-to-air satellite service for Ireland Tivùsat – a free-to-air satellite service for Italy List of free-to-air channels at 28°E Astra 2E, Astra 2F, Astra 2G – satellites carrying Freesat (with channel lists) References External links BBC News: Q&A on Freesat List of Freesat (UK) Channels Radio and Telly: Freesat guide Freesat Section at Astra 2 2007 establishments in the United Kingdom Digital television in the United Kingdom Media and communications in the City of Westminster Organisations based in the City of Westminster
27471338
https://en.wikipedia.org/wiki/Object-oriented%20programming
Object-oriented programming
Object-oriented programming (OOP) is a programming paradigm based on the concept of "objects", which can contain data and code: data in the form of fields (often known as attributes or properties), and code in the form of procedures (often known as methods). A feature of objects is that an object's own procedures can access and often modify the data fields of itself (objects have a notion of this or self). In OOP, computer programs are designed by making them out of objects that interact with one another. OOP languages are diverse, but the most popular ones are class-based, meaning that objects are instances of classes, which also determine their types. Many of the most widely used programming languages (such as C++, Java, Python, etc.) are multi-paradigm and they support object-oriented programming to a greater or lesser degree, typically in combination with imperative, procedural programming. Significant object-oriented languages include: Java, C++, C#, Python, R, PHP, Visual Basic.NET, JavaScript, Ruby, Perl, SIMSCRIPT, Object Pascal, Objective-C, Dart, Swift, Scala, Kotlin, Common Lisp, MATLAB, and Smalltalk. History Terminology invoking "objects" and "oriented" in the modern sense of object-oriented programming made its first appearance at MIT in the late 1950s and early 1960s. In the environment of the artificial intelligence group, as early as 1960, "object" could refer to identified items (LISP atoms) with properties (attributes); Alan Kay later cited a detailed understanding of LISP internals as a strong influence on his thinking in 1966. Another early MIT example was Sketchpad created by Ivan Sutherland in 1960–1961; in the glossary of the 1963 technical report based on his dissertation about Sketchpad, Sutherland defined notions of "object" and "instance" (with the class concept covered by "master" or "definition"), albeit specialized to graphical interaction. Also, an MIT ALGOL version, AED-0, established a direct link between data structures ("plexes", in that dialect) and procedures, prefiguring what were later termed "messages", "methods", and "member functions". Simula introduced important concepts that are today an essential part of object-oriented programming, such as class and object, inheritance, and dynamic binding. The object-oriented Simula programming language was used mainly by researchers involved with physical modelling, such as models to study and improve the movement of ships and their content through cargo ports. In the 1970s, the first version of the Smalltalk programming language was developed at Xerox PARC by Alan Kay, Dan Ingalls and Adele Goldberg. Smalltalk-72 included a programming environment and was dynamically typed, and at first was interpreted, not compiled. Smalltalk became noted for its application of object orientation at the language level and its graphical development environment. Smalltalk went through various versions and interest in the language grew. While Smalltalk was influenced by the ideas introduced in Simula 67 it was designed to be a fully dynamic system in which classes could be created and modified dynamically. In the 1970s, Smalltalk influenced the Lisp community to incorporate object-based techniques that were introduced to developers via the Lisp machine.
Experimentation with various extensions to Lisp (such as LOOPS and Flavors introducing multiple inheritance and mixins) eventually led to the Common Lisp Object System, which integrates functional programming and object-oriented programming and allows extension via a Meta-object protocol. In the 1980s, there were a few attempts to design processor architectures that included hardware support for objects in memory but these were not successful. Examples include the Intel iAPX 432 and the Linn Smart Rekursiv. In 1981, Goldberg edited the August issue of Byte Magazine, introducing Smalltalk and object-oriented programming to a wider audience. In 1986, the Association for Computing Machinery organised the first Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), which was unexpectedly attended by 1,000 people. In the mid-1980s Objective-C was developed by Brad Cox, who had used Smalltalk at ITT Inc.; Bjarne Stroustrup, who had used Simula for his PhD thesis, went on to create the object-oriented C++. In 1985, Bertrand Meyer also produced the first design of the Eiffel language. Focused on software quality, Eiffel is a purely object-oriented programming language and a notation supporting the entire software lifecycle. Meyer described the Eiffel software development method, based on a small number of key ideas from software engineering and computer science, in Object-Oriented Software Construction. Essential to the quality focus of Eiffel is Meyer's reliability mechanism, Design by Contract, which is an integral part of both the method and language. In the early and mid-1990s object-oriented programming developed as the dominant programming paradigm when programming languages supporting the techniques became widely available. These included Visual FoxPro 3.0, C++, and Delphi. Its dominance was further enhanced by the rising popularity of graphical user interfaces, which rely heavily upon object-oriented programming techniques. An example of a closely related dynamic GUI library and OOP language can be found in the Cocoa frameworks on Mac OS X, written in Objective-C, an object-oriented, dynamic messaging extension to C based on Smalltalk. OOP toolkits also enhanced the popularity of event-driven programming (although this concept is not limited to OOP). At ETH Zürich, Niklaus Wirth and his colleagues had also been investigating such topics as data abstraction and modular programming (although this had been in common use in the 1960s or earlier). Modula-2 (1978) included both, and their succeeding design, Oberon, included a distinctive approach to object orientation, classes, and such. Object-oriented features have been added to many previously existing languages, including Ada, BASIC, Fortran, Pascal, and COBOL. Adding these features to languages that were not initially designed for them often led to problems with compatibility and maintainability of code. More recently, a number of languages have emerged that are primarily object-oriented, but that are also compatible with procedural methodology. Two such languages are Python and Ruby. Probably the most commercially important recent object-oriented languages are Java, developed by Sun Microsystems, as well as C# and Visual Basic.NET (VB.NET), both designed for Microsoft's .NET platform. Each of these two frameworks shows, in its own way, the benefit of using OOP by creating an abstraction from implementation.
VB.NET and C# support cross-language inheritance, allowing classes defined in one language to subclass classes defined in the other language. Features Object-oriented programming uses objects, but not all of the associated techniques and structures are supported directly in languages that claim to support OOP. The features listed below are common among languages considered to be strongly class- and object-oriented (or multi-paradigm with OOP support), with notable exceptions mentioned. Shared with non-OOP languages Variables that can store information formatted in a small number of built-in data types like integers and alphanumeric characters. This may include data structures like strings, lists, and hash tables that are either built-in or result from combining variables using memory pointers. Procedures – also known as functions, methods, routines, or subroutines – that take input, generate output, and manipulate data. Modern languages include structured programming constructs like loops and conditionals. Modular programming support provides the ability to group procedures into files and modules for organizational purposes. Modules are namespaced so identifiers in one module will not conflict with a procedure or variable sharing the same name in another file or module. Objects and classes Languages that support object-oriented programming (OOP) typically use inheritance for code reuse and extensibility in the form of either classes or prototypes. Those that use classes support two main concepts: Classes – the definitions for the data format and available procedures for a given type or class of object; may also contain data and procedures (known as class methods) themselves, i.e. classes contain the data members and member functions Objects – instances of classes Objects sometimes correspond to things found in the real world. For example, a graphics program may have objects such as "circle", "square", "menu". An online shopping system might have objects such as "shopping cart", "customer", and "product". Sometimes objects represent more abstract entities, like an object that represents an open file, or an object that provides the service of translating measurements from U.S. customary to metric. Each object is said to be an instance of a particular class (for example, an object with its name field set to "Mary" might be an instance of class Employee). Procedures in object-oriented programming are known as methods; variables are also known as fields, members, attributes, or properties. This leads to the following terms: Class variables – belong to the class as a whole; there is only one copy of each one Instance variables or attributes – data that belongs to individual objects; every object has its own copy of each one Member variables – refers to both the class and instance variables that are defined by a particular class Class methods – belong to the class as a whole and have access to only class variables and inputs from the procedure call Instance methods – belong to individual objects, and have access to instance variables for the specific object they are called on, inputs, and class variables Objects are accessed somewhat like variables with complex internal structure, and in many languages are effectively pointers, serving as actual references to a single instance of said object in memory within a heap or stack. They provide a layer of abstraction which can be used to separate internal from external code.
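As a minimal illustration of this terminology, here is a sketch in Python; the Employee example follows the prose above, and the field and method names are illustrative rather than taken from any particular source:

class Employee:
    # Class variable: belongs to the class as a whole; only one copy exists.
    headcount = 0

    def __init__(self, name, position):
        # Instance variables: every Employee object has its own copy of these.
        self.name = name
        self.position = position
        Employee.headcount += 1

    # Instance method: operates on the data of the specific object it is called on.
    def describe(self):
        return f"{self.name} works as {self.position}"

    # Class method: belongs to the class as a whole and sees only class-level state.
    @classmethod
    def total_employees(cls):
        return cls.headcount

mary = Employee("Mary", "engineer")   # 'mary' is an instance (object) of class Employee
print(mary.describe())                # Mary works as engineer
print(Employee.total_employees())     # 1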
External code can use an object by calling a specific instance method with a certain set of input parameters, read an instance variable, or write to an instance variable. Objects are created by calling a special type of method in the class known as a constructor. A program may create many instances of the same class as it runs, which operate independently. This is an easy way for the same procedures to be used on different sets of data. Object-oriented programming that uses classes is sometimes called class-based programming, while prototype-based programming does not typically use classes. As a result, significantly different yet analogous terminology is used to define the concepts of object and instance. In some languages classes and objects can be composed using other concepts like traits and mixins. Class-based vs prototype-based In class-based languages the classes are defined beforehand and the objects are instantiated based on the classes. If two objects apple and orange are instantiated from the class Fruit, they are inherently fruits and it is guaranteed that you may handle them in the same way; e.g. a programmer can expect the existence of the same attributes such as color or sugar_content or is_ripe. In prototype-based languages the objects are the primary entities. No classes even exist. The prototype of an object is just another object to which the object is linked. Every object has one prototype link (and only one). New objects can be created based on already existing objects chosen as their prototype. You may call two different objects apple and orange a fruit, if the object fruit exists, and both apple and orange have fruit as their prototype. The idea of the fruit class doesn't exist explicitly, but as the equivalence class of the objects sharing the same prototype. The attributes and methods of the prototype are delegated to all the objects of the equivalence class defined by this prototype. The attributes and methods owned individually by the object may not be shared by other objects of the same equivalence class; e.g. the attribute sugar_content may be unexpectedly not present in apple. Only single inheritance can be implemented through the prototype. Dynamic dispatch/message passing It is the responsibility of the object, not any external code, to select the procedural code to execute in response to a method call, typically by looking up the method at run time in a table associated with the object. This feature is known as dynamic dispatch. If the call variability relies on more than the single type of the object on which it is called (i.e. at least one other parameter object is involved in the method choice), one speaks of multiple dispatch. A method call is also known as message passing. It is conceptualized as a message (the name of the method and its input parameters) being passed to the object for dispatch. Encapsulation Encapsulation is a design pattern in which data are visible only to semantically related functions, so as to prevent misuse. The success of data encapsulation leads to frequent incorporation of data hiding as a design principle in object oriented and pure functional programming. If a class does not allow calling code to access internal object data and permits access through methods only, this is a strong form of abstraction or information hiding known as encapsulation. 
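The prototype-based style and the run-time method lookup described above can be sketched without classes at all, using plain dictionaries that delegate lookups to a prototype object. The following is a minimal, illustrative Python sketch; the fruit/apple/orange names follow the prose above and nothing here is tied to any particular prototype-based language:

def make_object(prototype=None, **slots):
    """Create a prototype-style object: a dict of slots plus a link to its prototype."""
    return {"__proto__": prototype, **slots}

def lookup(obj, name):
    """Dynamic dispatch: walk the prototype chain at run time to find a slot."""
    while obj is not None:
        if name in obj:
            return obj[name]
        obj = obj["__proto__"]
    raise AttributeError(name)

def send(obj, name, *args):
    """Message passing: deliver 'name' to the object and run whatever it selects."""
    return lookup(obj, name)(obj, *args)

# The 'fruit' prototype plays the role the Fruit class would play in a class-based language.
fruit = make_object(describe=lambda self: f"a fruit with {self.get('sugar_content', '?')}% sugar")
apple = make_object(prototype=fruit, sugar_content=10)
orange = make_object(prototype=fruit, sugar_content=9)

print(send(apple, "describe"))   # a fruit with 10% sugar
print(send(orange, "describe"))  # a fruit with 9% sugar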
Some languages (Java, for example) let classes enforce access restrictions explicitly, for example denoting internal data with the private keyword and designating methods intended for use by code outside the class with the public keyword. Methods may also be designated public, private, or an intermediate level such as protected (which allows access from the same class and its subclasses, but not from objects of a different class). In other languages (like Python) this is enforced only by convention (for example, private methods may have names that start with an underscore). Encapsulation prevents external code from being concerned with the internal workings of an object. This facilitates code refactoring, for example allowing the author of the class to change how objects of that class represent their data internally without changing any external code (as long as "public" method calls work the same way). It also encourages programmers to put all the code that is concerned with a certain set of data in the same class, which organizes it for easy comprehension by other programmers. Encapsulation is a technique that encourages decoupling. Composition, inheritance, and delegation Objects can contain other objects in their instance variables; this is known as object composition. For example, an object in the Employee class might contain (either directly or through a pointer) an object in the Address class, in addition to its own instance variables like "first_name" and "position". Object composition is used to represent "has-a" relationships: every employee has an address, so every Employee object has access to a place to store an Address object (either directly embedded within itself, or at a separate location addressed via a pointer). Languages that support classes almost always support inheritance. This allows classes to be arranged in a hierarchy that represents "is-a-type-of" relationships. For example, class Employee might inherit from class Person. All the data and methods available to the parent class also appear in the child class with the same names. For example, class Person might define variables "first_name" and "last_name" with method "make_full_name()". These will also be available in class Employee, which might add the variables "position" and "salary". This technique allows easy re-use of the same procedures and data definitions, in addition to potentially mirroring real-world relationships in an intuitive way. Rather than utilizing database tables and programming subroutines, the developer utilizes objects the user may be more familiar with: objects from their application domain. Subclasses can override the methods defined by superclasses. Multiple inheritance is allowed in some languages, though this can make resolving overrides complicated. Some languages have special support for mixins, though in any language with multiple inheritance, a mixin is simply a class that does not represent an is-a-type-of relationship. Mixins are typically used to add the same methods to multiple classes. For example, class UnicodeConversionMixin might provide a method unicode_to_ascii() when included in class FileReader and class WebPageScraper, which don't share a common parent. Abstract classes cannot be instantiated into objects; they exist only for the purpose of inheritance into other "concrete" classes that can be instantiated. In Java, the final keyword can be used to prevent a class from being subclassed.
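A short Python sketch of composition, inheritance and a mixin together, reusing the Person, Employee, Address and UnicodeConversionMixin names from the prose above (illustrative only; the constructor arguments are assumptions):

class Address:
    def __init__(self, city):
        self.city = city

class Person:
    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

    def make_full_name(self):
        return f"{self.first_name} {self.last_name}"

class UnicodeConversionMixin:
    # A mixin adds the same method to otherwise unrelated classes; it is not an "is-a" parent.
    def unicode_to_ascii(self, text):
        return text.encode("ascii", errors="ignore").decode("ascii")

class Employee(Person, UnicodeConversionMixin):
    def __init__(self, first_name, last_name, position, address):
        super().__init__(first_name, last_name)   # inheritance: Employee is-a Person
        self.position = position
        self.address = address                    # composition: Employee has-an Address

e = Employee("Mary", "Müller", "engineer", Address("Berlin"))
print(e.make_full_name())            # inherited from Person: "Mary Müller"
print(e.address.city)                # reached through the composed Address object: "Berlin"
print(e.unicode_to_ascii("Müller"))  # mixin method: "Mller"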
The doctrine of composition over inheritance advocates implementing has-a relationships using composition instead of inheritance. For example, instead of inheriting from class Person, class Employee could give each Employee object an internal Person object, which it then has the opportunity to hide from external code even if class Person has many public attributes or methods. Some languages, like Go do not support inheritance at all. The "open/closed principle" advocates that classes and functions "should be open for extension, but closed for modification". Delegation is another language feature that can be used as an alternative to inheritance. Polymorphism Subtyping – a form of polymorphism – is when calling code can be agnostic as to which class in the supported hierarchy it is operating on – the parent class or one of its descendants. Meanwhile, the same operation name among objects in an inheritance hierarchy may behave differently. For example, objects of type Circle and Square are derived from a common class called Shape. The Draw function for each type of Shape implements what is necessary to draw itself while calling code can remain indifferent to the particular type of Shape being drawn. This is another type of abstraction that simplifies code external to the class hierarchy and enables strong separation of concerns. Open recursion In languages that support open recursion, object methods can call other methods on the same object (including themselves), typically using a special variable or keyword called this or self. This variable is late-bound; it allows a method defined in one class to invoke another method that is defined later, in some subclass thereof. OOP languages Simula (1967) is generally accepted as being the first language with the primary features of an object-oriented language. It was created for making simulation programs, in which what came to be called objects were the most important information representation. Smalltalk (1972 to 1980) is another early example, and the one with which much of the theory of OOP was developed. Concerning the degree of object orientation, the following distinctions can be made: Languages called "pure" OO languages, because everything in them is treated consistently as an object, from primitives such as characters and punctuation, all the way up to whole classes, prototypes, blocks, modules, etc. They were designed specifically to facilitate, even enforce, OO methods. Examples: Ruby, Scala, Smalltalk, Eiffel, Emerald, JADE, Self, Raku. Languages designed mainly for OO programming, but with some procedural elements. Examples: Java, Python, C++, C#, Delphi/Object Pascal, VB.NET. Languages that are historically procedural languages, but have been extended with some OO features. Examples: PHP, Perl, Visual Basic (derived from BASIC), MATLAB, COBOL 2002, Fortran 2003, ABAP, Ada 95, Pascal. Languages with most of the features of objects (classes, methods, inheritance), but in a distinctly original form. Examples: Oberon (Oberon-1 or Oberon-2). Languages with abstract data type support which may be used to resemble OO programming, but without all features of object-orientation. This includes object-based and prototype-based languages. Examples: JavaScript, Lua, Modula-2, CLU. Chameleon languages that support multiple paradigms, including OO. Tcl stands out among these for TclOO, a hybrid object system that supports both prototype-based programming and class-based OO. 
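Returning to the subtype polymorphism and open recursion described above, the following brief Python sketch uses the Shape, Circle and Square names from the prose (area() and describe() stand in for the Draw function mentioned there; it is illustrative and not specific to any of the languages listed):

import math

class Shape:
    def area(self):
        raise NotImplementedError

    def describe(self):
        # Open recursion: the call through 'self' is late-bound, so it reaches
        # whichever area() the actual subclass defines, even though describe()
        # is written here in the parent class.
        return f"{type(self).__name__} with area {self.area():.2f}"

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return math.pi * self.radius ** 2

class Square(Shape):
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

# Subtype polymorphism: the calling code is agnostic about which Shape it holds.
for shape in (Circle(1.0), Square(2.0)):
    print(shape.describe())   # Circle with area 3.14 / Square with area 4.00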
OOP in dynamic languages In recent years, object-oriented programming has become especially popular in dynamic programming languages. Python, PowerShell, Ruby and Groovy are dynamic languages built on OOP principles, while Perl and PHP have been adding object-oriented features since Perl 5 and PHP 4, and ColdFusion since version 6. The Document Object Model of HTML, XHTML, and XML documents on the Internet has bindings to the popular JavaScript/ECMAScript language. JavaScript is perhaps the best-known prototype-based programming language, which employs cloning from prototypes rather than inheriting from a class (in contrast to class-based programming). Another scripting language that takes this approach is Lua. OOP in a network protocol The messages that flow between computers to request services in a client-server environment can be designed as the linearizations of objects defined by class objects known to both the client and the server. For example, a simple linearized object would consist of a length field, a code point identifying the class, and a data value. A more complex example would be a command consisting of the length and code point of the command and values consisting of linearized objects representing the command's parameters. Each such command must be directed by the server to an object whose class (or superclass) recognizes the command and is able to provide the requested service. Clients and servers are best modeled as complex object-oriented structures. Distributed Data Management Architecture (DDM) took this approach and used class objects to define objects at four levels of a formal hierarchy: Fields defining the data values that form messages, such as their length, code point and data values. Objects and collections of objects similar to what would be found in a Smalltalk program for messages and parameters. Managers similar to IBM i Objects, such as a directory to files and files consisting of metadata and records. Managers conceptually provide memory and processing resources for their contained objects. A client or server consisting of all the managers necessary to implement a full processing environment, supporting such aspects as directory services, security and concurrency control. The initial version of DDM defined distributed file services. It was later extended to be the foundation of Distributed Relational Database Architecture (DRDA). Design patterns Challenges of object-oriented design are addressed by several approaches. The most common of these is the set of design patterns codified by Gamma et al. More broadly, the term "design patterns" can be used to refer to any general, repeatable solution pattern to a commonly occurring problem in software design. Some of these commonly occurring problems have implications and solutions particular to object-oriented development. Inheritance and behavioral subtyping It is intuitive to assume that inheritance creates a semantic "is a" relationship, and thus to infer that objects instantiated from subclasses can always be safely used instead of those instantiated from the superclass. This intuition is unfortunately false in most OOP languages, in particular in all those that allow mutable objects. Subtype polymorphism as enforced by the type checker in OOP languages (with mutable objects) cannot guarantee behavioral subtyping in any context. Behavioral subtyping is undecidable in general, so it cannot be implemented by a program (compiler).
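The gap between type-checked subtyping and behavioral subtyping is often illustrated with a mutable rectangle/square pair, a variant of the circle-ellipse problem discussed later. The Python sketch below is illustrative only and not drawn from any cited source:

class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def set_width(self, width):
        self.width = width

    def set_height(self, height):
        self.height = height

    def area(self):
        return self.width * self.height

class Square(Rectangle):
    # Type-checks as a Rectangle, but changes behaviour: both sides move together.
    def set_width(self, width):
        self.width = self.height = width

    def set_height(self, height):
        self.width = self.height = height

def stretch(rect):
    # Written against Rectangle's contract: only the width should change.
    rect.set_width(5)
    return rect.area()

print(stretch(Rectangle(2, 3)))  # 15, as the caller expects
print(stretch(Square(2, 2)))     # 25, silently violating the caller's assumption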
Class or object hierarchies must be carefully designed, considering possible incorrect uses that cannot be detected syntactically. This issue is known as the Liskov substitution principle. Gang of Four design patterns Design Patterns: Elements of Reusable Object-Oriented Software is an influential book published in 1994 by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, often referred to humorously as the "Gang of Four". Along with exploring the capabilities and pitfalls of object-oriented programming, it describes 23 common programming problems and patterns for solving them. As of April 2007, the book was in its 36th printing. The book describes the following patterns: Creational patterns (5): Factory method pattern, Abstract factory pattern, Singleton pattern, Builder pattern, Prototype pattern Structural patterns (7): Adapter pattern, Bridge pattern, Composite pattern, Decorator pattern, Facade pattern, Flyweight pattern, Proxy pattern Behavioral patterns (11): Chain-of-responsibility pattern, Command pattern, Interpreter pattern, Iterator pattern, Mediator pattern, Memento pattern, Observer pattern, State pattern, Strategy pattern, Template method pattern, Visitor pattern Object-orientation and databases Both object-oriented programming and relational database management systems (RDBMSs) are extremely common in software . Since relational databases don't store objects directly (though some RDBMSs have object-oriented features to approximate this), there is a general need to bridge the two worlds. The problem of bridging object-oriented programming accesses and data patterns with relational databases is known as object-relational impedance mismatch. There are a number of approaches to cope with this problem, but no general solution without downsides. One of the most common approaches is object-relational mapping, as found in IDE languages such as Visual FoxPro and libraries such as Java Data Objects and Ruby on Rails' ActiveRecord. There are also object databases that can be used to replace RDBMSs, but these have not been as technically and commercially successful as RDBMSs. Real-world modeling and relationships OOP can be used to associate real-world objects and processes with digital counterparts. However, not everyone agrees that OOP facilitates direct real-world mapping (see Criticism section) or that real-world mapping is even a worthy goal; Bertrand Meyer argues in Object-Oriented Software Construction that a program is not a model of the world but a model of some part of the world; "Reality is a cousin twice removed". At the same time, some principal limitations of OOP have been noted. For example, the circle-ellipse problem is difficult to handle using OOP's concept of inheritance. However, Niklaus Wirth (who popularized the adage now known as Wirth's law: "Software is getting slower more rapidly than hardware becomes faster") said of OOP in his paper, "Good Ideas through the Looking Glass", "This paradigm closely reflects the structure of systems 'in the real world', and it is therefore well suited to model complex systems with complex behaviours" (contrast KISS principle). Steve Yegge and others noted that natural languages lack the OOP approach of strictly prioritizing things (objects/nouns) before actions (methods/verbs). This problem may cause OOP to suffer more convoluted solutions than procedural programming. OOP and control flow OOP was developed to increase the reusability and maintainability of source code. 
Transparent representation of the control flow had no priority and was meant to be handled by a compiler. With the increasing relevance of parallel hardware and multithreaded coding, developing transparent control flow becomes more important, something hard to achieve with OOP. Responsibility- vs. data-driven design Responsibility-driven design defines classes in terms of a contract, that is, a class should be defined around a responsibility and the information that it shares. This is contrasted by Wirfs-Brock and Wilkerson with data-driven design, where classes are defined around the data structures that must be held. The authors hold that responsibility-driven design is preferable. SOLID and GRASP guidelines SOLID is a mnemonic invented by Michael Feathers that spells out five software engineering design principles: Single responsibility principle Open/closed principle Liskov substitution principle Interface segregation principle Dependency inversion principle GRASP (General Responsibility Assignment Software Patterns) is another set of guidelines advocated by Craig Larman. Criticism The OOP paradigm has been criticised for a number of reasons, including not meeting its stated goals of reusability and modularity, and for overemphasizing one aspect of software design and modeling (data/objects) at the expense of other important aspects (computation/algorithms). Luca Cardelli has claimed that OOP code is "intrinsically less efficient" than procedural code, that OOP can take longer to compile, and that OOP languages have "extremely poor modularity properties with respect to class extension and modification", and tend to be extremely complex. The latter point is reiterated by Joe Armstrong, the principal inventor of Erlang. A study by Potok et al. has shown no significant difference in productivity between OOP and procedural approaches. Christopher J. Date stated that critical comparison of OOP to other technologies, relational in particular, is difficult because of lack of an agreed-upon and rigorous definition of OOP; however, Date and Darwen have proposed a theoretical foundation for OOP that uses OOP as a kind of customizable type system to support RDBMS. In an article, Lawrence Krubner claimed that compared to other languages (LISP dialects, functional languages, etc.) OOP languages have no unique strengths, and inflict a heavy burden of unneeded complexity. Alexander Stepanov has compared object orientation unfavourably to generic programming. Paul Graham has suggested that OOP's popularity within large companies is due to "large (and frequently changing) groups of mediocre programmers". According to Graham, the discipline imposed by OOP prevents any one programmer from "doing too much damage". Leo Brodie has suggested a connection between the standalone nature of objects and a tendency to duplicate code in violation of the don't repeat yourself principle of software development. Steve Yegge has likewise contrasted OOP unfavourably with functional programming. Rich Hickey, creator of Clojure, described object systems as overly simplistic models of the real world. He emphasized the inability of OOP to model time properly, which is getting increasingly problematic as software systems become more concurrent. Eric S.
Raymond, a Unix programmer and open-source software advocate, has been critical of claims that present object-oriented programming as the "One True Solution", and has written that object-oriented programming languages tend to encourage thickly layered programs that destroy transparency. Raymond compares this unfavourably to the approach taken with Unix and the C programming language. Rob Pike, a programmer involved in the creation of UTF-8 and Go, has called object-oriented programming "the Roman numerals of computing" and has said that OOP languages frequently shift the focus from data structures and algorithms to types. Furthermore, he cites an instance of a Java professor whose "idiomatic" solution to a problem was to create six new classes, rather than to simply use a lookup table. Formal semantics Objects are the run-time entities in an object-oriented system. They may represent a person, a place, a bank account, a table of data, or any item that the program has to handle. There have been several attempts at formalizing the concepts used in object-oriented programming. The following concepts and constructs have been used as interpretations of OOP concepts: coalgebraic data types; recursive types; encapsulated state; inheritance; and records, which are a basis for understanding objects if function literals can be stored in fields (as in functional programming languages), although the actual calculi need to be considerably more complex to incorporate essential features of OOP. Several extensions of System F<: that deal with mutable objects have been studied; these allow both subtype polymorphism and parametric polymorphism (generics). Attempts to find a consensus definition or theory behind objects have not proven very successful (however, see Abadi & Cardelli, A Theory of Objects, for formal definitions of many OOP concepts and constructs), and often diverge widely. For example, some definitions focus on mental activities, and some on program structuring. One of the simpler definitions is that OOP is the act of using "map" data structures or arrays that can contain functions and pointers to other maps, all with some syntactic and scoping sugar on top. Inheritance can be performed by cloning the maps (sometimes called "prototyping"). See also Comparison of programming languages (object-oriented programming) Comparison of programming paradigms Component-based software engineering Design by contract Object association Object database Object model reference Object modeling language Object-oriented analysis and design Object-relational impedance mismatch (and The Third Manifesto) Object-relational mapping Systems CADES Common Object Request Broker Architecture (CORBA) Distributed Component Object Model Distributed Data Management Architecture Jeroo Modeling languages IDEF4 Interface description language Lepus3 UML References Further reading External links Introduction to Object Oriented Programming Concepts (OOP) and More by L.W.C. Nirosh Discussion on Cons of OOP OOP Concepts (Java Tutorials) Programming paradigms Norwegian inventions
16705857
https://en.wikipedia.org/wiki/Surigao%20Education%20Center
Surigao Education Center
Established in 1978, the Surigao Education Center is a private, non-sectarian, non-stock school. Surigao Education Center is located at Km. 2, National Highway, Surigao City, Philippines. History Surigao Education Center is a non-stock, non-profit, private, non-sectarian educational institution incorporated under the laws of the Republic on November 25, 1987. The school was originally known as the Surigao Medical Center School of Midwifery when it opened as the first midwifery school in Surigao in 1978. Ten years later, the school changed its name to Surigao Education Center and began to offer new courses as part of the school development plan. The one-year Health Aide course and Pre-School Education were initially offered. The Nursing Program, together with other new courses, was initiated on October 23, 1991. In the 1993–1994 school year, Surigao Education Center opened its Bachelor of Science in Nursing program, Bachelor of Science in Marine Transportation, Basic Seaman Course (now called Able Seafarer Deck & Able Seafarer Engine) and Associate in Radiologic Technology, now offered as Bachelor of Science in Radiologic Technology. Then, in the 1996–1997 school year, the school pioneered again in opening courses in Engineering leading to the degrees of Bachelor of Science in Mechanical and Electrical Engineering, Marine Engineering and Architecture. Other course offerings followed, such as Bachelor of Science in Accountancy and Business Administration, Bachelor of Science in Computer Science, and Bachelor of Science in Information Technology. At present, Surigao Education Center has expanded its course offerings to include Bachelor in Elementary Education, Bachelor in Secondary Education, Bachelor of Science in Hotel and Restaurant Management, Bachelor of Science in Civil Engineering, Bachelor of Science in Electronics and Communications Engineering, and Bachelor of Science in Midwifery. Responding to global realities and the demands of the labor market, the school implemented the Ladderized Education System. The school's eight-hectare campus includes eight buildings and a planned college auditorium. Located separately from the main campus for training purposes are a 25-meter swimming pool for maritime cadets and physical education students and a mini hotel for the Bachelor of Science in Hotel and Restaurant Management students. A hospital is also available for Allied Medical Sciences students' clinical experience.
Academic programs
Undergraduate programs
College of Maritime Education: Bachelor of Science in Marine Transportation
College of Allied Medical Sciences: Bachelor of Science in Nursing; Bachelor of Science in Radiologic Technology; Bachelor of Science in Midwifery
College of Engineering and Architecture: Bachelor of Science in Civil Engineering; Bachelor of Science in Mechanical Engineering; Bachelor of Science in Electrical Engineering; Bachelor of Science in Electronics and Communication Engineering; Bachelor of Science in Architecture
College of Business Education: Bachelor of Science in Accountancy; Bachelor of Science in Accounting Technology; Bachelor of Science in Business Administration (majors in Marketing Management, Human Resources Management and Financial Management); Bachelor of Science in Hotel and Restaurant Management
College of Information Technology: Bachelor of Science in Information Technology
College of Teachers Education: Bachelor of Science in Secondary Education; Bachelor of Science in Elementary Education
High School Department
Junior High School
Senior High School: Pre-Baccalaureate Maritime; Academic Track (Accountancy, Business and Management (ABM), Humanities and Social Science (HUMSS), Science, Technology, Engineering and Mathematics (STEM), and General Academic (GA) strands); Technical-Vocational Livelihood Track (Home Economics Strand: Bread and Pastry Production NC II, Food and Beverage Services NC II, Cookery NC II, Caregiving NC II; Industrial Arts Strand: Carpentry NC II, Plumbing NC II, Electrical Installation and Maintenance NC II, Shielded Metal Arc Welding NC II)
Montessori Department
Grade School Department
Campus The college's eight-hectare campus includes eight buildings and a college auditorium that will soon rise. Separate from the main campus, for training purposes, are a 25-meter swimming pool for maritime cadets and physical education students and a mini-hotel for the hotel and restaurant management students. A hospital is available for allied medical science students' clinical experience. Officers and administration Surigao Education Center is under the general management of the Board of Trustees; the President has the immediate authority in all matters and is responsible for the successful management of the institution. Directly under the President are the Research Planning Officer, the Quality Management Representative, the Vice President for Administration, the Vice President for Academics, the Vice President for Finance, and the External Affairs Officer. The Quality Management Representative reports directly to the President and serves as the management representative responsible for identifying verification requirements, recommending the provision of adequate resources in the quality system for inspection, testing and monitoring of the school's education processes, and auditing the quality system. The Research Planning Officer also reports directly to the President and has no direct supervision over any of the staff; he plans and develops marketing strategies for the institution in coordination with the VP for Academics. The VP for Administration ensures the smooth day-to-day operations of the Administrative Department and all other activities related to administrative functions; he manages the Auxiliary Services Officer and the Head of Plant, Property and Equipment of the school. The VP for Academics is in charge of the overall management of the delivery of the institution's quality education processes.
He oversees the academic deans, department heads and principal of all departments: the Basic Education Department, the College of Maritime Studies, the College of Allied Medical Sciences, the College of Business Education, the College of Information Technology, the College of Teacher Education, and the College of Engineering and Architecture. The Registrar and the Librarian are academic non-teaching staff who report directly to the VP for Academics and serve student needs. The External Affairs Officer manages linkages with local, national and international organizations, foundations and alumni. He is also responsible for planning, organizing and coordinating all programs, services and operations outside the SEC community intended to maintain a cordial relationship between SEC and its graduates through resource-building activities among alumni. Campus life and culture References Universities and colleges in Surigao del Norte Schools in Surigao City
6919166
https://en.wikipedia.org/wiki/.NET%20Micro%20Framework
.NET Micro Framework
The .NET Micro Framework (NETMF) is a .NET Framework platform for resource-constrained devices with at least 512 kB of flash and 256 kB of random-access memory (RAM). It includes a small version of the .NET Common Language Runtime (CLR) and supports development in C# and Visual Basic .NET, with debugging (in an emulator or on hardware) using Microsoft Visual Studio. NETMF features a subset of the .NET base class libraries (about 70 classes with about 420 methods), an implementation of Windows Communication Foundation (WCF), a GUI framework loosely based on Windows Presentation Foundation (WPF), and a Web Services stack based on Simple Object Access Protocol (SOAP) and Web Services Description Language (WSDL). NETMF also features added libraries specific to embedded applications. It is free and open-source software released under the Apache License 2.0. The Micro Framework aims to make embedded development easier, faster, and less costly by giving embedded developers access to the modern technologies and tools used by desktop application developers. It also allows desktop .NET developers to use their skills in embedded systems, enlarging the pool of qualified embedded developers. The Micro Framework is part of the .NET Foundation. Announced at the Build 2014 conference, the foundation was created as an independent forum to foster open development and collaboration around the growing set of open-source technologies for .NET. Features Relative to other .NET platforms, the unique features of the Micro Framework are:
Memory needs of about 300 kB; in contrast, the next smallest .NET implementation, the .NET Compact Framework running on Windows CE, needs about 12 MB
Can run directly on a bare machine with no operating system, or can run on an operating system (OS)
Supports common embedded peripherals and interconnects, including flash memory, EEPROM, GPIO, I²C, Serial Peripheral Interface Bus (SPI), serial port, and USB
Optimized for energy efficiency in battery-powered devices
Needs no memory management unit
Provides multithreading support even when running on single-threaded operating systems
A hardware abstraction layer allows porting to other architectures
A managed device driver model allows drivers for many devices to be written in C#
Execution constraints to catch device lockups and crashes
Transparent support for storing objects in non-volatile memory
Due to the constraints under which it operates, the Micro Framework does have some limits beyond those imposed by its slimmed-down libraries. For example, the platform does not support symmetric multiprocessing, multidimensional arrays, machine-dependent types, or unsafe instructions. The CLR is an interpreter rather than a just-in-time compiler, and uses a simpler mark-and-sweep garbage collector instead of a generational method. An ahead-of-time compiler is being developed using a modified LLVM compiler. Interoperation between managed and native code currently has several limitations. As of 2011, the Micro Framework supported two .NET languages: C# and Visual Basic. Support As of 2013, the .NET Micro Framework was supported on ARM architecture processors (including ARM7, ARM9, and Cortex-M) and had been supported on Analog Devices Blackfin in the past. The Porting Kit is available along with the source code as a free download under the Apache License 2.0 at the Microsoft Download Center.
The Micro Framework has its roots in Microsoft's Smart Personal Objects Technology (SPOT) initiative and was used in MSN Direct products such as smart watches before being made available to third-party developers early in 2007. It is a common platform for Windows SideShow devices and has been adopted in other markets, such as energy management, healthcare, industrial automation, and sensor networks. Microsoft allows developers to create applications using the Micro Framework without charge, and makes a software development kit (SDK) available for free download that can be used with all versions of Visual Studio, including the free Express editions. History In November 2009, Microsoft released the source code of the Micro Framework to the development community as free and open-source software under the Apache License 2.0. In January 2010, Microsoft launched the netmf.com community development site to coordinate ongoing development of the core implementation with the open-source community. On 9 January 2010, GHI Electronics announced FEZ Domino, the first member of the product line called FEZ (Freakin' Easy!), a combination of open-source hardware with a proprietary closed-source version of the .NET Micro Framework. On 3 August 2010, Secret Labs announced the Netduino, the first all-open-source electronics platform using the .NET Micro Framework. In February 2011, Novell posted a preview of the Mono 2.12 C# compiler, the first open-source compiler for the .NET Micro Framework. On 23 January 2017, after numerous attempts to revive the .NET Micro Framework project and bring it to community governance, and a period of work "in the dark", a group of embedded systems developers publicly announced .NET nanoFramework as a spin-off of the .NET Micro Framework. A major rework of the build system, an easier way of adding new targets, a modernized API following UWP, a Visual Studio extension with all the tools required for managing targets, a full development experience from coding to debugging on native code, and support for ARM Cortex-M and ESP32 were the key differences at that time. On 12 October 2018, the first official release of the class libraries and firmware images was announced. On 16 December 2016, GHI Electronics announced its own implementation of the Micro Framework called TinyCLR OS, citing the lack of maintenance of NETMF by Microsoft. On 7 July 2017, GHI announced the 5th preview of TinyCLR OS; on 2 February 2018, the 8th preview; and on 5 April 2018, the 10th preview. Hardware Multiple vendors make chips, development kits, and other hardware that run the Micro Framework. Netduino by Wilderness Labs Netduino is an open-source electronics platform using the Micro Framework. Originally created by Secret Labs, it is now manufactured and maintained by Wilderness Labs Inc. It is based on a 168 MHz Cortex-M4 (STM32F4) with up to 1,408 KB of code storage and 164 KB of RAM, with on-board USB, Ethernet, Wi-Fi, and an SD card slot. The development environment is Microsoft Visual Studio and C#. The board is pin-compatible with Arduino shields, although drivers are required for some shields. GHI Electronics GHI Electronics makes several modules that support the Micro Framework: the EMX Module, the ChipworkX Module, and the USBizi144 and USBizi100 chipsets, whose only difference is the lack of USB host support in the USBizi100. GHI Electronics also makes the .NET FEZ line of very small open-source hardware boards with proprietary firmware, targeted at beginners. They are based on the USBizi chipset and all its features.
The FEZ Domino board offers USB host. Even though FEZ is for beginners, it's also a low-cost starting point for professionals wanting to explore NETMF. Some of these boards are physically compatible with the Arduino. Mountaineer boards Mountaineer boards, part of the Mountaineer Group, make a small range of open-source open-hardware boards that make use of the Micro Framework. Mountaineer have ported the Micro Framework for use on the STM32 family of microcontrollers featured on their Mountaineer boards and elsewhere. STMicroelectronics STMicroelectronics, creators of the microcontroller family STM32, make low-cost discovery boards to showcase the controllers, and provides ports of the Micro Framework to run on them. Netmfdevices Netmfdevices is an open-source electronics platform using FEZHacker and .NET Micro Framework. Micromint The Micromint Bambino 200 is the first multi-core processor SBC compatible with the .NET Gadgeteer framework. The model 200 is powered by an NXP LPC4330, the first dual-core ARM Cortex-M microcontroller. Its Cortex-M4 and Cortex-M0 cores are both capable of 204 MHz. It has 264 KB SRAM onboard and 4 MB of flash.The model 200E has all the same features as the model 200, and increased flash memory to 8 MB, 10 Gadgeteer sockets, an Ethernet port, microSD socket, and other features. .NET Gadgeteer devices Several manufacturers make boards and modules compatible with the .NET Gadgeteer rapid-prototyping standard for the framework. MikroBus.Net The MikroBus.Net Quail is an STM32-powered development solution for building hardware prototypes with click boards and C# managed code. It brings together Mikroelectronika click boards and the Micro Framework for embedded devices (NETMF). See also DirectBand .NET Compact Framework .NET Framework References External links Micro Framework Free computer libraries Microsoft free software Software using the Apache license 2007 software
47647184
https://en.wikipedia.org/wiki/Grand%20Theft%20Auto%20modding
Grand Theft Auto modding
User modification, or modding, of games in the open-world sandbox Grand Theft Auto series is a popular trend in the PC gaming community. These unofficial modifications are made by altering gameplay logic and asset files within a user's game installation, and can drastically change the gameplay experience. Frequently created by anonymous modders, modifications are presented in the form of downloadable files or archives. Third-party software has been indispensable for building Grand Theft Auto mods, due to the lack of official editing tools from the developer, Rockstar Games. Mods for Grand Theft Auto are generally developed for use on the PC versions of the games, since the platform does not prevent modifications to installed software; however, similar content for console and mobile phone versions does exist to an extent. Background While mods for Grand Theft Auto have been developed by hobbyists since the release of the first game, mods only became more accessible and popular with the release of Grand Theft Auto III on Microsoft Windows in May 2002. The use of a 3D game engine (the first in the series) allowed development of custom vehicles, textures and character models, followed by new missions and map modifications; the success of these new types of mods started to attract widespread attention. In the following years, the modding scene became more sophisticated and complex, as various aspects of the game's internals were gradually discovered and documented by hackers. One of the best-known examples is the iCEnhancer graphics modification for Grand Theft Auto IV by Hayssam Keilany, praised by reviewers and labelled as "arguably one of the best mods of all time" by Polygon. In most of the games, certain data files were stored in simple archives or plain text files, allowing modders to edit them using basic tools. However, more complex modifications, such as changes to the gameplay mechanics or the addition of custom models or maps, were not possible without more advanced tools specific to GTA, along with commercial-grade modelling programs such as SketchUp, 3ds Max and ZModeler. For this reason, modders would often collaborate on various fan sites, pooling resources and sharing documentation with one another. Patrick Wildenborg, the modder responsible for unlocking the Hot Coffee mini-game, said that "the modding community felt like a bunch of friends trying to solve a mystery". Online modding communities Grand Theft Auto fan communities have been essential to the growth of the modding scene. Modders were able to exchange knowledge and team up in order to create new tools, mods and documentation. GTA communities, Internet forums, and fan sites also serve as hosts for mods. Besides YouTube, sites such as GTANet, GTAinside, GTA V Mods and The GTA Place serve as platforms for content exchange and discussion about modding and the Grand Theft Auto series in general. FiveM, an alternative multiplayer and role-playing modification for Grand Theft Auto Online, amassed a concurrent player count of 250,000 on Steam in April 2021, surpassing that of the base game. Earlier in February, Grand Theft Auto V became the most-watched category on Twitch due to an update for NoPixel, one of FiveM's largest servers, which costs around $10,000 per month to host.
LCPDFR and LSPDFR are modifications released for Grand Theft Auto IV and Grand Theft Auto V that convert their respective games into law enforcement simulators; the mods' developers, G17 Media, also develop RDRFR, a similar law enforcement simulator conversion for Red Dead Redemption 2. As of January 2022, LSPDFR has almost 11 million downloads, and LCPDFR has 2 million downloads; the mods' website, LCPDFR.com, also hosts over 27,000 additional third-party mod files and has over 420,000 registered users. LCPDFR and LSPDFR made news in Australia in 2017, when New South Wales Police Minister Troy Grant denounced the mods on Seven News, calling the ability to install add-ons based on the NSW Police and to harm in-game NSW Police officers, or potentially commit police brutality, "perverse" and "inaccurate". Grant's statements were met with criticism from the LSPDFR community, including a developer of one of the NSW Police mods, who said their content was harmless and that they "inspired kids to think of a career in the emergency services field". Aspects of GTA modding Mods are a part of the Grand Theft Auto franchise's success on PCs. Their popularity added to the longevity and further success of the GTA series, and complex modifications such as Zombie Alarm and LSPDFR offer entirely new experiences. Modding served as one of the main channels for innovations in gameplay. The best example of this is arguably Grand Theft Auto: San Andreas, which boasts one of the largest modding communities in PC gaming. New modifications are still released for the game more than a decade after its release. When Grand Theft Auto V was released on PC in 2015, the emergence of new mods depended on the creation of new GTA V-specific modification tools. While GTA IV already had many mods and tools due to its age, GTA V modders had difficulties creating mods until completely new tools were made. One of the most notable tools created was OpenIV, a file exploring and editing program allowing for easy manipulation of the game files. As GTA Online is built as a component of GTA V, it has been the subject of mods that intentionally cause negative effects on players' experiences. Because many mods in the form of in-game cheats were added to GTA Online, Rockstar developed an isolated section where those who used cheats in GTA Online would temporarily be placed. Prior to Grand Theft Auto IV and GTA Online, mods such as San Andreas Multiplayer and Multi Theft Auto were developed in lieu of an official multiplayer component for previous titles. Although major mod hosting websites (such as GTAinside or GTAGarage) often check mods for possible malware, content infected with viruses and rogue software occasionally slips through. As such, modding websites began to screen content owners and their works for any malicious content or similar quality or security issues. Reaction from Rockstar Games Modification of Grand Theft Auto is not endorsed by Rockstar, and as such there is no official editing tool that allows manipulation of in-game files. In answer to a question from a fan, Rockstar expressed its views on GTA modification, stating that it has always appreciated the efforts of the modding community and still gladly remembers classic mods like Zombie Invasion or the original Grand Theft Auto III map on Grand Theft Auto IV. The company also declared that its modding policy had not changed and was the same as for GTA IV.
The end-user licence agreement contradicts this, however, as it states users may not "reverse engineer, decompile, disassemble, display, perform, prepare derivative works based on, or otherwise modify the Software, in whole or in part". For the release of Grand Theft Auto V for PC, Rockstar indicated that its primary focus was ensuring that Grand Theft Auto Online was free of possible hacks and exploits, and that therefore modding would not be allowed in Online. In August 2015, several members of the FiveM team had their Rockstar Games Social Club accounts suspended due to their involvement in an alternative multiplayer client for Grand Theft Auto V. Upon being contacted by Kotaku regarding the issue, Rockstar stated that the FiveM client was an unauthorized modification "designed to facilitate piracy" and that, as such, it violated the terms of use and therefore members were banned from the Social Club service. While Rockstar previously provided some support with the original Grand Theft Auto and Grand Theft Auto 2, and even used a third-party utility for developing the Grand Theft Auto: London expansion packs, the only official modification tool Rockstar has released is Rockstar Editor, a tool which allows users to record and edit videos in-game. Options such as effects, audio, speed, cameras, and navigation tools are made available to the player. Impact of the Hot Coffee mod Hot Coffee is a normally inaccessible mini-game in Grand Theft Auto: San Andreas. The mini-game portrays crudely animated sexual intercourse between the main character and a chosen partner. After Patrick Wildenborg, a software engineer who also went by the alias "PatrickW", modified the game to make the mini-game accessible, Hot Coffee quickly gained notoriety worldwide, impacting consumer culture, politics and the video game industry as a whole. Rockstar initially blamed a "determined group of hackers" for hacking the base game and creating the mini-game from scratch. This claim was eventually refuted, as the mini-game's code and assets had been developed by Rockstar and were already present, unfinished and abandoned, on the game disc: the mod simply made the existing content available to players. Rockstar went on to indicate that it expected the ESRB rating to remain unchanged, as it had no control over the modifications applied to the game post-release. However, the ESRB chose to have the rating changed to Adults Only, with the modding community taking the blame. The ESRB later called on the video game industry to "proactively protect their games from illegal modifications by third parties, particularly when they serve to undermine the accuracy of the rating". In an interview with CNET, David Kushner explained that some modders were "scared" that Rockstar would prevent modifications to their games after Hot Coffee, but noted that once the controversy had passed the outcome was refined ESRB guidelines rather than increased regulation, and a renewed public appreciation for mature content in video games. Of the controversy and eventual fallout, Gamasutra wrote that "The treatment left many in the GTA mod community with mixed feelings". Other legal issues A weapon replacement mod for GTA V, which replaces the game's sticky bomb weapon with an exploding Samsung Galaxy Note 7 as a parody of the phone's battery explosion issue and subsequent recall, attracted controversy when Samsung Electronics America, Inc.
reportedly issued takedown notices to YouTube demanding that videos depicting the phone be taken down as an alleged violation of its copyrights. The Verge called this "bogus" and a "ridiculous overreach and misuse of the DMCA", while it was also suggested that the takedowns would only draw further attention to the content. Although Samsung has attempted to take down videos depicting the mod, it has not targeted websites hosting the mod itself. On 14 June 2017, Take-Two Interactive sent a cease and desist to the developers of OpenIV, a program that allows users to install modifications for various Rockstar titles such as Grand Theft Auto IV, Max Payne 3 and Grand Theft Auto V, claiming that OpenIV allowed third parties to modify and defeat the security features of its software. Rockstar later responded by saying that "Take-Two's actions were not specifically targeting single player mods... We are working to figure out how we can continue to support the creative community without negatively impacting our players." In a statement issued on 23 June, Rockstar said that Take-Two had agreed not to take legal action against third-party single-player modding projects involving Rockstar's games on PC. Rockstar also contacted the developers of OpenIV in an effort to resolve the dispute. Although the details of that meeting remain undisclosed, shortly afterwards OpenIV was made available to download, and received a minor update, indicating the conversation with Rockstar was successful. The cease and desist led to a review bomb from users, and caused a chilling effect, with some mod developers discontinuing support for the game due to perceived legal ramifications of modding Grand Theft Auto V. On 19 February 2021, representatives claiming to be from Take-Two Interactive sent DMCA takedown notices to the team behind re3 and reVC, source code recreations of Grand Theft Auto III and Grand Theft Auto: Vice City respectively, whose goal was a modernised and improved version of the games that could be ported to and played on contemporary and future platforms. The representatives alleged that the source code recreations were an act of copyright infringement, stating that "The work is not licensed in any way," and the projects' GitHub repositories were taken down as a result. Project leader aap cited the Super Mario 64 source code recreation as a precedent for the projects, noting that the Super Mario 64 repository is, as of 2021, still available despite Nintendo's reputation for aggressive litigation against copyright infringers. In April, Theo, a New Zealand-based developer who maintained a fork of the source code, filed a counter-notice on GitHub, claiming that the code does not contain any original work owned by Take-Two; per DMCA rules regarding disputes, Theo's content was restored after two weeks, though the original repository and other forks remained unavailable. In September 2021, Take-Two filed a lawsuit in California against the programmers, asserting that they "are well aware that they do not possess the right to copy, adapt, or distribute derivative GTA source code, or the audiovisual elements of the games, and that doing so constitutes copyright infringement". References Fan labor Modding Unofficial adaptations Video game mods
81937
https://en.wikipedia.org/wiki/Acamas
Acamas
Acamas or Akamas (; Ancient Greek: Ἀκάμας, folk etymology: 'unwearying') was a name attributed to several characters in Greek mythology. The following three all fought in the Trojan War, and only the first was not mentioned by Homer. Acamas or Acamans, a Cyclops that lived in the company of Pyracmon or Pyragmon in Pelorum (north-east coast of Sicily). Acamas, son of Theseus, mentioned by Virgil as being in the Trojan horse. Acamas, one of the Thebans who laid an ambush for Tydeus when he returned from Thebes. He was killed by Tydeus. Acamas, an Aetolian in the army of the Seven against Thebes. Acamas, a soldier in the army of the Seven against Thebes. When the two armies attack each other at the gates of the city, the hard-hearted Acamas pierces the Theban horseman Iphis. Acamas, son of Antenor, fought on the side of the Trojans and killed one Greek. Acamas, son of Eussorus, from Thrace, and thus, could be the brother of Aenete and Cyzicus. With his comrade Peiros, son of Imbrasus, Acamas led a contingent of Thracian warriors to the Trojan War. Acamas was killed by Ajax or by Idomeneus who thrust him out of his chariot and caught him, as he fell, on the tip of his spear. Acamas, one of the Suitors of Penelope who came from Dulichium along with other 56 wooers. He, with the other suitors, was slain by Odysseus with the aid of Eumaeus, Philoetius, and Telemachus. Acamas, one of Actaeon's dogs. Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Apollonius Rhodius, Argonautica translated by Robert Cooper Seaton (1853-1915), R. C. Loeb Classical Library Volume 001. London, William Heinemann Ltd, 1912. Online version at the Topos Text Project. Apollonius Rhodius, Argonautica. George W. Mooney. London. Longmans, Green. 1912. Greek text available at the Perseus Digital Library. Dares Phrygius, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at theio.com Dictys Cretensis, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at the Topos Text Project. Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project. Gaius Valerius Flaccus, Argonautica translated by Mozley, J H. Loeb Classical Library Volume 286. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1928. Online version at theio.com. Gaius Valerius Flaccus, Argonauticon. Otto Kramer. Leipzig. Teubner. 1913. Latin text available at the Perseus Digital Library. Graves, Robert, The Greek Myths, Harmondsworth, London, England, Penguin Books, 1960. Graves, Robert, The Greek Myths: The Complete and Definitive Edition. Penguin Books Limited. 2017. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library. 
The Orphic Argonautica, translated by Jason Colavito. Copyright 2011. Online version at the Topos Text Project. Publius Papinius Statius, The Thebaid translated by John Henry Mozley. Loeb Classical Library Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1928. Online version at the Topos Text Project. Publius Papinius Statius, The Thebaid. Vol I-II. John Henry Mozley. London: William Heinemann; New York: G.P. Putnam's Sons. 1928. Latin text available at the Perseus Digital Library. Publius Vergilius Maro, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library. Publius Vergilius Maro, Bucolics, Aeneid, and Georgics. J. B. Greenough. Boston. Ginn & Co. 1900. Latin text available at the Perseus Digital Library. Cyclopes Achaeans (Homer) Trojans Characters in the Iliad Suitors of Penelope Characters in Greek mythology Greek masculine given names Given names of Greek language origin
6861397
https://en.wikipedia.org/wiki/Marine%20Wing%20Communications%20Squadron%2028
Marine Wing Communications Squadron 28
Marine Wing Communications Squadron 28 (MWCS-28) is a United States Marine Corps communications squadron. The squadron provides expeditionary communications for the aviation combat element of the II Marine Expeditionary Force. They are based at Marine Corps Air Station Cherry Point and fall under the command of Marine Air Control Group 28 and the 2nd Marine Aircraft Wing. Mission Provide expeditionary communications for the aviation combat element (ACE) of a Marine Expeditionary Force (MEF), including the phased deployment element of tasked-organized elements thereof. Tasks Provide for the effective command of subordinate elements. Assist in the systems planning and engineering of ACE communications; and install, operate, and maintain expeditionary communications for command and control of the MEF ACE. Provide operational System Control Centers, as required, to coordinate communication functions internally and externally to the ACE. Provide calibration and repair facility for all ground common Test Measurement Diagnostic Equipment (TMDE) in the Marine Aircraft Wing (MAW). Provide maintenance support for Ground Common Communications Equipment in the MAW. Provide the digital backbone communications support for the ACE Command Element (CE), forward operating bases, and Marine Air Command and Control System (MACCS) agencies for up to two airfields per detachment. Provide tactical, automated switching, and telephone services for the ACE CE, and Tactical Air Command Center (TACC). Provide electronic message distribution for the ACE CE, primary MACCS agencies, and tenant units. Provide external, single-channel radio and radio retransmission communications support for ACE operations, as required. Provide deployed Wide Area Network, and deployed Local Area Network server support for the ACE CE, and primary MACCS agencies. Provide the Support Cryptographic Site (SCS), for all Ground Common and MACCS assigned Communication Security Equipment within the ACE. Plan and coordinate individual and unit training, as required to qualify subordinate detachments for tactical deployment and combat operations. MWCS-28 Unit History Early History: 1960s and 1970s Throughout the Squadron's early history, MWCS-28 supported a variety of large scale, Joint, and Coalition exercises tying aviation assets and aviation command and control agencies to the Marine Air Ground Task Force. In the context of the Squadron's establishment during the Vietnam War, the Squadron's contributions and participation in these specific types of exercises ensured the 2d Marine Aircraft Wing's and the Corps' ability to provide combat power during major combat operations. Originally designated Marine Wing Communications Squadron 2, MWCS-28 was formed on 1 September 1967. MWCS-2 was organized by aggregating the organic communications sections from Marine Air Control Group 2's subordinate squadrons in to a single, centralized communications element. MWCS-2, under the command of Major Madigan with 5 officers and 55 enlisted Marines, reported to Marine Wing Headquarters Group 2. In 1972, MWCS-2 was re-designated as MWCS-28 and was reassigned to its present higher headquarters, Marine Air Control Group 28. Exercises like "Operation Teamwork 76" in Europe and "Operation Bonded Item 76" in Denmark and West Germany supported the employment of aviation assets to North Atlantic Treaty Organization forces. 
Additionally, exercises like "Exotic Dancer III" and Exotic Dancer IV", held at Camp Lejeune, North Carolina, were two of the largest military operations outside the Vietnam War. These exercises provided communications support to 60,000 troops from all branches of the armed forces, and supported the Wing's core competencies during major combat operations. On 6 September 1979, a massive fire ruined MWCS-28's Headquarters buildings. Maintaining their focus on mission accomplishment, seven days after the fire, the Squadron embarked on ships bound for Turkey on 13 September to participate in "Display Determination 79." During this exercise, MWCS-28 Marines enabled command and control for Marine Aircraft Group 20. During their first 13 years, MWCS-28 supported core competencies for aviation elements during major combat operations and demonstrated the ability to tailor and scale communications support to enable Joint, Coalition, Wing, and Group-level elements. 1980s During the 1980s, MWCS-28 continued to enable our Corps' actions as a member of the Joint force. The Squadron continued to develop the concepts of employment associated with supporting Wing and Group-level elements while additionally developing the capability to support aviation elements afloat and aviation elements task-organized to support Marine Amphibious Brigades (MAB). The 1980s also saw a significant restructuring for the MWCS-28 organization, and that restructuring remains intact today. In February 1984, MWCS-28 participated in "Operation Ocean Venture 82" at Roosevelt Roads Naval Station, Puerto Rico in support of Marine Aircraft Group 40 and the 4th MAB. Also in February, personnel and equipment were embarked aboard the USS Mt. Whitney, USS Charleston, USS Raleigh, USS Saipan, and the USS El Paso for transit to Northern Norway for participation in exercise "Busy Eagle/Team Work - 84." That same year MWCS-28 participated in five major exercises to include two Combined Arms Exercise (CAX 9/10) at Twenty Nine Palms, California and a Joint Chief of Staff-directed MAB-level exercise conducted in Florida ("Bold Eagle 86"), which successfully demonstrated the Marine Corps' capability to interface data communications with Air Force systems. In 1985, the Squadron executed the 2d Marine Aircraft Wing's Communication Reorganization, directed in 1981 by the 2d Marine Aircraft Wing Commanding General, Major General Keith Smith. During this reorganization, the communications detachments supporting the Marine Air Groups at MCAS New River, NC (Detachment Alpha) and MCAS Beaufort, SC (Detachment Bravo) were assigned to the operational control of MWCS-28 Headquarters located at MCAS Cherry Point. This reorganization centralized the Wing's capability to provide communications links from the MAW Headquarters to each of the 2d Marine Aircraft Wing's Marine Aircraft Groups (MAGs) at each of 2d MAW's Air Stations. MWCS-28's experience in providing communication support to Joint, Coalition, and Wing-level elements would pay dividends in the 1990s. The exercises supported by the Squadron during the 1980s developed the concepts of employment associated with providing smaller, task-organized detachments to Brigades, Groups, and amphibious assets. Operation DESERT SHIELD / Operation DESERT STORM The successes Detachments "A" and "B" achieved during their support of numerous exercises in the 1960s, 1970s, and 1980s would be leveraged for greater success in support of major combat operations during Operation DESERT SHIELD and Operation DESERT STORM. 
In August 1990, Iraq invaded the sovereign nation of Kuwait, and MWCS-28, Detachment A (-) immediately deployed with Marine Aircraft Group 40 aboard the USS Nassau, an LHA-4, and the USS Iwo Jima, an LHA-2. Four months later, on 10 December, Detachment B (Fwd) deployed to the Persian Gulf for Al-Jubail, Saudi Arabia. Detachment B (Fwd) remained at Al-Jubail until 26 December when they moved to Ras al-Mishab where they established two multichannel links - linking General Support Group (GSG) to the Wing's Tactical Air Operations Center, and also linking the Tactical Air Operations Center to the Direct Support Center. On 1 Jan 1991, the remainder of Detachment B arrived in Ras al-Mishab, and prepared to support Marine Aircraft Group 26's movement to Lonesome Dove, the largest rotary aircraft base in Marine Corps history, housing approximately 200 aircraft. Detachment B established links between Marine Aircraft Group 26 and the Early Warning Center and Direct Support Center. These links tied together the Marine Air Command and Control System in support of air campaign and ground war during Operation Desert Shield and Operation Desert Storm. Throughout the conduct of the ground war, Detachment A (-) remained embarked while preparing to support aviation operations ashore during an amphibious assault. The 1990s: Resetting the Force and Continued Support to Global Security In the 1990s, small detachments of MWCS-28 Marines were deployed all over the world, while the Squadron continued to restructure itself in order to provide the highest degree of communications support to 2d MAW and the Wing's task-organized element. On 23 November 1991, Detachment A sent 9 Marines to support the Marine Corps' humanitarian assistance operations in Haiti during Operation GITMO. And, on 11 December 1992, 11 MWCS-28 Marines deployed to Somalia with I Marine Expeditionary Force in support of Operation RESTORE HOPE. In 1992, MWCS-28's Commanding Officer, LtCol Washburn, implemented the Marine Corps' directed Table of Organization change, eliminating the two Detachments, each located at MCAS Beaufort and MCAS New River. The deactivation of Detachment A and Detachment B at their respective Air Stations was conducted in order to consolidate the Squadron's equipment and personnel at MCAS Cherry Point. During this period, the Squadron continued to support 2d MAW's elements during the normal tempo of operations, though it was at 50% of its original strength. In 1993, the Squadron sent 92 Marines to Fort McCoy, Wisconsin for participation in "Alpine Warrior 93", supported Weapons and Tactics Instructor Course 1–93 in Yuma, Arizona, deployed 88 Marines to Puerto Rico for two weeks to support a Command Post Exercise, and then a majority of the Squadron deployed with MAG-40 to Norway for "Battle Griffin 93" in February. On 21 September 1994, the Squadron completed the activities associated with consolidating its two detachments at MCAS Cherry Point, and officially reorganized as a Headquarters Detachment and two operational Detachments, all located at MCAS Cherry Point. This consolidation allowed the Squadron to supervise the collective training development and to manage the force provision of the two independent detachments. During the course of the next year, the Squadron participated in numerous exercises to include two Joint Systems Training Exercises JSTE 95-01/02, CAX 3/4-95, "Iron Ace 95", and "Roving Sands 95". 
In November 1999, after supporting four real-world operations, 18 major exercises, 19 local exercises, and six Marine Expeditionary Unit (Special Operations Capable) detachments, MWCS-28 was presented the Meritorious Unit Citation (MUC) by CG 2d MAW, MajGen Krupp. The 1990s saw the Squadron's participation in an ever-increasing variety of missions. From the arid deserts of the American Southwest to the arctic tundra of Norway, the rainforests of South America, and the littorals of the Mediterranean, MWCS-28 continued to develop new concepts of employment to support 2d MAW and the Marine Corps. The Squadron also supported new technologies: in 1995, MWCS-28 implemented routers as elements of the communications infrastructure to provide wide area network (WAN) and local area network (LAN) services to 2d MAW's users. The New Century: Y2K, Operation IRAQI FREEDOM and Operation ENDURING FREEDOM The beginning of the 21st century saw MWCS-28 continue to adjust its organizational structure to provide the highest degree of operational readiness and support to 2d MAW and the II MEF MAGTF. The Squadron also continued to support the implementation, operation, and maintenance of new and emerging technologies. At the turn of the century, 9 Marines from MWCS-28, Alpha Company, supported the Year 2000 (Y2K) Response Cell. These teams ensured that the Marine Corps continued to have positive communication with MCAS Cherry Point, NC and MCAS Beaufort, SC during the Y2K transition. These 9 Marines tested over 2,500 personal computers, 100 servers, 15 routers, and 60 switches for Y2K deficiencies. On the first day of the year 2000, MWCS-28 officially re-designated Alpha, Bravo, and Headquarters and Service Detachments as Alpha, Bravo, and Headquarters & Service Companies. From 5 May to 10 June 2000, B Company (Rein) deployed communications equipment and personnel to Hellenic Army Base (Camp Sparta), Assiros, Greece, and Macedonia Airfield, Thessaloniki, Greece, to support MAG-26 during "Dynamic Mix 00". During the exercise, 115 Marines were deployed to four geographically separate sites where the Marines provided links on 2d MEB's communications architecture. AN/TRC-170s and AN/MRC-142s provided long-haul connectivity, while AN/TTC-42s and SB-3865s provided telephone and switching services. On 22 April 2002, following the 11 September 2001 attacks, 9 MWCS-28 Marines deployed in support of Operation ENDURING FREEDOM (OEF). Again, on 27 August 2002, 4 more Marines deployed with MACS-2 to support OEF. Operation IRAQI FREEDOM In early 2003, in support of Operation IRAQI FREEDOM, MWCS-28 provided Marines for MAG-29, as part of Task Force Tarawa and ATF East. MWCS-28 also provided a 10-Marine communications detachment to Marine Aviation Logistics Squadron 14 for duty aboard the East Coast T-AVB, the SS Wright, T-AVB-3. Arriving in the Middle East in February, the majority of the Squadron deployed to and staged at Ali Al Salem Airbase while the Operations Section co-located with 3d MAW G-6 and MWCS-38 at Al Jaber Airbase. During March, the Squadron began to reposition itself for movement from Kuwait into Iraq. The Squadron provided a detachment to the Jalibah Forward Operating Base (JFOB), the QFARP detachment, and the AFOB detachment. On 21 March 2003, the JFOB advance party, JFOB main body, and the QFARP detachment crossed the border and moved into Iraq with their respective convoys. The Squadron Commanding Officer, LtCol Kenyon M.
Gill, moved with the JFOB Main Body to its destination in Jalibah. The JFOB Detachment began establishing its network at Jalibah via TRC-170, and within 14 hours provided the commander's information requirements. The network included 6 multi-channel AN/TRC-170 links, 5 multi-channel AN/MRC-142 links, 5 tactical telephone switches, 180 tactical phones, 645 SIPRNET accounts, 860 NIPRNET accounts, and 23 radio nets. MWCS-28 provided communications support to MAGs 29 and 16 along with their subordinate Squadrons and MWSS-373, MALS, VMU, EWC, and ATC. The QFARP Detachment followed in trace of Task Force Tarawa. This detachment established a critical AN/TRC-170 retransmission site, between Jalibah and An Numaniyah while providing services to VMU-1 on two separate occasions, and to MWSS-371. LtCol Kenyon M. Gill and the AFOB Detachment followed in trace of the 1st Marine Division. On 25 March, the detachment stopped at FOB An Numaniyah, Iraq, and within 11 hours the AFOB detachment installed 2 AN/TRC-170s to Qalat Sikar, 2 to Salman Pak, and 1 to Task Force Tarawa; 7 AN/MRC-142s links, 4 tactical telephone switches, 96 tactical phones, 400 SIPRNET accounts, and over 60 NIPRNET accounts. The AFOB Marines provided support to MAGs 39 and 13 and their subordinate Squadrons, MWSS-271, MALS, EWC, ATC, VMU 1 and 2, Patriot HQs Battery, 1st FSSG Med Bn, 8th ESB, SEAL Team 3, and CSSB-12. MWCS-28's participation in Operation IRAQI FREEDOM during 2003 provided three crucial and timely communications nodes at Jalibah, Qalat Sikar, and An Numaniyah for 3d MAW, TF Tarawa, and 1st Marine Division in order to enable the command and control for the Marine Corps during major combat operations. Following combat operations in 2003, from 2004 to 2008, MWCS-28 consistently deployed more than 50% in support of OIF 04-06, 05-07, 06-08, 07-09. During this period, the Squadron continued to support 2d MAW training exercises, Marine Expeditionary Units, and Weapons and Tactics Instructor Courses. In 2007, MWCS-28 was recognized as the most outstanding communications unit in the Marine Corps, receiving the LtCol Kevin M. Shea award, only the second unit in the Marine Corps to receive this distinction. In 2009, MWCS-28 simultaneously provided support to both Operation IRAQI FREEDOM and Operation ENDURING FREEDOM. In January, Company A deployed to support Operation IRAQI FREEDOM, and in April, Company B deployed in support of MEB-Afghanistan to support of the expansion of the Operation ENDURING FREEDOM mission. Alpha Company returned in October ending six straight years of MWCS-28 support to Operation IRAQI FREEDOM. And in November 2009, the second increment of Company B personnel deployed to Operation ENDURING FREEDOM 10.2. As in recent and distant years past, the Squadron, while supporting the Marine Corps' operational commitments in support of Operation IRAQI FREEDOM and Operation ENDURING FREEDOM, continued to support aviation logistics exercises (T-AVB), Marine Expeditionary Units, MACCS Integrated Simulated Training Exercises (MISTEXs), as well as Weapons and Tactics Instructor Courses, and Wing-level Large Scale Exercises. Marine Aircraft Group 40 In September 2012, 80 MWCS-28 Marines deployed to Kuwait and Bahrain as part of MAG-40. MAG-40 was a composite air group of 500 Marines consisting of F/A 18D and KC-130 Squadrons with a Marine Wing Support Squadron and Marine Aviation Logistics element, which all supported Theater Security Cooperation activities for the USCENTCOM commander. 
The Squadron's Marines installed a Standard Tactical Entry Point (STEP) communication node that provided the gateway to secure and non-secure web, email, and voice services to MAG-40 and for elements entering the Persian Gulf region in support of the USCENTCOMM commander. Operation ENDURING FREEDOM From November 2010 until January 2013, MWCS-28 provided forces to Operation ENDURING FREEDOM. As the mission in support of Operation ENDURING FREEDOM expanded, the size of the detachments expanded, and as the mission contracted so did the detachment's size. In February 2011, Company A deployed 120 Marines as part of Operation ENDURING FREEDOM 11.1, Company B supported the OEF 11.2 Detachment with 120 Marines, Company B supported the OEF 13.1 Detachment with 33 Marines, and in January 2013, Company A deployed 33 Marines in July as part of the last MWCS-28 detachment supporting Operation ENDURING FREEDOM, MWCS-28 OEF Det 13.2. During a majority of the Squadron's deployment time in support of Operation ENDURING FREEDOM, the Detachments provided the Regional Command (Southwest)'s alternate STEP access for the MAGTF's area of operations. In addition to supporting 2d MAW elements and airfields at FOBs Camp Leatherneck, Camp Dwyer, and Kandahar Airfield, the Squadron's Marines supported thousands of users, combat missions, and sorties for members of ground forces, combat service support forces, interagency, Joint, and coalition elements. Special Purpose Marine Air Ground Task Force – Crisis Response In July 2013, MWCS-28 deployed a 29-Marine detachment in support of Special Purpose Marine Air Ground Task Force-Crisis Response (SPMAGTF-CR) Command Element. SPMAGTF-CR was staged in Moron, Spain in response to the 2012 attack on the American Embassy in Benghazi, Libya. Operating outside of their traditional MWCS-28 mission, these Marines gave the SPMAGTF-CR the ability to install a forward command element in support of Theater Security Cooperation, Embassy Reinforcement, and Non-combatant Evacuation Operations. The MWCS-28 Marines provided secure and non-secure web, email, and voice services via a Standard Tactical Entry Point to extend the digital backbone to Forward Arming and Refueling Points, Forward Operating Bases, and Logistical Support Areas established by the ACE. Marines from the MWCS-28 detachment also deployed with the ground force within Spain, France and with forward elements posted at American embassies in Northern Africa and a forward command post in Italy. Enabling the Combat Power of the MAGTF From 2010 to 2013, countless Marines have acted as individual augments in support of various operational commitments and units in the USCENTCOM and USAFRICOM AOR. These units include MACG-28 (Fwd), 2d MAW (Fwd), MEF Headquarters Group (Fwd), II MEF (Fwd), Marine Forces Central Command (Fwd), and Redeployment and Retrograde in Support of Reset and Reconstitution Operations Group (R4OG), and Regimental Combat Team 2. In support of enduring commitments in the USAFRICOM AOR, one Marine was deployed to Monrovia, Liberia for Operation ONWARD LIBERTY. As the Squadron has continued to maintain forces deployed in support of Operation ENDURING FREEDOM, Marine Expeditionary Units, and SPMAGTFs, the Marines of MWCS-28 continually provide the most available Forces and high-degree of communications availability in order to enable the combat power of the MAGTF. In February 2014, MWCS-28 was recognized as the most outstanding communications unit in the Marine Corps, receiving the LtCol Kevin M. 
Shea award for the third time. MWCS-28 was the only unit in the Marine Corps to receive this recognition three times, and the only unit to earn it in back-to-back years. See also United States Marine Corps Aviation Organization of the United States Marine Corps List of United States Marine Corps aviation support units Notes The Marines of MWCS-28 hold a bi-annual "Spartan Cup", a day long event of combined athletics and tests of strength. References Bibliography Web MWCS-28’s official website Comm28
17287225
https://en.wikipedia.org/wiki/Crossloop
Crossloop
CrossLoop was a remote desktop service from CrossLoop Inc. which allowed users to share their computer screens and collaborate with others over the Internet. On July 6, 2012, AVG Technologies acquired CrossLoop Inc., and on January 31, 2014, it shut down the www.crossloop.com website and all former CrossLoop Inc. services, including its remote-control screen-sharing software products. AVG Technologies did not give a reason for the shutdown, commenting only that it was a business decision. Software Products CrossLoop Pro – a subscription-based collection of tools for remote access and customer management; CrossLoop Remote Access – subscription-based remote access software; CrossLoop Free – a free screen-sharing and collaboration tool. Marketplace Service The CrossLoop Marketplace was a place where people could get remote IT support 24 hours a day from a global network of experts through secure screen-sharing technology. Individuals and businesses could get assistance with computers, mobile devices, peripherals, software and training. On January 31, 2014, this service was shut down by AVG Technologies. Technology UltraVNC on Windows (CrossLoop formerly used TightVNC) and Chicken of the VNC on a Macintosh were used to establish a connection between computers, with a separate application handling the billing of services between the two endpoints. CrossLoop encrypted the data sent between the computers using a 128-bit Blowfish encryption algorithm (a generic illustration of such encryption appears below). The service could work even if one or both of the computers were behind a firewall. Competitors RealVNC Splashtop Business & Enterprise LogMeIn GoToMyPC GoToAssist TeamViewer Bomgar See also Tom Rolander Comparison of remote desktop software Remote Desktop Protocol (RDP) Terminal Services Virtual Network Computing (VNC) References External links VentureBeat IndustryStandard AllThingsD Sept. 25, 2008 Review Virtual Network Computing Remote desktop Remote administration software
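CrossLoop's wire protocol itself was proprietary and is not documented here, but the 128-bit Blowfish encryption mentioned in the Technology section can be illustrated generically. The short Python sketch below uses the PyCryptodome library; the CBC mode, the randomly generated key, and the sample payload are assumptions made only for this example and are not taken from CrossLoop's actual implementation.

```python
# Generic illustration of 128-bit Blowfish encryption using PyCryptodome.
# Mode, key handling, and payload are assumptions for this sketch only;
# CrossLoop's real protocol details are not public.
from Crypto.Cipher import Blowfish
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)                      # 16 bytes = 128-bit key
payload = b"example screen-sharing frame data"

# Encrypt: Blowfish is a 64-bit block cipher, so pad to 8-byte blocks.
cipher = Blowfish.new(key, Blowfish.MODE_CBC)
ciphertext = cipher.encrypt(pad(payload, Blowfish.block_size))

# Decrypt on the other endpoint, which must share the key and the IV.
decipher = Blowfish.new(key, Blowfish.MODE_CBC, iv=cipher.iv)
recovered = unpad(decipher.decrypt(ciphertext), Blowfish.block_size)
assert recovered == payload
```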
54695
https://en.wikipedia.org/wiki/RAID
RAID
RAID (; "redundant array of inexpensive disks" or "redundant array of independent disks") is a data storage virtualization technology that combines multiple physical disk drive components into one or more logical units for the purposes of data redundancy, performance improvement, or both. This was in contrast to the previous concept of highly reliable mainframe disk drives referred to as "single large expensive disk" (SLED). Data is distributed across the drives in one of several ways, referred to as RAID levels, depending on the required level of redundancy and performance. The different schemes, or data distribution layouts, are named by the word "RAID" followed by a number, for example RAID 0 or RAID 1. Each scheme, or RAID level, provides a different balance among the key goals: reliability, availability, performance, and capacity. RAID levels greater than RAID 0 provide protection against unrecoverable sector read errors, as well as against failures of whole physical drives. History The term "RAID" was invented by David Patterson, Garth A. Gibson, and Randy Katz at the University of California, Berkeley in 1987. In their June 1988 paper "A Case for Redundant Arrays of Inexpensive Disks (RAID)", presented at the SIGMOD Conference, they argued that the top-performing mainframe disk drives of the time could be beaten on performance by an array of the inexpensive drives that had been developed for the growing personal computer market. Although failures would rise in proportion to the number of drives, by configuring for redundancy, the reliability of an array could far exceed that of any large single drive. Although not yet using that terminology, the technologies of the five levels of RAID named in the June 1988 paper were used in various products prior to the paper's publication, including the following: Mirroring (RAID 1) was well established in the 1970s including, for example, Tandem NonStop Systems. In 1977, Norman Ken Ouchi at IBM filed a patent disclosing what was subsequently named RAID 4. Around 1983, DEC began shipping subsystem mirrored RA8X disk drives (now known as RAID 1) as part of its HSC50 subsystem. In 1986, Clark et al. at IBM filed a patent disclosing what was subsequently named RAID 5. Around 1988, the Thinking Machines' DataVault used error correction codes (now known as RAID 2) in an array of disk drives. A similar approach was used in the early 1960s on the IBM 353. Industry manufacturers later redefined the RAID acronym to stand for "redundant array of independent disks". Overview Many RAID levels employ an error protection scheme called "parity", a widely used method in information technology to provide fault tolerance in a given set of data. Most use simple XOR, but RAID 6 uses two separate parities based respectively on addition and multiplication in a particular Galois field or Reed–Solomon error correction. RAID can also provide data security with solid-state drives (SSDs) without the expense of an all-SSD system. For example, a fast SSD can be mirrored with a mechanical drive. For this configuration to provide a significant speed advantage, an appropriate controller is needed that uses the fast SSD for all read operations. Adaptec calls this "hybrid RAID". Standard levels Originally, there were five standard levels of RAID, but many variations have evolved, including several nested levels and many non-standard levels (mostly proprietary). 
RAID levels and their associated data formats are standardized by the Storage Networking Industry Association (SNIA) in the Common RAID Disk Drive Format (DDF) standard: RAID 0 consists of striping, but no mirroring or parity. Compared to a spanned volume, the capacity of a RAID 0 volume is the same; it is the sum of the capacities of the drives in the set. But because striping distributes the contents of each file among all drives in the set, the failure of any drive causes the entire RAID 0 volume and all files to be lost. In comparison, a spanned volume preserves the files on the unfailing drives. The benefit of RAID 0 is that the throughput of read and write operations to any file is multiplied by the number of drives because, unlike spanned volumes, reads and writes are done concurrently. The cost is increased vulnerability to drive failures—since any drive in a RAID 0 setup failing causes the entire volume to be lost, the average failure rate of the volume rises with the number of attached drives. RAID 1 consists of data mirroring, without parity or striping. Data is written identically to two or more drives, thereby producing a "mirrored set" of drives. Thus, any read request can be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that accesses the data first (depending on its seek time and rotational latency), improving performance. Sustained read throughput, if the controller or software is optimized for it, approaches the sum of throughputs of every drive in the set, just as for RAID 0. Actual read throughput of most RAID 1 implementations is slower than the fastest drive. Write throughput is always slower because every drive must be updated, and the slowest drive limits the write performance. The array continues to operate as long as at least one drive is functioning. RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation is synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code parity is calculated across corresponding bits and stored on at least one parity drive. This level is of historical significance only; although it was used on some early machines (for example, the Thinking Machines CM-2), it is not used by any commercially available system. RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential byte is on a different drive. Parity is calculated across corresponding bytes and stored on a dedicated parity drive. Although implementations exist, RAID 3 is not commonly used in practice. RAID 4 consists of block-level striping with dedicated parity. This level was previously used by NetApp, but has now been largely replaced by a proprietary implementation of RAID 4 with two parity disks, called RAID-DP. The main advantage of RAID 4 over RAID 2 and 3 is I/O parallelism: in RAID 2 and 3, a single read I/O operation requires reading the whole group of data drives, while in RAID 4 one I/O read operation does not have to spread across all data drives. As a result, more I/O operations can be executed in parallel, improving the performance of small transfers. RAID 5 consists of block-level striping with distributed parity. Unlike RAID 4, parity information is distributed among the drives, requiring all drives but one to be present to operate. 
Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost. RAID 5 requires at least three disks. Like all single-parity concepts, large RAID 5 implementations are susceptible to system failures because of trends regarding array rebuild time and the chance of drive failure during rebuild (see "Increasing rebuild time and failure probability" section, below). Rebuilding an array requires reading all data from all disks, opening a chance for a second drive failure and the loss of the entire array. RAID 6 consists of block-level striping with double distributed parity. Double parity provides fault tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems, as large-capacity drives take longer to restore. RAID 6 requires a minimum of four disks. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced. With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5. RAID 10 also minimizes these problems. Nested (hybrid) RAID In what was originally termed hybrid RAID, many storage controllers allow RAID levels to be nested. The elements of a RAID may be either individual drives or arrays themselves. Arrays are rarely nested more than one level deep. The final array is known as the top array. When the top array is RAID 0 (such as in RAID 1+0 and RAID 5+0), most vendors omit the "+" (yielding RAID 10 and RAID 50, respectively). RAID 0+1: creates two stripes and mirrors them. If a single drive failure occurs then one of the mirrors has failed, at this point it is running effectively as RAID 0 with no redundancy. Significantly higher risk is introduced during a rebuild than RAID 1+0 as all the data from all the drives in the remaining stripe has to be read rather than just from one drive, increasing the chance of an unrecoverable read error (URE) and significantly extending the rebuild window. RAID 1+0: (see: RAID 10) creates a striped set from a series of mirrored drives. The array can sustain multiple drive losses so long as no mirror loses all its drives. JBOD RAID N+N: With JBOD (just a bunch of disks), it is possible to concatenate disks, but also volumes such as RAID sets. With larger drive capacities, write delay and rebuilding time increase dramatically (especially, as described above, with RAID 5 and RAID 6). By splitting a larger RAID N set into smaller subsets and concatenating them with linear JBOD, write and rebuilding time will be reduced. If a hardware RAID controller is not capable of nesting linear JBOD with RAID N, then linear JBOD can be achieved with OS-level software RAID in combination with separate RAID N subset volumes created within one, or more, hardware RAID controller(s). Besides a drastic speed increase, this also provides a substantial advantage: the possibility to start a linear JBOD with a small set of disks and to be able to expand the total set with disks of different size, later on (in time, disks of bigger size become available on the market). There is another advantage in the form of disaster recovery (if a RAID N subset happens to fail, then the data on the other RAID N subsets is not lost, reducing restore time). 
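As a rough recap of the trade-offs among the standard and nested levels described above, the helper below computes usable capacity and worst-case fault tolerance for a few common levels, given n identical drives. These are the textbook formulas (n−1 drives' worth of capacity for RAID 5, n−2 for RAID 6, n/2 for RAID 10); real arrays additionally reserve space for metadata and hot spares.

```python
# Usable capacity and worst-case fault tolerance for common RAID levels,
# assuming n identical drives of `size_tb` terabytes. Textbook formulas
# only; metadata overhead, spares and vendor variations are ignored.

def raid_summary(level: str, n: int, size_tb: float):
    if level == "RAID 0":
        usable, tolerated = n * size_tb, 0
    elif level == "RAID 1":                       # n-way mirror of one drive
        usable, tolerated = size_tb, n - 1
    elif level == "RAID 5":
        assert n >= 3
        usable, tolerated = (n - 1) * size_tb, 1
    elif level == "RAID 6":
        assert n >= 4
        usable, tolerated = (n - 2) * size_tb, 2
    elif level == "RAID 10":                      # stripe over two-drive mirrors
        assert n >= 4 and n % 2 == 0
        usable, tolerated = (n // 2) * size_tb, 1  # worst case: both halves of one mirror
    else:
        raise ValueError(level)
    return usable, tolerated

if __name__ == "__main__":
    n, size = 8, 4.0                              # eight 4 TB drives
    for level in ("RAID 0", "RAID 1", "RAID 5", "RAID 6", "RAID 10"):
        usable, tolerated = raid_summary(level, n, size)
        print(f"{level:7s}: {usable:5.1f} TB usable, "
              f"survives {tolerated} worst-case drive failure(s)")
```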
Non-standard levels Many configurations other than the basic numbered RAID levels are possible, and many companies, organizations, and groups have created their own non-standard configurations, in many cases designed to meet the specialized needs of a small niche group. Such configurations include the following: Linux MD RAID 10 provides a general RAID driver that in its "near" layout defaults to a standard RAID 1 with two drives, and a standard RAID 1+0 with four drives; however, it can include any number of drives, including odd numbers. With its "far" layout, MD RAID 10 can run both striped and mirrored, even with only two drives in f2 layout; this runs mirroring with striped reads, giving the read performance of RAID 0. Regular RAID 1, as provided by Linux software RAID, does not stripe reads, but can perform reads in parallel. Hadoop has a RAID system that generates a parity file by xor-ing a stripe of blocks in a single HDFS file. BeeGFS, the parallel file system, has internal striping (comparable to file-based RAID0) and replication (comparable to file-based RAID10) options to aggregate throughput and capacity of multiple servers and is typically based on top of an underlying RAID to make disk failures transparent. Declustered RAID scatters dual (or more) copies of the data across all disks (possibly hundreds) in a storage subsystem, while holding back enough spare capacity to allow for a few disks to fail. The scattering is based on algorithms which give the appearance of arbitrariness. When one or more disks fail the missing copies are rebuilt into that spare capacity, again arbitrarily. Because the rebuild is done from and to all the remaining disks, it operates much faster than with traditional RAID, reducing the overall impact on clients of the storage system. Implementations The distribution of data across multiple drives can be managed either by dedicated computer hardware or by software. A software solution may be part of the operating system, part of the firmware and drivers supplied with a standard drive controller (so-called "hardware-assisted software RAID"), or it may reside entirely within the hardware RAID controller. Hardware-based Configuration of hardware RAID Hardware RAID controllers can be configured through card BIOS or Option ROM before an operating system is booted, and after the operating system is booted, proprietary configuration utilities are available from the manufacturer of each controller. Unlike the network interface controllers for Ethernet, which can usually be configured and serviced entirely through the common operating system paradigms like ifconfig in Unix, without a need for any third-party tools, each manufacturer of each RAID controller usually provides their own proprietary software tooling for each operating system that they deem to support, ensuring a vendor lock-in, and contributing to reliability issues. For example, in FreeBSD, in order to access the configuration of Adaptec RAID controllers, users are required to enable Linux compatibility layer, and use the Linux tooling from Adaptec, potentially compromising the stability, reliability and security of their setup, especially when taking the long term view. 
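Returning to the declustered approach described earlier in this passage: the sketch below scatters two copies of each chunk pseudo-randomly across a larger pool of disks, so that rebuilding a failed disk reads from (and writes into spare space on) nearly every remaining disk at once. The hash-based placement rule and the pool sizes are arbitrary illustrations, not any vendor's actual placement algorithm.

```python
# Toy declustered-mirroring placement: each chunk gets two copies on
# pseudo-randomly chosen disks, so a failed disk's contents are rebuilt
# from, and into spare space on, many disks in parallel. The placement
# rule below is an arbitrary illustration.
import hashlib
from collections import defaultdict

NDISKS, NCHUNKS = 20, 10_000

def placement(chunk_id: int, ndisks: int = NDISKS):
    """Deterministically pick two distinct disks to hold a chunk's copies."""
    h = hashlib.sha256(str(chunk_id).encode()).digest()
    first = h[0] % ndisks
    second = (first + 1 + h[1] % (ndisks - 1)) % ndisks   # always != first
    return first, second

def rebuild_sources(failed_disk: int):
    """Count how many surviving copies each remaining disk contributes."""
    sources = defaultdict(int)
    for chunk in range(NCHUNKS):
        a, b = placement(chunk)
        if failed_disk in (a, b):
            survivor = b if a == failed_disk else a
            sources[survivor] += 1
    return sources

if __name__ == "__main__":
    sources = rebuild_sources(failed_disk=7)
    print(f"rebuild of disk 7 reads from {len(sources)} disks;")
    print(f"per-disk load ranges {min(sources.values())}-{max(sources.values())} chunks")
```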
Some other operating systems have implemented their own generic frameworks for interfacing with any RAID controller, and provide tools for monitoring RAID volume status, as well as facilitation of drive identification through LED blinking, alarm management and hot spare disk designations from within the operating system without having to reboot into card BIOS. For example, this was the approach taken by OpenBSD in 2005 with its bio(4) pseudo-device and the bioctl utility, which provide volume status, and allow LED/alarm/hotspare control, as well as the sensors (including the drive sensor) for health monitoring; this approach has subsequently been adopted and extended by NetBSD in 2007 as well. Software-based Software RAID implementations are provided by many modern operating systems. Software RAID can be implemented as: A layer that abstracts multiple devices, thereby providing a single virtual device (such as Linux kernel's md and OpenBSD's softraid) A more generic logical volume manager (provided with most server-class operating systems such as Veritas or LVM) A component of the file system (such as ZFS, Spectrum Scale or Btrfs) A layer that sits above any file system and provides parity protection to user data (such as RAID-F) Some advanced file systems are designed to organize data across multiple storage devices directly, without needing the help of a third-party logical volume manager: ZFS supports the equivalents of RAID 0, RAID 1, RAID 5 (RAID-Z1) single-parity, RAID 6 (RAID-Z2) double-parity, and a triple-parity version (RAID-Z3) also referred to as RAID 7. As it always stripes over top-level vdevs, it supports equivalents of the 1+0, 5+0, and 6+0 nested RAID levels (as well as striped triple-parity sets) but not other nested combinations. ZFS is the native file system on Solaris and illumos, and is also available on FreeBSD and Linux. Open-source ZFS implementations are actively developed under the OpenZFS umbrella project. Spectrum Scale, initially developed by IBM for media streaming and scalable analytics, supports declustered RAID protection schemes up to n+3. A particularity is the dynamic rebuilding priority which runs with low impact in the background until a data chunk hits n+0 redundancy, in which case this chunk is quickly rebuilt to at least n+1. On top, Spectrum Scale supports metro-distance RAID 1. Btrfs supports RAID 0, RAID 1 and RAID 10 (RAID 5 and 6 are under development). XFS was originally designed to provide an integrated volume manager that supports concatenating, mirroring and striping of multiple physical storage devices. However, the implementation of XFS in Linux kernel lacks the integrated volume manager. Many operating systems provide RAID implementations, including the following: Hewlett-Packard's OpenVMS operating system supports RAID 1. The mirrored disks, called a "shadow set", can be in different locations to assist in disaster recovery. Apple's macOS and macOS Server support RAID 0, RAID 1, and RAID 1+0. FreeBSD supports RAID 0, RAID 1, RAID 3, and RAID 5, and all nestings via GEOM modules and ccd. Linux's md supports RAID 0, RAID 1, RAID 4, RAID 5, RAID 6, and all nestings. Certain reshaping/resizing/expanding operations are also supported. Microsoft Windows supports RAID 0, RAID 1, and RAID 5 using various software implementations. 
Logical Disk Manager, introduced with Windows 2000, allows for the creation of RAID 0, RAID 1, and RAID 5 volumes by using dynamic disks, but this was limited only to professional and server editions of Windows until the release of Windows 8. Windows XP can be modified to unlock support for RAID 0, 1, and 5. Windows 8 and Windows Server 2012 introduced a RAID-like feature known as Storage Spaces, which also allows users to specify mirroring, parity, or no redundancy on a folder-by-folder basis. These options are similar to RAID 1 and RAID 5, but are implemented at a higher abstraction level. NetBSD supports RAID 0, 1, 4, and 5 via its software implementation, named RAIDframe. OpenBSD supports RAID 0, 1 and 5 via its software implementation, named softraid. If a boot drive fails, the system has to be sophisticated enough to be able to boot from the remaining drive or drives. For instance, consider a computer whose disk is configured as RAID 1 (mirrored drives); if the first drive in the array fails, then a first-stage boot loader might not be sophisticated enough to attempt loading the second-stage boot loader from the second drive as a fallback. The second-stage boot loader for FreeBSD is capable of loading a kernel from such an array. Firmware- and driver-based Software-implemented RAID is not always compatible with the system's boot process, and it is generally impractical for desktop versions of Windows. However, hardware RAID controllers are expensive and proprietary. To fill this gap, inexpensive "RAID controllers" were introduced that do not contain a dedicated RAID controller chip, but simply a standard drive controller chip with proprietary firmware and drivers. During early bootup, the RAID is implemented by the firmware and, once the operating system has been more completely loaded, the drivers take over control. Consequently, such controllers may not work when driver support is not available for the host operating system. An example is Intel Rapid Storage Technology, implemented on many consumer-level motherboards. Because some minimal hardware support is involved, this implementation is also called "hardware-assisted software RAID", "hybrid model" RAID, or even "fake RAID". If RAID 5 is supported, the hardware may provide a hardware XOR accelerator. An advantage of this model over the pure software RAID is that—if using a redundancy mode—the boot drive is protected from failure (due to the firmware) during the boot process even before the operating system's drivers take over. Integrity Data scrubbing (referred to in some environments as patrol read) involves periodic reading and checking by the RAID controller of all the blocks in an array, including those not otherwise accessed. This detects bad blocks before use. Data scrubbing checks for bad blocks on each storage device in an array, but also uses the redundancy of the array to recover bad blocks on a single drive and to reassign the recovered data to spare blocks elsewhere on the drive. Frequently, a RAID controller is configured to "drop" a component drive (that is, to assume a component drive has failed) if the drive has been unresponsive for eight seconds or so; this might cause the array controller to drop a good drive because that drive has not been given enough time to complete its internal error recovery procedure. Consequently, using consumer-marketed drives with RAID can be risky, and so-called "enterprise class" drives limit this error recovery time to reduce risk. 
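Returning to the data-scrubbing process described at the start of this passage: a scrub pass reads every block, checks it against the array's redundancy, and rewrites whatever fails the check. The sketch below does this for a toy two-way mirror held in memory, using a stored checksum to decide which copy is good; real controllers operate on raw devices, throttle the scan, and rely on the drive's sector-remapping machinery rather than a simple in-place rewrite.

```python
# Toy scrub pass over a two-way mirror: verify each block against its
# mirror copy (a stored CRC decides which side is good) and rewrite the
# bad copy. Real scrubbing works on raw devices and runs throttled in
# the background.
import zlib

BLOCKS = 8
mirror = [[b"block-%d" % i for i in range(BLOCKS)] for _ in range(2)]
checksums = [zlib.crc32(b) for b in mirror[0]]      # recorded at write time

def scrub() -> int:
    repaired = 0
    for i in range(BLOCKS):
        good = [side for side in (0, 1)
                if zlib.crc32(mirror[side][i]) == checksums[i]]
        if len(good) == 1:                          # one copy has rotted
            bad = 1 - good[0]
            mirror[bad][i] = mirror[good[0]][i]     # rewrite from the good copy
            repaired += 1
        elif not good:
            raise RuntimeError(f"block {i}: both copies bad, data lost")
    return repaired

if __name__ == "__main__":
    mirror[1][3] = b"bit rot"                       # simulate a latent sector error
    print("repaired", scrub(), "block(s)")          # -> repaired 1 block(s)
```

Scrubbing addresses latent sector errors found at rest; the error-recovery-timeout behaviour described just above is a separate concern, handled by drive firmware features such as the one discussed next.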
Western Digital's desktop drives used to have a specific fix. A utility called WDTLER.exe limited a drive's error recovery time. The utility enabled TLER (time limited error recovery), which limits the error recovery time to seven seconds. Around September 2009, Western Digital disabled this feature in their desktop drives (such as the Caviar Black line), making such drives unsuitable for use in RAID configurations. However, Western Digital enterprise class drives are shipped from the factory with TLER enabled. Similar technologies are used by Seagate, Samsung, and Hitachi. For non-RAID usage, an enterprise class drive with a short error recovery timeout that cannot be changed is therefore less suitable than a desktop drive. In late 2010, the Smartmontools program began supporting the configuration of ATA Error Recovery Control, allowing the tool to configure many desktop class hard drives for use in RAID setups. While RAID may protect against physical drive failure, the data is still exposed to operator, software, hardware, and virus destruction. Many studies cite operator fault as a common source of malfunction, such as a server operator replacing the incorrect drive in a faulty RAID, and disabling the system (even temporarily) in the process. An array can be overwhelmed by catastrophic failure that exceeds its recovery capacity and the entire array is at risk of physical damage by fire, natural disaster, and human forces, however backups can be stored off site. An array is also vulnerable to controller failure because it is not always possible to migrate it to a new, different controller without data loss. Weaknesses Correlated failures In practice, the drives are often the same age (with similar wear) and subject to the same environment. Since many drive failures are due to mechanical issues (which are more likely on older drives), this violates the assumptions of independent, identical rate of failure amongst drives; failures are in fact statistically correlated. In practice, the chances for a second failure before the first has been recovered (causing data loss) are higher than the chances for random failures. In a study of about 100,000 drives, the probability of two drives in the same cluster failing within one hour was four times larger than predicted by the exponential statistical distribution—which characterizes processes in which events occur continuously and independently at a constant average rate. The probability of two failures in the same 10-hour period was twice as large as predicted by an exponential distribution. Unrecoverable read errors during rebuild Unrecoverable read errors (URE) present as sector read failures, also known as latent sector errors (LSE). The associated media assessment measure, unrecoverable bit error (UBE) rate, is typically guaranteed to be less than one bit in 1015 for enterprise-class drives (SCSI, FC, SAS or SATA), and less than one bit in 1014 for desktop-class drives (IDE/ATA/PATA or SATA). Increasing drive capacities and large RAID 5 instances have led to the maximum error rates being insufficient to guarantee a successful recovery, due to the high likelihood of such an error occurring on one or more remaining drives during a RAID set rebuild. When rebuilding, parity-based schemes such as RAID 5 are particularly prone to the effects of UREs as they affect not only the sector where they occur, but also reconstructed blocks using that sector for parity computation. 
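The rebuild risk just described can be put into rough numbers. Assuming the quoted unrecoverable bit error rates and treating errors as independent, the probability of hitting at least one URE while reading every surviving drive is 1 − (1 − p)^bits. The figures below are back-of-the-envelope estimates under those assumptions, not measurements.

```python
# Back-of-the-envelope chance of hitting at least one unrecoverable read
# error (URE) while reading all surviving drives during a RAID 5 rebuild.
# Assumes the quoted UBE rates and independent errors - both simplifications.
import math

def p_ure_during_rebuild(drives: int, tb_per_drive: float,
                         bits_per_ure: float) -> float:
    bits_read = (drives - 1) * tb_per_drive * 1e12 * 8   # read every survivor
    return -math.expm1(bits_read * math.log1p(-1.0 / bits_per_ure))

if __name__ == "__main__":
    for label, rate in (("desktop-class, 1 per 1e14 bits", 1e14),
                        ("enterprise-class, 1 per 1e15 bits", 1e15)):
        p = p_ure_during_rebuild(drives=6, tb_per_drive=4.0, bits_per_ure=rate)
        print(f"6 x 4 TB RAID 5 rebuild, {label}: ~{p:.0%} chance of a URE")
```

With six 4 TB drives, the desktop-class rate gives roughly an 80% chance of encountering a URE during a full rebuild, which is why large single-parity arrays built from such drives are considered risky.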
Double-protection parity-based schemes, such as RAID 6, attempt to address this issue by providing redundancy that allows double-drive failures; as a downside, such schemes suffer from elevated write penalty—the number of times the storage medium must be accessed during a single write operation. Schemes that duplicate (mirror) data in a drive-to-drive manner, such as RAID 1 and RAID 10, have a lower risk from UREs than those using parity computation or mirroring between striped sets. Data scrubbing, as a background process, can be used to detect and recover from UREs, effectively reducing the risk of them happening during RAID rebuilds and causing double-drive failures. The recovery of UREs involves remapping of affected underlying disk sectors, utilizing the drive's sector remapping pool; in case of UREs detected during background scrubbing, data redundancy provided by a fully operational RAID set allows the missing data to be reconstructed and rewritten to a remapped sector. Increasing rebuild time and failure probability Drive capacity has grown at a much faster rate than transfer speed, and error rates have only fallen a little in comparison. Therefore, larger-capacity drives may take hours if not days to rebuild, during which time other drives may fail or yet undetected read errors may surface. The rebuild time is also limited if the entire array is still in operation at reduced capacity. Given an array with only one redundant drive (which applies to RAID levels 3, 4 and 5, and to "classic" two-drive RAID 1), a second drive failure would cause complete failure of the array. Even though individual drives' mean time between failure (MTBF) have increased over time, this increase has not kept pace with the increased storage capacity of the drives. The time to rebuild the array after a single drive failure, as well as the chance of a second failure during a rebuild, have increased over time. Some commentators have declared that RAID 6 is only a "band aid" in this respect, because it only kicks the problem a little further down the road. However, according to the 2006 NetApp study of Berriman et al., the chance of failure decreases by a factor of about 3,800 (relative to RAID 5) for a proper implementation of RAID 6, even when using commodity drives. Nevertheless, if the currently observed technology trends remain unchanged, in 2019 a RAID 6 array will have the same chance of failure as its RAID 5 counterpart had in 2010. Mirroring schemes such as RAID 10 have a bounded recovery time as they require the copy of a single failed drive, compared with parity schemes such as RAID 6, which require the copy of all blocks of the drives in an array set. Triple parity schemes, or triple mirroring, have been suggested as one approach to improve resilience to an additional drive failure during this large rebuild time. Atomicity A system crash or other interruption of a write operation can result in states where the parity is inconsistent with the data due to non-atomicity of the write process, such that the parity cannot be used for recovery in the case of a disk failure. This is commonly termed the RAID 5 write hole. The RAID write hole is a known data corruption issue in older and low-end RAIDs, caused by interrupted destaging of writes to disk. The write hole can be addressed with write-ahead logging. This was fixed in mdadm by introducing a dedicated journaling device (to avoid performance penalty, typically, SSDs and NVMs are preferred) for that purpose. 
This is a little understood and rarely mentioned failure mode for redundant storage systems that do not utilize transactional features. Database researcher Jim Gray wrote "Update in Place is a Poison Apple" during the early days of relational database commercialization. Write-cache reliability There are concerns about write-cache reliability, specifically regarding devices equipped with a write-back cache, which is a caching system that reports the data as written as soon as it is written to cache, as opposed to when it is written to the non-volatile medium. If the system experiences a power loss or other major failure, the data may be irrevocably lost from the cache before reaching the non-volatile storage. For this reason good write-back cache implementations include mechanisms, such as redundant battery power, to preserve cache contents across system failures (including power failures) and to flush the cache at system restart time. See also Network-attached storage (NAS) Non-RAID drive architectures Redundant array of independent memory S.M.A.R.T. References External links "Empirical Measurements of Disk Failure Rates and Error Rates", by Jim Gray and Catharine van Ingen, December 2005 The Mathematics of RAID-6, by H. Peter Anvin Does Fake RAID Offer Any Advantage Over Software RAID? – Discussion on superuser.com Comparing RAID Implementation Methods – Dell.com BAARF: Battle Against Any Raid Five (RAID 3, 4 and 5 versus RAID 10) A Clean-Slate Look at Disk Scrubbing
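Relating to the RAID 5 write hole discussed in the Atomicity section above, the sketch below shows the failure in miniature: a crash between updating a data chunk and its parity leaves the stripe inconsistent, while replaying a simple write-ahead journal restores consistency. This is a conceptual illustration only, not how mdadm's journal device is actually formatted.

```python
# Miniature illustration of the RAID 5 "write hole": a crash between the
# data update and the parity update leaves parity inconsistent with the
# stripe. A write-ahead journal closes the hole by letting the update be
# replayed after the crash. Conceptual only.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

data = [b"AAAA", b"BBBB"]          # two data chunks of one stripe
parity = xor(data[0], data[1])     # initially consistent parity
journal = []

def write_chunk(idx: int, new: bytes, crash_before_parity: bool):
    global parity
    journal.append((idx, new))                 # 1. log the intent first
    data[idx] = new                            # 2. update the data chunk
    if crash_before_parity:
        return                                 # power loss here -> write hole
    parity = xor(data[0], data[1])             # 3. update the parity chunk
    journal.clear()                            # 4. retire the journal entry

def replay_journal():
    global parity
    for idx, new in journal:                   # redo any logged, unfinished writes
        data[idx] = new
        parity = xor(data[0], data[1])
    journal.clear()

if __name__ == "__main__":
    write_chunk(0, b"CCCC", crash_before_parity=True)
    print("consistent after crash? ", parity == xor(data[0], data[1]))  # False
    replay_journal()
    print("consistent after replay?", parity == xor(data[0], data[1]))  # True
```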
4464839
https://en.wikipedia.org/wiki/Intego
Intego
Intego is a Mac and Windows security software company founded in 1997 by Jean-Paul Florencio and Laurent Marteau. The company creates Internet security software for Mac OS X and macOS and Windows, including: antivirus, firewall, anti-spam, backup software and data protection software. Intego currently has offices in the U.S. in Seattle, Washington, and Austin, Texas, and international offices in Paris, France, and Nagano, Japan. All of Intego's products are universal binaries, and are supported in several languages: English, French, Japanese, German, Spanish and Italian. History Co-founded by former CEO Laurent Marteau and Jean-Paul Florencio and based in Paris, France, Intego released its first antivirus product in 1997: Rival, an antivirus for Mac OS 8. Two years later in July 1999, Intego released NetBarrier, the first personal security software suite for Mac OS 8. Then in October 2000, Intego released its legacy antivirus software, VirusBarrier 1.0, for Mac OS 8 and Mac OS 9. Intego launched The Mac Security Blog, a blog that covers Mac security news, Apple security updates, Mac malware alerts, as well as news and opinion pieces related to Apple products, in mid-2007. The company launched a podcast in October 2017, called the Intego Mac Podcast. Intego released its current X9 version of antivirus and security software in June 2016. In July 2018, Kape Technologies announced the acquisition of Intego to "enhance [their] arsenal of products in cyber protection for iOS." In September 2020, Intego also released a Windows version of the security and antivirus suite. Products VirusBarrier FileGuard Personal Antispam Personal Backup Internet Security Barrier ContentBarrier Remote Management Console See also Antivirus Internet security Comparison of antivirus software Comparison of computer viruses References External links Software companies based in Seattle Software companies of France Computer security software companies Companies established in 1997 Macintosh software companies Antivirus software Software companies of the United States 1997 establishments in the United States Software companies established in 1997
66279348
https://en.wikipedia.org/wiki/Sourcetrail
Sourcetrail
Sourcetrail is a free and open-source source code explorer that provides interactive dependency graphs and supports multiple programming languages, including C, C++, Java and Python. History The project was started by Eberhard Gräther after an internship at Google, where he worked on Google Chrome and spent about a month implementing a simple feature that he had expected to take 1–2 hours. The difficulty new developers face in understanding large code bases motivated him to develop a tool that helps in understanding the consequences of source code modifications. The project began as a commercial product in 2016 under the name Coati. In November 2019, Sourcetrail was released as open-source software under the GNU General Public License v3.0. Concept Most of a programmer's time is spent reading source code. The goal of Sourcetrail is therefore to help developers understand the source code and the relationships between its components. Sourcetrail builds a dependency graph after indexing the source code files and provides a graphical overview of the source code. It is built in an extensible way, so that support for additional programming languages can be added. See also Software visualization References External links Sourcetrail home page Sourcetrail source code (GitHub) Software Visualization software Static program analysis tools Software metrics Infographics Software maintenance Software development Software quality Source code
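The dependency-graph idea described in the Concept section above can be illustrated on a very small scale: the sketch below indexes the Python files in a directory and records which module imports which, using the standard-library ast module. It is a toy stand-in for what an indexer such as Sourcetrail does across several languages, not Sourcetrail's actual implementation.

```python
# Toy module-level dependency graph for Python sources, in the spirit of
# (but far simpler than) what a source-code indexer builds. Only plain
# `import X` / `from X import Y` statements are recorded.
import ast
import pathlib
from collections import defaultdict

def dependency_graph(root: str) -> dict:
    graph = defaultdict(set)
    for path in pathlib.Path(root).rglob("*.py"):
        module = path.stem
        deps = graph[module]             # register the module even if it has no imports
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                deps.update(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module)
    return graph

if __name__ == "__main__":
    for module, deps in sorted(dependency_graph(".").items()):
        print(f"{module} -> {', '.join(sorted(deps)) or '(no imports)'}")
```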
38309941
https://en.wikipedia.org/wiki/Satya%20Nadella
Satya Nadella
Satya Narayana Nadella (, ; born 19 August 1967) is an Indian-born American business executive. He is the executive chairman and CEO of Microsoft, succeeding Steve Ballmer in 2014 as CEO and John W. Thompson in 2021 as chairman. Before becoming CEO, he was the executive vice president of Microsoft's cloud and enterprise group, responsible for building and running the company's computing platforms. Early life Nadella was born in Hyderabad of present-day Telangana, India into a Telugu-speaking Hindu family. His mother Prabhavati was a Sanskrit lecturer and his father, Bukkapuram Nadella Yugandhar, was an Indian Administrative Service officer of the 1962 batch. Nadella attended the Hyderabad Public School, Begumpet before receiving a bachelor's in electrical engineering from the Manipal Institute of Technology in Karnataka in 1988. Nadella then traveled to the U.S. to study for an M.S. in computer science at the University of Wisconsin–Milwaukee, receiving his degree in 1990. Later, he received an MBA from the University of Chicago Booth School of Business in 1997. Career Sun Microsystems Nadella worked at Sun Microsystems as a member of its technology staff before joining Microsoft in 1992. Microsoft At Microsoft, Nadella has led major projects that included the company's move to cloud computing and the development of one of the largest cloud infrastructures in the world. Nadella worked as the senior vice-president of research and development (R&D) for the Online Services Division and vice-president of the Microsoft Business Division. Later, he was made the president of Microsoft's $19 billion Server and Tools Business and led a transformation of the company's business and technology culture from client services to cloud infrastructure and services. He has been credited for helping bring Microsoft's database, Windows Server and developer tools to its Azure cloud. The revenue from Cloud Services grew to $20.3 billion in June 2013 from $16.6 billion when he took over in 2011. He received $84.5 million in 2016 pay. In 2013, Nadella's base salary was reportedly $669,167. Including stock bonuses, the total compensation stood around $7.6 million. Previous positions held by Nadella include: President of the Server & Tools Division (9 February 2011 – February 2014) Senior Vice-president of Research and Development for the Online Services Division (March 2007 – February 2011) Vice-president of the Business Division Corporate Vice-president of Business Solutions and Search & Advertising Platform Group Executive Vice-president of Cloud and Enterprise group On 4 February 2014, Nadella was announced as the new CEO of Microsoft, the third CEO in the company's history, following Bill Gates and Steve Ballmer. In October 2014, Nadella attended an event on Women in Computing and courted controversy after he made a statement that women should not ask for a raise and should trust the system. Nadella was criticised for the statement and he later apologized on Twitter. He then sent an email to Microsoft employees admitting he was "Completely wrong." Nadella's tenure at Microsoft has emphasized working with companies and technologies with which Microsoft also competes, including Apple Inc., Salesforce, IBM, and Dropbox. In contrast to previous Microsoft campaigns against the Linux operating system, Nadella proclaimed that "Microsoft ❤️ Linux", and Microsoft joined the Linux Foundation as a Platinum member in 2016. 
Under Nadella, Microsoft revised its mission statement to "empower every person and every organization on the planet to achieve more". He orchestrated a cultural shift at Microsoft by emphasizing empathy, collaboration, and 'growth mindset'. He has transformed Microsoft's corporate culture into one that emphasizes continual learning and growth. In 2014, Nadella's first acquisition with Microsoft's was of Mojang, a Swedish game company best known for the computer game Minecraft, for $2.5 billion. He followed that by purchasing Xamarin for an undisclosed amount. He oversaw the purchase of professional network LinkedIn in 2016 for $26.2 billion. On October 26, 2018, Microsoft acquired GitHub for US$7.5 billion. Since Nadella became CEO, Microsoft stock had tripled by September 2018, with a 27% annual growth rate. Boards and committees Board of Directors, Starbucks Board of Trustees, Fred Hutchinson Cancer Research Center Board of Trustees, University of Chicago Awards and recognition In 2018, he was a Time 100 honoree. In 2019, Nadella was named Financial Times Person of the Year and Fortune magazine Businessperson of the Year. In 2020, Nadella was recognized as Global Indian Business Icon at CNBC-TV18's India Business Leader Awards in Mumbai. In 2022, Nadella was awarded Padma Bhushan, the third highest civilian award in India by the Government of India. Personal life In 1992, Nadella married Anupama, the daughter of his father's IAS batchmate. She was his junior at Manipal pursuing a B.Arch in the Faculty of Architecture. The couple have three children, a son and two daughters, and live in Clyde Hill and Bellevue, Washington. His son Zain was a legally blind quadriplegic with cerebral palsy. Zain died in February 2022, at the age of 26. Nadella is an avid reader of American and Indian poetry. He also nurses a passion for cricket, having played on his school team. Nadella and his wife Anupama are part of the ownership group of Seattle Sounders FC, a Major League Soccer club. Nadella has authored a book titled Hit Refresh that explores his life, his career in Microsoft and how he believes technology will shape the future. He announced that the profits from the book would go to Microsoft Philanthropies and through that to nonprofit organizations. Publications Hit Refresh: The quest to rediscover Microsoft's soul and imagine a better future for everyone, 2017. (audiobook ) References External links Microsoft bio Forbes Profile Microsoft employees 1967 births Living people Telugu people Engineers from Andhra Pradesh Businesspeople in software University of Chicago alumni Manipal Academy of Higher Education alumni American chief executives Indian chief executives Indian emigrants to the United States University of Wisconsin–Milwaukee alumni Businesspeople from Hyderabad, India University of Chicago Booth School of Business alumni American technology chief executives 20th-century Indian engineers 21st-century Indian engineers People from Clyde Hill, Washington People from Bellevue, Washington Recipients of Pravasi Bharatiya Samman 20th-century American businesspeople 21st-century American businesspeople American businesspeople of Indian descent American computer businesspeople American chief executives of Fortune 500 companies Chief executives in the technology industry People from Andhra Pradesh People from Telangana People from Hyderabad, India American people of Telugu descent Recipients of the Padma Bhushan in trade and industry
69269617
https://en.wikipedia.org/wiki/UKUI%20%28desktop%20environment%29
UKUI (desktop environment)
UKUI (Ubuntu Kylin User Interface) is a desktop environment for Linux distributions and other UNIX-like operating systems, originally developed for Ubuntu Kylin, a Linux distribution that is one of Ubuntu's official flavors. UKUI is a fork of the MATE Desktop Environment. It is a lightweight desktop environment that consumes few resources and works on older computers. It has been developed with GTK and Qt technologies. Its visual appearance is similar to Windows 7, making it more approachable for new users of GNU/Linux. Gallery References Free desktop environments Ubuntu derivatives
22441046
https://en.wikipedia.org/wiki/Concrete%20CMS
Concrete CMS
Concrete CMS (formerly concrete5) is an open-source content management system (CMS) for publishing content on the World Wide Web and intranets. Concrete CMS is designed for ease of use, for users with a minimum of technical skills. It enables users to edit site content directly from the page. It provides version management for every page, similar to wiki software, another type of web site development software. Concrete CMS allows users to edit images through an embedded editor on the page. As of 2021, there are over 62,000 live websites that are built with Concrete CMS. Code Concrete CMS code is based on model–view–controller architecture and object-oriented programming. Some core features are: integrated server caching, developer API, version tracking system and search engine optimization. Features Concrete CMS features in-context editing (the ability to edit website content directly on the page, rather than in an administrative interface or using web editor software). Editable areas are defined in Concrete templates which allow editors to insert blocks of content. These can contain simple content (text and images) or have more complex functionality, for example image slideshows, comments systems, lists of files, and maps. Other addons can be installed from the Concrete Marketplace to extend the range of blocks available for insertion. Websites running Concrete can be connected to the Concrete repository, allowing automatic upgrading of the core software and of any addons downloaded or purchased from the Marketplace. Licensing Concrete CMS is free and open-source (FOSS) under the MIT software license. History Development of Concrete CMS began in 2003 as a rapid-design approach to building the now-defunct LewisAndClark200.org, the official site for the Ad Council's National Council for the Lewis & Clark Bicentennial. The version 7 release of concrete was incompatible with prior versions. All prior versions became known as the legacy branch, with the final version being 5.6.4.0 before reaching end of life status on 24 August 2019. Modern Concrete (version 7 and above) features a new user interface, a user-accessible data model called Express, an extensive permissions model, and in-context WYSIWYG editing. Regular updates and security patches have been released. Awards and recognition SourceForge's "Project of the Month" October 2008. In the 2010 Open Source CMS Market Share Report, Concrete had: Growth of developer share: the largest growth of any system in the Developer Support metric Installations: Though Concrete5 was only #12 in weekly downloads, they showed the most year-on-year improvement of any system in the survey - up 517%. The system also came in at #3 in total installations, as per the survey—up from #10 last year. Installation data from the survey was also validated by the BuiltWith data which showed Concrete5 in 7th position Third Party Support: Concrete5 showed a second year of strong growth in the Developer Support metric. Search Engine Visibility: The project site showed the largest gain of any system in the survey for the second year running. Project Site Popularity: Concrete5 showed the second largest increase in Alexa Internet rank. Mindshare: Concrete5 placed last in Brand Familiarity in 2009; in 2010, the system came in 6th. Reputation: Concrete5 led the group in brand sentiment, abandonment and product preference, and came in second in conversion rate. 
In the 2011 Open Source CMS Market Share Report, though Concrete5 still lagged Drupal, Joomla and WordPress in terms of total market share, it had the highest growth. As of 1 Jan 2015, Concrete5 was ranked 33rd overall by W3Techs.com, representing 0.2% of the CMS marketplace including open-source and commercial offerings. As of Jan. 21, 2016, Concrete5 Picked for U.S. Army MWR Unified Web Presence As of Nov. 3, 2021, Concrete CMS was listed as the 15th most popular on the Entire Internet in Open Source with 60,236 live websites and 55,855 domains redirecting to those sites. See also List of content management systems References External links Documentation Free content management systems Free software programmed in PHP Blog software Software using the MIT license
6134758
https://en.wikipedia.org/wiki/Ardence
Ardence
Ardence, a Citrix Company, was a software company headquartered in Waltham, Massachusetts with representatives in Washington, D.C.; Virginia Beach, VA; Chicago, IL; Denton, TX; and in Europe, the Middle East, Africa and India. It developed a software-streaming product and an embedded OEM development platform. It was founded in 1980 as VenturCom. On December 20, 2006, Citrix Systems Inc. announced an agreement to acquire Ardence. In 2008, some former Ardence executives acquired the Ardence programs from Citrix and formed IntervalZero. History VenturCom was founded in 1980, by Marc H. Meyer, Doug Mook, Bill Spencer and Myron Zimmerman. The company changed its name to Ardence in 2004. On December 20, 2006, Citrix Systems Inc. announced an agreement to acquire Ardence. In 2008, a group of former Ardence executives founded IntervalZero and acquired the Ardence embedded software business from Citrix. Citrix retained a minority ownership the firm. Products The enterprise software-streaming product deployed Microsoft Windows, SUSE Linux, Red Hat Linux and Turbolinux operating systems, along with all their applications, on demand from networked storage. It allowed any x86-based computer - desktop, server, or device - to be provisioned, or re-provisioned from bare metal. The core technology behind the software streaming product was a device driver for the selected operating system, which mounts a virtual disk over a custom UDP protocol. Basically, computers were configured to netboot a kernel that contained this device driver. Awards In 2005, Ardence won the ComputerWorld Horizon Award. In 2006, Ardence won the CRN Best In Show Award at IBM PartnerWorld. See also Venix References External links Research report on Ardence by infoworld Software-Streaming in Education Computer World Article on Software-Streaming Citrix Systems Software companies based in Massachusetts Software companies established in 1980 Defunct software companies of the United States 2006 mergers and acquisitions Software companies disestablished in 2008 2008 disestablishments in Massachusetts 1980 establishments in Massachusetts
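The Products section above describes mounting a virtual disk over a custom UDP protocol. As a purely conceptual sketch of that idea — not Ardence's protocol, whose wire format is not documented here — the server below answers fixed-size block read requests from a raw disk-image file over UDP; the 8-byte request format, the image path and the port are invented placeholders.

```python
# Conceptual sketch of serving disk blocks over UDP, loosely in the
# spirit of streaming a boot image from networked storage. The request
# format (an 8-byte big-endian block number), image path and port are
# invented placeholders, not the protocol of any real product.
import socket
import struct

BLOCK_SIZE = 512
IMAGE_PATH = "disk.img"            # placeholder path to a raw disk image
LISTEN = ("0.0.0.0", 9999)

def serve():
    with open(IMAGE_PATH, "rb") as image, \
         socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(LISTEN)
        while True:
            request, client = sock.recvfrom(8)
            (block_no,) = struct.unpack("!Q", request)   # 8-byte block number
            image.seek(block_no * BLOCK_SIZE)
            sock.sendto(image.read(BLOCK_SIZE), client)  # reply with one block

def read_block(block_no: int, server=("127.0.0.1", 9999)) -> bytes:
    """Client side: fetch a single block from the server."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(2.0)
        sock.sendto(struct.pack("!Q", block_no), server)
        block, _ = sock.recvfrom(BLOCK_SIZE)
        return block

if __name__ == "__main__":
    serve()   # run the server; call read_block() from another process
```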
31509116
https://en.wikipedia.org/wiki/AllClear%20ID
AllClear ID
AllClear ID (aka AllClear and formerly Debix) provides products and services that help protect people and their personal information from threats related to identity theft. AllClear ID is led by technology and customer service teams who advise companies on data breach response and support consumers who have become victims of identity theft. Data breach response services The breach response services from AllClear ID include notification, call center & customer support, and identity protection products. Notification provides access to identity protection services. The call center provides a team experienced in managing the anxiety of breach victims to answer questions about the incident, reassure individuals, and explain the identity protection services offered. Products are available to mitigate risk from different types of breaches including compromised credit cards, passwords, health information, and Social Security numbers. AllClear ID has worked with large companies to manage sensitive and highly-visible breach responses including The Home Depot, P.F. Chang's, Michael's/Aaron Brothers, The UPS Store, Dairy Queen, Albertson's/SuperValu, and Anthem BCBS. Consumer identity protection products AllClear ID offers multiple products for various protection needs. AllClear Pro includes a $1 million ID Theft insurance policy, secure phone alerts, and identity theft and credit monitoring. AllClear Pro was ranked second overall in Javelin's Fifth Annual ID Protection Services Scorecard, with Identity Guard placing first. Child identity theft research In April 2011 AllClear ID released a report with Richard Power, a distinguished fellow at Carnegie Mellon University CyLab, on the prevalence of child ID theft. Using the data supplied by AllClear ID, Power completed the largest report ever done on child identity theft. From the database of over 40,000 children, Power found that 4,311 had someone else using their Social Security numbers. The Today Show led a follow-up investigation, interviewing victims of child identity theft. Investigators found the thieves who were still living and working using a child's Social Security number. In July 2011, CEO Bo Holland, along with leaders from the Social Security Administration, Identity Theft 911, The Identity Theft Resource Center, and more, spoke at Stolen Futures, the FTC forum on Child Identity Theft. There he presented the findings from the CyLab report on child identity theft, as well as findings from follow up data sampling since the report release. In May 2012, AllClear ID released a follow-up report on child ID theft data involving 27,000 minors. This report further confirmed the growing problem of child identity theft, indicating that children were targeted at a rate 35 times greater than that of adults OnCall Credit Monitoring Debix's OnCall Credit Monitoring uses patented technology to offer credit report monitoring to its customers. This means that whenever someone applies for credit in a customer's name, a confirmation call is placed. Once called, the Debix customer can either confirm the credit request or report it as fraud. If fraud is reported, the customer is connected to a fraud investigator. Consumers can use credit monitoring services to monitor credit history including new accounts and inquires into opening new accounts, to verify whether the transactions are fraudulent or not. 
Awards and recognition 2010 - Debix was recognized as an AlwaysOn Global 250 winner "signifying leadership amongst its peers and game-changing approaches and technologies that are likely to disrupt existing markets and entrenched players in the Global Silicon Valley". August 2011 - Awarded "Best in Resolution" by Javelin Strategy & Research February 2012 - Awarded 5 Stevie Awards for Sales & Customer Service: Customer Service Department of the Year, Contact Center of the Year, Best Use of Technology in Customer Service, Front-Line Customer Service Professional of the Year (Investigator Christy McCarley), Customer Service Leader of the Year (VP of Customer Services & Chief Investigator Jamie May) February 2013 - Awarded 5 Stevie Awards for Customer Service: Contact Center of the Year, Best Use of Technology in Customer Service, Front-Line Customer Service Professional of the Year, Contact Center Manager of the Year, and Customer Service Department of the Year. February 2014 - Awarded 5 Stevie Awards for Customer Service: Young Customer Service Professional of the Year, Customer Service Department of the Year, Innovation in Customer Service, Contact Center of the Year, Customer Service Professional of the Year. In a recent study, AllClear ID Pro was ranked second overall, with Identity Guard placing first. In the category of Restoration, AllClear ID tied for first alongside Identity Force and Royal. History 2004: Founded by Bo Holland, originally named Debix, Inc. After working in the financial industry, Holland used his knowledge of how banks and institutions handled credit requests to create Debix's identity protection network. Holland was previously founder and CEO of Works, Inc., which was acquired by Bank of America in 2005. Works is an electronic payment solutions provider, and Holland invented the patent-pending technology that enables large organizations to approve and control payments for operating expenses via credit cards. April 2011: Carnegie Mellon CyLab and AllClear ID released "Child Identity Theft" research reporting that child identity theft is a faster-growing crime than adult identity theft April 2011: Debix introduced AllClear ID, the first free identity theft protection service for families. AllClear ID offers a free service which monitors data for stolen personal information and provides free identity repair in addition to a premium product. May 2011: Partnered with Sony for PlayStation Network outage in April In July 2011, Debix was granted U.S. Patent No. 7,983,979 for multi-band, multi-factor authentication design. July 2011: Bo Holland presents Child Identity Theft research to Federal Trade Commission March 2012: Debix company name changed to AllClear ID, Inc. May 2012: Released "Child Identity Theft" research reporting "Criminals are targeting the youngest children. 15% of victims were five years old and younger, an increase of 105% over the 2011 findings" August 2014: AllClear ID Plus offered to victims of the Home Depot Credit Card breach of 2014. February 2015: AllClear ID Secure and Pro offered to victims of the Anthem Inc. data breach of 2015. January 2018: AllClear ID offered to victims of the Guaranteed Rate Data Security Breach of September 14, 2017. April 2018: AllClear ID offered to Delta Air Lines victims of the [24]7.ai data breach in September–October 2017. April 2018: Massachusetts State Tax Department/ Child Support Division exposed private data of 6,100 people due to an apparent coding error in the COMETS HD system by the company Accenture. 
The software vendor covers all cost of AllClear ID to the affected people for 24 months. References Internet fraud Identity theft Companies based in Austin, Texas Companies established in 2004
16983639
https://en.wikipedia.org/wiki/NebuAd
NebuAd
NebuAd was an American online advertising company based in Redwood City, California, with offices in New York and London and was funded by the investment companies Sierra Ventures and Menlo Ventures. It was one of several companies which originally developed behavioral targeting advertising systems, and sought deals with ISPs to enable them to analyse customer's websurfing habits in order to provide them with more relevant, micro-targeted advertising. Phorm was a similar company operating out of Europe. Adzilla and Project Rialto also appear to be developing similar systems. At one point, NebuAd had signed up more than 30 customers, mostly Internet access providers, its agreements with providers covered 10 percent of the broadband users in America. Due to fallout following public and Congressional concern, NebuAd's largest ISP customers pulled out. NebuAd closed for business in the UK in August 2008, followed by the US in May 2009. NebuAd UK Ltd was dissolved in February 2010. Overview NebuAd's platform comprised three main parts: hardware, hosted within an ISP, capable of inserting content into pages, an off-site server complex to analyse and categorise the contents of users' Internet communications, and relationships with advertising networks willing to present NebuAd's targeted advertising. The system consisted of hardware device installed within an ISP client network. Each device was capable of monitoring up to 50,000 users. Users could "opt-out" of NebuAd’s information collection and targeted ads, but there was no way for users to prevent ISPs from sending the data to NebuAd in the first place. Since ISPs route customers' traffic, it is an important vantage point from which to monitor all traffic to-and-from a consumer using Deep packet inspection (DPI). By analysing the traffic, NebuAd reported it gained more information about a customers' particular interests, than less intrusive methods. NebuAd's privacy policy claimed they "specifically not store or use any information relating to confidential medical information, racial or ethnic origins, religious beliefs, or sexuality, which are tied to personally identifiable information ('sensitive personal information')." It also advises, "The information we collect is stored and processed on NebuAd's servers in the United States. As a result, that information may be subject to access requests by governments, courts or law enforcement." At least 2 customers of a middle America ISP, WOW! noticed unexpected cookies appearing for sites such as nebuad.adjuggler.com, after using Google, which were being read and written, but when WOW's support department was contacted, WOW initially denied responsibility for the activity. After noticing problems with Google loading slowly, and the creation of these non-Google cookies, one customer spent hours trying to disinfect his machine, as he incorrectly thought it had been infected with spyware, but, when this proved ineffective, he resorted to reinstalling his machine's OS from scratch, only to discover the problem did not go away. On July 9, 2008 WOW suspended the use of NebuAd services to its subscribers. According to NebuAd's sales, less than 1% of users opt-out. One ISP expected to earn at least $2.50 per month for each user. NebuAd bought impressions from ad networks including Valueclick. NebuAd argued that behavioral targeting enriches the Internet on several fronts. 
Firstly, website owners are offered an improved click-through rate (CTR), which could increase profits, or reduce the amount of page-space dedicated to advertising. Owners of previously thought ad-unfriendly websites were offered a chance to make money not on the subject matter of their website, but on the interests of their visitors. Advertisers were offered better targeted adverts, hence reducing the "scattergun approach" (publishing as many ads as possible, in the hope of catching a client) and users were offered more relevant adverts. ISPs were paid for allowing NebuAd access to their network on a per-user per-active profile basis. NebuAd used data such as Web search terms, page views, page and ad clicks, time spent on specific sites, zip code, browser info and connection speed to categorise a user's interests. NebuAd did not have access to user identification information from the ISP, but may have been able to discover this through traffic monitoring (for example, email traffic may tie an email address to an ip address). Bob Dykes, the NebuAd CEO claimed in 2008; "We have 800 [consumer interest segments] today and we're expanding that to multiple thousands". Controversies Generally, NebuAd provided an additional revenue to network operators, which may maintain or lower consumers' Internet access bills. Critics of DPI and targeted advertising believe the raw content of their internet communications are entrusted to the ISP for handling without being inspected, or modified, nor for sale. Privacy advocates criticize the lack of disclosure which some ISPs provided, prior to partnering with NebuAd, was a weak opt-out method, the lack of oversight over what any third-party company does with the contents of Internet communications, its conflicts with United States wiretap laws, and the company's refusal to name its partner ISPs. Consumer notification In February 2008, one American cable operator, Wide Open West (WOW) started rolling out NebuAd. The roll-out was completed in the first week of March 2008. WOW updated its terms and conditions to include a mention of NebuAd, and in some cases informed customers of the terms having been updated. However, customers were not explicitly notified about NebuAd until later, sometime after the third week of March 2008. In response to an inquiry from members of the United States House of Representatives Telecommunications Subcommittee about its pilot test of NebuAd's services, Embarq said it had notified consumers by revising its privacy policy 2 weeks prior to sending its users' data streams to NebuAd. A Knology user in Knoxville, Tennessee reported she was not notified her Internet use was being monitored. In May 2008, Charter Communications announced it planned to monitor websites visited by its customers via a partnership with NebuAd. But after customers voiced their concerns, Charter changed its mind in June. Friction between ISP staff and management Plans to implement NebuAd did not agree with some ISP's employees, including one employee was planned to re-route his traffic to avoid NebuAd's Deep Packet Inspection hardware, altogether. Opt out vs. 
Members of the US Congress Ed Markey, chairman of the House Subcommittee on Telecommunications and the Internet, and Joe Barton, ranking member of the House Committee on Energy and Commerce, argued that such services must be opt-in only to comply with the provisions laid down by Section 631 of the US Communications Act, and they wrote to Charter to request that it suspend the test: "We respectfully request that you do not move forward on Charter Communications' proposed venture with NebuAd until we have an opportunity to discuss with you issues raised by this proposed venture." A writer for Wired News questioned whether Charter users could really opt out of being monitored, or whether they were able to opt out only of receiving targeted ads. The same writer asked whether the system would breach anti-wiretapping laws. An engineer who examined the system confirmed that there was no way to opt out of NebuAd's monitoring: all inbound and outbound information was intercepted and sent to NebuAd's off-site servers to be processed. Even if a user had opted out of the service, this did not prevent the ISP from sending the data to NebuAd. Use of packet forgery and browser exploits A report by Robert M. Topolski, chief technology consultant of Free Press and Public Knowledge, showed that NebuAd's devices created cookies on end users' machines by injecting a specious packet at the end of the data stream returned in response to some web page requests submitted to search engines, including Google and Yahoo. The content of this specious packet, which was appended to the end of the web page when it was rendered by the end user's browser, contained HTML script tags which caused the browser to request JavaScript from HugeDomains.com.
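As a rough, hypothetical illustration of the symptom described in the report (and not the report's actual methodology), the Python sketch below fetches a page and reports anything found after the closing </html> tag, in particular third-party script sources; the URL is a placeholder.

# Illustrative sketch only: look for content appended after the closing </html>
# tag of a fetched page, the kind of injected trailer the Topolski report described.
# The URL below is a placeholder, not NebuAd or ISP infrastructure.
import re
import urllib.request

def trailing_injection(url):
    # Fetch the raw page body; errors="replace" keeps the sketch simple.
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    match = None
    for match in re.finditer(r"</html\s*>", html, re.IGNORECASE):
        pass  # keep the last occurrence of the closing tag
    if match is None:
        return None  # no closing tag; this heuristic does not apply
    tail = html[match.end():]
    # Anything substantive after </html>, especially a script tag pointing at a
    # third-party host, is the sort of appended content worth a closer look.
    scripts = re.findall(r"<script[^>]*\bsrc=[\"']([^\"']+)[\"']", tail, re.IGNORECASE)
    return {"bytes_after_html": len(tail.strip()), "script_sources": scripts}

if __name__ == "__main__":
    print(trailing_injection("http://www.example.com/"))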
Superimposing or adding advertising to webpages Critics were concerned that NebuAd superimposed its own advertising over the ads of other advertisers, or placed additional advertising on a page. These concerns originated from NebuAd's "Fair Eagle" operation, from patent application data that mentioned such inventions, and from a loose relationship with Claria Corporation, whose products and history suggested such tactics, as well as from the following: in 2007 it was reported that Redmoon, a Texas-based ISP, was using NebuAd technology to inject Redmoon's own advertising into pages visited by its users. The "Fair Eagle" advertisement hardware, provided by NebuAd, inserted additional advertising alongside the content of web pages; the ads featured a window with the "Fair Eagle" title bar. The injected ads stopped appearing toward the end of June 2007. Relationship with Claria Corporation Some senior staff members of NebuAd had previously worked at a now-defunct ad company named Claria Corporation (formerly the Gator Corporation), which was well known for its ad software Gator. Both Claria and NebuAd were located in Redwood City, California, and the June 2006 creation of nebuad.com coincided with the timing of Claria's decision to shut down the Gator service. NebuAd repeatedly denied any corporate connection to Claria, describing its hiring of Claria employees as a result of that company shedding staff in a tight market for experienced advertising sales personnel in the Valley. ISP partners ISPs that tried out, deployed, or prepared to deploy NebuAd included Broadstripe (formerly and formally Millennium Digital Media), Decaturnet Internet Services, Eastern Oregon Net, Inc. (EONI), High Speed Networks -E50 (HSNe50), Metro Provider, OnlyInternet.Net, Progressive Internet Services (Jayco.Net), RTC on Line (Rochester Telephone Company, Indiana), and 20/20 Communications (2020comm.net). The following ISPs are listed in legal documents related to the class action notice (see below) as having deployed NebuAd hardware:
AllCities
Annapolis Wireless Internet
AzulStar, Inc.
Bresnan Communications, LLC
Cable One, Inc.
Casco Communications/Peak Internet
Cavalier Broadband, LLC
CenturyTel, Inc.; CenturyTel Broadband Services, LLC; CenturyTel Service Group, LLC
CMS Internet LLC
Eastern Oregon Network, Inc.
Education Networks of America (ENA)
Embarq Management Co.; United Telephone Co. of Eastern Kansas
Fire2Wire
Galaxy Internet Services
Grande Communications
High Speed Data Inc.
20/20 Communications
iBahn General Holdings
Knology, Inc.
Mesa Networks, Inc.
Millennium Digital Media Systems/Broadstripe
Network Evolution, Inc.
Nexicom Inc.
Ricochet Networks, Inc.
Rochester Telephone Company, Inc.
Softcom Internet Communications
United Online/NetZero
Unplugged Cities
WideOpenWest Finance, LLC (WOW)
All ISPs ended or suspended their relationship with NebuAd. Charter Communications suspended its plans to test NebuAd following scrutiny from lawmakers and privacy groups. An Embarq spokesperson told the Associated Press that it had ended its trial with NebuAd and had not decided whether to move forward with behavioral targeting advertising "either through NebuAd or with any other vendor". CenturyTel, one of the earliest known ISPs to test NebuAd, notified customers in late May 2008 that it was deploying the hardware, only to pull out of the deal alongside Charter a month later. Bresnan Communications used the NebuAd technology; following the announcements by Charter, Embarq, and CenturyTel that they would no longer use NebuAd on their networks, Bresnan told a blogger that its NebuAd trial had ended and that it would comply with whatever regulatory model emerged from the ongoing debate. Web cache evidence indicated that Blackfoot Telecommunications Group, Inc. of Missoula, Montana had tried NebuAd between March and May 2008. Blackfoot's Mary Worden later explained, "Blackfoot tested NebuAd on its internal corporate network, with employees only and not with its customers, in March 2008, but had similar concerns to those raised by consumer groups and elected not to launch the service." Nexicom, serving Central Ontario and the Kawarthas, Canada, notified users via its Privacy Policy page that it was using NebuAd as of April 23, 2008. Following a question to users on a public forum, Nexicom's Paul Stewart replied, "Nexicom was investigating using the NebuAd service. The software was never implemented at any time as there were concerns on several levels regarding privacy issues. References to NebuAd in Nexicom's Privacy Policy has been removed." Wide Open West (WOW) completed its suspension of NebuAd services on July 9. In a response to customer inquiries, WOW indicated, "With Congress in active review of online behavioral advertising, WOW! Internet- Cable- Phone is suspending its deployment of NebuAd services to our subscribers at this time. We believe that all parties are best served by a thoughtful and thorough review of this emerging advertising model, and we welcome the opportunity for that discussion to take place." Knology reported to the United States House Committee on Energy and Commerce that it had discontinued its trial of NebuAd in all markets as of July 14, 2008.
Unbeknownst to its users, Cable One conducted NebuAd tests on 14,000 customers in Alabama for six months beginning in November 2007. As of August 2008, Cable One had decided against using the technology "commercially" on its systems but in September said it was waiting for "clear rules and boundaries". Closure NebuAd was closed down in the UK in August 2008 and in the US in May 2009. Class-action lawsuit A proposed settlement for a class-action lawsuit against NebuAd was underway in October 2011. All subscribers to the ISPs listed above between January 1, 2007 and July 1, 2008 were to be considered mandatory class members and so did not have to opt in and could not choose to opt out. Under the terms of the proposed settlement, NebuAd would create a settlement fund of approximately $2,410,000, to be used for administration of the settlement, covering legal fees, an incentive award of $5,000 to the individual who brought the complaint, providing up to $1000 for other named representatives, with most of the money going to support non-profits providing consumer education and privacy research. References External links Valentine v NebuAd, Inc. Settlement Information Deep packet inspection Online advertising
28254652
https://en.wikipedia.org/wiki/Volantis
Volantis
Volantis was a mobile internet software company based in Guildford, England, later acquired by Antenna Software, Inc. Volantis provided mobile applications and software solutions that gave operators and enterprises access to a content delivery platform and a device database, which contained over 7,000 handsets as of February 2010. Volantis was part of the W3C's Mobile Web Initiative, a member of the Open Mobile Alliance and an advocate of open standards. History Volantis was founded by Jennifer Bursack, Martin Gaffney, Brett Nulf, and Mark Watson, who had all worked together at Tivoli Systems (a subsidiary of IBM) in the UK. In March 2000 the four founders resigned from IBM and approached investors under the name Unwired Ltd. The company name was changed a few months later to Volantis Systems Ltd., taking its name from the constellation Volans; the company's logo is a representation of the constellation. The initial idea, which the founders took to investors, involved "developing a yellow pages-like directory service for mobile phones". This was soon dropped in favor of creating "technologies to enable companies to build web sites for all sorts of devices—smart phones, kiosks, digital televisions, gaming consoles, and...mobile phones". The company received an initial investment of $3.2m from Kennet Partners in mid-2000. In 2001, the company received a round of funding led by Softbank Europe. The company grew quickly until the collapse of the Internet bubble and the events of 11 September 2001 forced Volantis to downsize and consolidate. Several significant telecommunications account signings (including Telefonica and Hutchison Whampoa) helped the company to recover and led it to change from offering "a multi-device service to a single device service", as "an enabler for mobile phone carriers." Another funding round, led by Accel Partners, was completed in 2002, and in July 2005 Volantis raised a further $7.5 million, bringing the total venture capital raised to $31.6 million. This achievement was recognised by Fast Track, which included the company in its Tech Track 100 (run in association with The Sunday Times) in 2006. Awards Successful fund-raising, a series of high-profile client wins, and expansion into America, which "helped to boost sales 120% a year from £1.6m in 2003 to £7.9m in 2005", led to the company's inclusion in Fast Track's Tech Track 100 in 2006. In 2010, Volantis was selected as a Distinguished Honoree in the Telecoms category at the Stevie Awards for international business achievement, in recognition of its mobile self-care solution. Open Source contributions Volantis released a Java-based open source framework for building mobile web applications in early 2008. Acquisition Volantis was acquired by Antenna Software in February 2011; the financial details of the acquisition were not released. Antenna Software was, in turn, bought by Pegasystems in 2013. References External links Antenna Software, Inc. Companies based in Guildford Software companies of the United Kingdom
1081139
https://en.wikipedia.org/wiki/Nanosaur
Nanosaur
Nanosaur is a science fiction third-person shooter video game developed by Pangea Software and published by Ideas From the Deep for Mac OS 9 and Microsoft Windows. The player takes on the role of a Nanosaur, a genetically engineered intelligent dinosaur from the future, sent back in time just prior to the Cretaceous–Paleogene extinction event. Story In the distant year of 4122, a dinosaur species, the Nanosaurs, rules the Earth. Their civilization originated from a group of human scientists who experimented with genetic engineering. Their experimentation led them to resurrect the extinct dinosaur species; however, their victory was short-lived, as a disastrous plague brought their civilization to an end. The few dinosaurs that had been resurrected were lent an unusual amount of intelligence by their human creators, leaving them to expand their growing civilization. However, as the Nanosaurs were the only species on Earth, inbreeding was the only possible means of reproduction. This method adversely affected the intelligence of the offspring and slowly began to pose a threat to their once-intelligent society. The Nanosaur government offers a quest that involves time traveling to the year 65 million BC, where the eggs of five ancient dinosaur species must be retrieved and placed in a time portal leading to the present year. Their high-ranking agent, a brown Deinonychus Nanosaur, is chosen to participate in this mission. On the day of her mission, she is teleported to the past via a time machine in a Nanosaur laboratory. The Nanosaur arrives in a lush jungle, with twenty minutes given to collect the eggs before the meteor that caused the original extinction of the dinosaurs hits the Earth. After battling several Tyrannosaurus rex, the Nanosaur enters a volcanic crater, where she must cross several stone formations in a river of lava in order to retrieve the eggs. After making her way across the river, the Nanosaur detects the final eggs in a canyon oasis, where various dinosaurs, namely Dilophosaurus and Stegosaurus, attempt to hinder her progress in order to protect their eggs. After evading defeat, the Nanosaur beams the final egg into the time portal and is carried along with it back to the present. Following the completion of the Nanosaur's mission, the eggs are placed in laboratories nationwide, where the scientists intend to breed them for their own purposes. Several months after this event, the eggs finally start to hatch. Gameplay The object of the game is to collect the eggs of five dinosaur and flying reptile species and deposit them in time portals to the future within twenty minutes; at the end of the countdown, the asteroid that caused the Cretaceous–Paleogene extinction event hits Earth. The Nanosaur is equipped with a "fusion blaster" (a basic multi-purpose energy weapon), a jet pack allowing flight, a temporal compass for locating time portals, and a GPS locator for navigation. The native animals attack the Nanosaur when their eggs are threatened; species encountered include Tyrannosaurus rex, Stegosaurus, Triceratops, Dilophosaurus (which spits venom, as in Jurassic Park), and the flying reptile Pteranodon. As well as hostile creatures, the Nanosaur must also avoid water and lava, environmental hazards which slow the player down or kill the Nanosaur outright. Legacy The game was being ported to Linux by Three Axis Interactive, but the port was never completed. Around 2003 the source code of the game was made available by the developer under a restrictive license.
In 2021 an open source version of the game utilizing OpenGL was ported to modern systems (Windows, Mac, and Linux) with the developer's blessing by Iliyas Jorio. Jorio had previously ported their other title, Bugdom, to modern systems in 2020. Nanosaur Extreme is another version of Nanosaur, released at a later time with heftier system requirements. It has many more enemies and weapons than Nanosaur, and it is described on the Nanosaur downloads page as "what Nanosaur was meant to be – a total kill-fest". Nanosaur 2: Hatchling, a continuation of the original Nanosaur storyline, was released in March 2004. Nanosaur 2 is the first stereoscopic game released for the Mac. References External links Official website 1998 video games Dinosaurs in video games Classic Mac OS games Pangea Software Role-playing video games Third-person shooters Video games developed in the United States Video games set in the future Windows games Cancelled Linux games Commercial video games with freely available source code
1012806
https://en.wikipedia.org/wiki/Click%20fraud
Click fraud
Click fraud is a type of fraud that occurs on the Internet in pay-per-click (PPC) online advertising. In this type of advertising, the owners of websites that post the ads are paid based on how many site visitors click on the ads. Fraud occurs when a person, automated script, computer program or an auto clicker imitates a legitimate user of a web browser, clicking on such an ad without having an actual interest in the target of the ad's link. Click fraud is the subject of some controversy and increasing litigation due to the advertising networks being a key beneficiary of the fraud. Media entrepreneur and journalist John Battelle describes click fraud as the intentionally malicious, "decidedly black hat" practice of publishers gaming paid search advertising by employing robots or low-wage workers to click on ads on their sites repeatedly, thereby generating money to be paid by the advertiser to the publisher and to any agent the advertiser may be using. Pay-per-click advertising PPC advertising is an arrangement in which webmasters (operators of websites), acting as publishers, display clickable links from advertisers in exchange for a charge per click. As this industry evolved, a number of advertising networks developed, which acted as middlemen between these two groups (publishers and advertisers). Each time a (believed to be) valid Web user clicks on an ad, the advertiser pays the advertising network, which in turn pays the publisher a share of this money. This revenue-sharing system is seen as an incentive for click fraud. The largest of the advertising networks, Google's AdWords/AdSense and Yahoo! Search Marketing, act in a dual role, since they are also publishers themselves (on their search engines). According to critics, this complex relationship may create a conflict of interest. This is because these companies lose money to undetected click fraud when paying out to the publisher but make more money when collecting fees from the advertiser. Because of the spread between what they collect and pay out, unfettered click fraud would create short-term profits for these companies. Non-contracting parties A secondary source of click fraud is non-contracting parties, who are not part of any pay-per-click agreement. This type of fraud is even harder to police, because perpetrators generally cannot be sued for breach of contract or charged criminally with fraud. Examples of non-contracting parties are: Competitors of advertisers: These parties may wish to harm a competitor who advertises in the same market by clicking on their ads. The perpetrators do not profit directly but force the advertiser to pay for irrelevant clicks, thus weakening or eliminating a source of competition. Competitors of publishers: These persons may wish to frame a publisher. It is made to look as if the publisher is clicking on its own ads. The advertising network may then terminate the relationship. Many publishers rely exclusively on revenue from advertising and could be put out of business by such an attack. Other malicious intent: As with vandalism, there are many motives for wishing to cause harm to either an advertiser or a publisher, even by people who have nothing to gain financially. Motives include political and personal vendettas. These cases are often the hardest to deal with, since it is difficult to track down the culprit, and if found, there is little legal action that can be taken against them. 
Friends of the publisher: Sometimes, upon learning that a publisher profits from ads being clicked, a supporter of the publisher (such as a fan, family member, political party supporter, charity patron or personal friend) will click on the ads to help. This can be considered patronage. However, it can backfire when the publisher (not the friend) is accused of click fraud. Advertising networks may try to stop fraud by all parties but often do not know which clicks are legitimate. Unlike fraud committed by the publisher, it is difficult to know who should pay when past click fraud is found. Publishers resent having to pay refunds for something that is not their fault; however, advertisers are adamant that they should not have to pay for phony clicks. Organization Click fraud can be as simple as one person starting a small Web site, becoming a publisher of ads, and clicking on those ads to generate revenue. Often the number of clicks and their value is so small that the fraud goes undetected. Publishers may claim that small amounts of such clicking are accidental, which is often the case. Much larger-scale fraud also occurs in cybercrime communities. According to Jean-Loup Richet, Professor at the Sorbonne Business School, click fraud is frequently one link in the larger ad fraud chain and can be leveraged as part of a larger identity fraud and/or attribution fraud. Those engaged in large-scale fraud will often run scripts which simulate a human clicking on ads in Web pages. However, huge numbers of clicks appearing to come from just one computer, a small number of computers, or a single geographic area look highly suspicious to the advertising network and advertisers. Clicks coming from a computer known to be that of a publisher also look suspicious to those watching for click fraud. A person attempting large-scale fraud from one computer stands a good chance of being caught. One type of fraud that circumvents detection based on IP patterns uses existing user traffic, turning it into clicks or impressions. Such an attack can be camouflaged from users by using 0-size iframes to display advertisements that are programmatically retrieved using JavaScript. It can also be camouflaged from advertisers and portals by ensuring that so-called "reverse spiders" are presented with a legitimate page, while human visitors are presented with a page that commits click fraud. The use of 0-size iframes and other techniques involving human visitors may also be combined with the use of incentivized traffic, where members of "Paid to Read" (PTR) sites are paid small amounts of money (often a fraction of a cent) to visit a website and/or click on keywords and search results, sometimes hundreds or thousands of times every day. Some owners of PTR sites are members of PPC engines and may send many email ads to users who do search, while sending few ads to those who do not. They do this mainly because the charge per click on search results is often the only source of revenue to the site. This is known as forced searching, a practice that is frowned upon in the Get Paid To industry. Organized crime can handle this by having many computers with their own Internet connections in different geographic locations. Often, scripts fail to mimic true human behavior, so organized crime networks use Trojan code to turn the average person's machine into a zombie computer and use sporadic redirects or DNS cache poisoning to turn the oblivious user's actions into actions generating revenue for the scammer. It can be difficult for advertisers, advertising networks, and authorities to pursue cases against networks of people spread around multiple countries.
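One of the camouflage techniques mentioned above, the 0-size iframe, can be illustrated with a short, assumption-laden sketch: the heuristic below simply flags iframes declared with zero dimensions (or hidden via inline style) in a supplied HTML document. The markup in the example is invented, and real detection systems are far more sophisticated.

# Illustrative heuristic only: flag zero-size or hidden iframes in a page's HTML,
# the camouflage technique described above. The example markup is invented.
from html.parser import HTMLParser

class ZeroSizeIframeFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "iframe":
            return
        a = {name.lower(): (value or "") for name, value in attrs}
        width = a.get("width", "").strip().lower().removesuffix("px")
        height = a.get("height", "").strip().lower().removesuffix("px")
        style = a.get("style", "").replace(" ", "").lower()
        # Explicit zero dimensions or a hidden style are the red flags here.
        if width == "0" or height == "0" or "display:none" in style or "width:0" in style:
            self.suspicious.append(a.get("src", "(no src)"))

finder = ZeroSizeIframeFinder()
finder.feed('<p>page body</p><iframe src="http://ads.example.net/show" width="0" height="0"></iframe>')
print(finder.suspicious)  # ['http://ads.example.net/show']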
Impression fraud is when falsely generated ad impressions affect an advertiser's account. In the case of click-through-rate-based auction models, the advertiser may be penalized for having an unacceptably low click-through rate for a given keyword. This involves making numerous searches for a keyword without clicking on the ad. Such ads are disabled automatically, enabling a competitor's lower-bid ad for the same keyword to continue, while several high bidders (on the first page of the search results) have been eliminated. Hit inflation attack A hit inflation attack is a kind of fraudulent method used by some advertisement publishers to earn unjustified revenue on the traffic they drive to the advertisers' Web sites. It is more sophisticated and harder to detect than a simple inflation attack. This process involves the collaboration of two counterparts, a dishonest publisher, P, and a dishonest Web site, S. Web pages on S contain a script that redirects the customer to P's Web site, and this process is hidden from the customer. So, when user U retrieves a page on S, it simulates a click or request to a page on P's site. P's site has two kinds of webpages: a manipulated version and an original version. The manipulated version simulates a click or request to the advertisement, causing P to be credited for the click-through. P selectively determines whether to load the manipulated (and thus fraudulent) script into U's browser by checking whether the request came from S. This can be done through the Referrer field, which specifies the site from which the link to P was obtained. All requests from S are loaded with the manipulated script, and thus the automatic and hidden request is sent. This attack silently converts every innocent visit to S into a click on the advertisement on P's page. Even worse, P can be in collaboration with several dishonest Web sites, each of which can be in collaboration with several dishonest publishers. If the advertisement commissioner visits the Web site of P, the non-fraudulent page will be displayed, and thus P cannot be accused of being fraudulent. Without a reason to suspect that such collaboration exists, the advertisement commissioner would have to inspect all Internet sites to detect such attacks, which is infeasible. Another proposed method for detecting this type of fraud is the use of association rules. Manipulation of organic search results One major factor that affects the ranking of websites in organic search results is the CTR (click-through rate): the ratio of clicks to impressions, or in other words how many times a search result is clicked on, compared with the number of times the listing appears in search results. In contrast to PPC fraud, where a competitor leverages the services of a botnet or low-cost labour to generate false clicks, in this case the objective is to adopt a "beggar thy neighbour" policy against competitors by making their CTR as low as possible, thereby diminishing their position in search results. Bad actors will therefore generate false clicks on organic search results that they wish to promote, while avoiding search results they wish to demote. This technique can effectively create a cartel of business services controlled by the same bad actor, or be used to promote a certain political opinion.
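To make the click-through-rate arithmetic concrete, the toy sketch below (with invented numbers) shows how adding fake clicks to one listing and click-less impressions against a competitor's listing moves their respective CTRs; it is an illustration of the mechanism only, not a model of any search engine's ranking.

# Toy illustration (invented numbers): how click-through rate, the ratio described
# above, moves when false clicks are added to one listing and false impressions
# (searches with no click) are aimed at a competitor's listing.
def ctr(clicks, impressions):
    return clicks / impressions if impressions else 0.0

# honest baseline for two competing listings
mine = {"clicks": 50, "impressions": 1000}
rival = {"clicks": 60, "impressions": 1000}
print(f"before: mine {ctr(**mine):.3f}  rival {ctr(**rival):.3f}")

# attacker adds 40 fake clicks to its own listing and 1,000 click-less
# impressions against the rival's listing
mine["clicks"] += 40
mine["impressions"] += 40
rival["impressions"] += 1000
print(f"after:  mine {ctr(**mine):.3f}  rival {ctr(**rival):.3f}")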
The scale of this issue is unknown, but it is certainly evident to many website developers who pay close attention to the statistics in webmaster tools. Google Search has been accused of using so-called zero-click searches to prevent a large part of the traffic from leaving its page for third-party publishers. In 2015 Google introduced the Knowledge Graph and Direct Answers features, which consist of a large box on the main results page containing key information obtained from third-party sources such as Wikipedia, dictionaries, and weather websites. As a result, 71% of searches end on the Google search page. In the case of one specific query, out of 890,000 searches on Google only 30,000 resulted in the user clicking through to the results website. Legal cases Lawsuits Disputes over the issue have resulted in a number of lawsuits. In one case, Google (acting as both an advertiser and advertising network) won a lawsuit against a Texas company called Auction Experts (acting as a publisher), which Google accused of paying people to click on ads that appeared on Auction Experts' site, costing advertisers $50,000. Despite networks' efforts to stop it, publishers are suspicious of the motives of the advertising networks, because the advertising network receives money for each click, even if it is fraudulent. In July 2005, Yahoo settled a class-action lawsuit against it by plaintiffs alleging it did not do enough to prevent click fraud. Yahoo paid $4.5 million in legal bills for the plaintiffs and agreed to settle advertiser claims dating back to 2004. In July 2006, Google settled a similar suit for $90 million. On March 8, 2006, Google agreed to a $90 million settlement fund in the class-action lawsuit filed by Lane's Gifts & Collectibles. The class-action lawsuit was filed in Miller County, Arkansas, by Dallas attorneys Steve Malouf, Joel Fineberg, and Dean Gresham. The expert witness for the plaintiffs in the case was Jessie Stricchiola, an internet search expert who first identified instances of PPC fraud in 2001. Michael Anthony Bradley In 2004, California resident Michael Anthony Bradley created Google Clique, a software program that he claimed could let spammers defraud Google out of millions of dollars in fraudulent clicks, which ultimately led to his arrest and indictment. Bradley was able to demonstrate that the fraud was possible and that it was impossible for Google to detect. The Department of Justice alleged that he contacted Google saying that unless they paid him $100,000 for the rights to the technology, he would sell it to spammers, costing Google millions. As a result, Bradley was arrested for extortion and mail fraud in 2006. Charges were dropped without explanation on November 22, 2006; both the US Attorney's office and Google declined to comment. Business Week suggested that Google was unwilling to cooperate with the prosecution, as it would be forced to disclose its click fraud detection techniques publicly. Fabio Gasperini On June 18, 2016, Fabio Gasperini, an Italian citizen, was extradited to the United States on click fraud charges. An indictment charged Gasperini with two counts of computer intrusion, one count of wire fraud, one count of wire fraud conspiracy, and one count of money laundering. According to the U.S. government, Gasperini set up and operated a botnet of over 140,000 computers around the world. This was the first click fraud trial in the United States. If convicted of all counts, Gasperini risked up to 70 years in jail. Simone Bertollini, an Italian-American lawyer, represented Gasperini at trial.
On August 9, 2017, a jury acquitted Gasperini of all the felony charges in the indictment. Gasperini was convicted of one misdemeanor count of obtaining information without a financial gain. He was sentenced to the statutory maximum of one year of imprisonment, a $100,000 fine, and one year of supervised release following incarceration. Shortly afterwards, he was credited with time served and sent back to Italy. An appeal is currently pending. Solutions Proving click fraud can be very difficult, since it is hard to know who is behind a computer and what their intentions are. When it comes to mobile ad fraud detection, data analysis can give some reliable indications; abnormal metrics can hint at the presence of different types of fraud. To detect click fraud in an ad campaign, advertisers can focus on the following attribution points: IP address: as bots run similar scripts from the same server, click fraud on mobile ads will show a high density of clicks coming from the same IP address or from a range of similar IP addresses. Advertisers can also run checks on IP addresses to verify their history with other fraud. Click timestamp: the click timestamp records the time at which the click on the ad is made. Bot-based click fraud attempts clicks repeatedly, which increases the click frequency for that period; a high volume of clicks with nearly identical timestamps points to the possibility of click fraud, and a short duration combined with a high frequency means a high probability of fraud. Action timestamp: the action timestamp is the time at which the user takes action on (or engages with) the app or website. In a bot-based click attack the action timestamps also look similar: because the bot clicks on the advertisement and then performs the action on the app or website without any delay, the advertiser will notice a very small or near-zero gap between the click and action timestamps.
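A minimal sketch of how such signals might be screened is shown below; the log format and thresholds are invented for illustration and are not industry values or any network's actual detection logic.

# Minimal sketch (invented thresholds, toy log format): screen a click log for
# the signals listed above - many clicks from one IP address and bursts of
# clicks with near-identical timestamps.
from collections import defaultdict

MAX_CLICKS_PER_IP = 20          # illustrative threshold, not an industry value
MIN_SECONDS_BETWEEN_CLICKS = 2  # repeated sub-2-second gaps look automated

def flag_suspicious(clicks):
    """clicks: iterable of (ip, unix_timestamp) tuples, e.g. parsed from a web log."""
    by_ip = defaultdict(list)
    for ip, ts in clicks:
        by_ip[ip].append(ts)
    flagged = {}
    for ip, times in by_ip.items():
        times.sort()
        fast_gaps = sum(1 for a, b in zip(times, times[1:])
                        if b - a < MIN_SECONDS_BETWEEN_CLICKS)
        if len(times) > MAX_CLICKS_PER_IP or fast_gaps > len(times) // 2:
            flagged[ip] = {"clicks": len(times), "fast_gaps": fast_gaps}
    return flagged

# Example: one IP clicking every second for a minute, one IP with two normal clicks.
sample = [("203.0.113.7", t) for t in range(0, 60)] + [("198.51.100.2", 10), ("198.51.100.2", 400)]
print(flag_suspicious(sample))  # only 203.0.113.7 is flagged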
Often the best an advertising network can do is to identify which clicks are most likely fraudulent and not charge the account of the advertiser. Even more sophisticated means of detection are used, but none are foolproof. The Tuzhilin Report, produced as part of a click fraud lawsuit settlement, has a detailed and comprehensive discussion of these issues. In particular, it defines "the Fundamental Problem of invalid (fraudulent) clicks": "There is no conceptual definition of invalid clicks that can be operationalized [except for certain obviously clear cases]." "An operational definition cannot be fully disclosed to the general public because of the concerns that unethical users will take advantage of it, which may lead to a massive click fraud. However, if it is not disclosed, advertisers cannot verify or even dispute why they have been charged for certain clicks." The PPC industry is lobbying for tighter laws on the issue. Many hope to have laws that will cover those not bound by contracts. A number of companies are developing viable solutions for click fraud identification and are developing intermediary relationships with advertising networks. Such solutions fall into two categories: Forensic analysis of advertisers' web server log files. This analysis of the advertiser's web server data requires an in-depth look at the source and behavior of the traffic. As industry-standard log files are used for the analysis, the data is verifiable by advertising networks. The problem with this approach is that it relies on the honesty of the middlemen in identifying fraud. Third-party corroboration. Third parties offer web-based solutions that might involve placement of single-pixel images or JavaScript on the advertiser's web pages and suitable tagging of the ads. The visitor may be presented with a cookie. Visitor information is then collected in a third-party data store and made available for download. The better offerings make it easy to highlight suspicious clicks, and they show the reasons for such a conclusion. Since an advertiser's log files can be tampered with, their accompaniment by corroborating data from a third party forms a more convincing body of evidence to present to the advertising network. However, the problem with third-party solutions is that they see only part of the traffic of the entire network. Hence, they can be less likely to identify patterns that span several advertisers. In addition, due to the limited amount of traffic they receive compared with the middlemen, they can be overly aggressive or too lenient when judging traffic to be fraud. In a 2007 interview in Forbes, Google click fraud czar Shuman Ghosemajumder said that one of the key challenges in click fraud detection by third parties was access to data beyond clicks, notably ad impression data. Click fraud is less likely in cost-per-action models. Research The fact that the middlemen (search engines) have the upper hand in the operational definition of invalid clicks is the reason for the conflict of interest between advertisers and the middlemen, as described above. This is manifested in the Tuzhilin Report, as described above. The Tuzhilin Report did not publicly define invalid clicks and did not describe the operational definitions in detail. Rather, it gave a high-level picture of the fraud-detection system and argued that the operational definition of the search engine under investigation is "reasonable". One aim of the report was to preserve the privacy of the fraud-detection system in order to maintain its effectiveness. This prompted some researchers to conduct public research on how the middlemen can fight click fraud. Since such research is presumably not tainted by market forces, there is hope that it can be adopted to assess how rigorous a middleman is in detecting click fraud in future legal cases. The fear that such research could expose the internal fraud-detection systems of middlemen still applies. An example of such research is that done by Metwally, Agrawal and El Abbadi at UCSB. Other work by Majumdar, Kulkarni, and Ravishankar at UC Riverside proposes protocols for the identification of fraudulent behavior by brokers and other intermediaries in content-delivery networks. See also Botnet Click farm Clickbot.A Trojan.Win32.DNSChanger ZeroAccess botnet Zombie (computer science) References External links "Truth in advertising", The Economist, November 23, 2006. "Google CFO: Fraud a Big Threat", CNN Money. Retrieved December 2, 2004. "How Click Fraud Could Swallow the Internet", Wired Magazine, issue 14.01 (January 2006). Retrieved December 29, 2005. Simone Soubusta, "On Click Fraud". Retrieved March 2014. "Click fraud uncovered - Fiverr advertisers willing to click on ads for $5". Retrieved April 2015. "How Google Ads is fighting click fraud". Retrieved February 2019. Consumer fraud Online advertising Internet fraud
1834679
https://en.wikipedia.org/wiki/StankDawg
StankDawg
David Blake (born 1971), also known as StankDawg, is the founder of the hacking group Digital DawgPound (DDP) and a long-time member of the hacking community. He is known as a regular presenter at multiple hacking conferences, but is best known as the creator of the "Binary Revolution" initiative, including being the founding host and producer of Binary Revolution Radio, a long-running weekly Internet radio show which ran for 200 episodes from 2003 to 2007. Biography Blake was born in Newport News, Virginia, on September 13, 1971. He received an AAS (Associate in Applied Sciences) degree from the University of Kentucky in 1992, and has a BS in Computer Science from Florida Atlantic University as well as a CEH certificate. He presently lives and works as a computer programmer/analyst in Orlando, Florida. Blake is a member of the International High IQ Society. Hacking StankDawg is a staff writer for the well-known hacker periodical 2600: The Hacker Quarterly, as well as the now-defunct Blacklisted! 411 magazine. He has also been a contributing writer to several independent zines such as Outbreak, Frequency, and Radical Future. He has been a frequent co-host of Default Radio, was a regular on Radio Freek America, and has appeared on GAMERadio, Infonomicon, The MindWar, Phreak Phactor, and HPR (Hacker Public Radio). He has presented at technology conferences such as DEF CON, H.O.P.E., and Interz0ne. He has been outspoken on many topics, some of which have drawn negative feedback from various sources. His most controversial piece, "Hacking Google AdWords", presented at DEF CON 13, drew criticism from such people as Jason Calacanis, among others. His presentation at the fifth H.O.P.E. conference drew some surprise from the AS/400 community. StankDawg appeared as a subject on the television show The Most Extreme on Animal Planet, where he demonstrated the vulnerabilities of wireless internet connections. Blake chose the handle "StankDawg" in college, where he started a local hacking group which became known as the "Digital DawgPound". Digital DawgPound The Digital DawgPound (more commonly referred to as the "DDP") is a group of hackers best known for a series of articles in hacker magazines such as 2600: The Hacker Quarterly and Make, the long-running webcast Binary Revolution Radio, and a very active set of forums with posts from high-profile hackers such as Strom Carlson, decoder, Phiber Optik and many more. The stated mission of the DDP is to propagate a more positive image of hackers than the negative mass-media stereotype. The group welcomes new members who want to learn about hacking, and attempts to teach them the more positive aspects and steer them away from the negative aspects by reinforcing the hacker ethic. Their goal is to show that hackers can, and regularly do, make positive contributions not only to technology, but to society as a whole. History The DDP was founded and named by StankDawg. His stated reasons were that he had made many friends in the hacking scene and thought that it would be useful to have everyone begin working together in a more organized fashion. He was motivated by the fact that there had been other well-known hacker groups in the 1980s, such as the LoD and the MoD, that had accomplished great things in the hacking world. In 1988, while a junior in high school, StankDawg came up with the name on his way to the "Sweet 16" computer programming competition. He jokingly referred to his teammates as "The Digital Dawgpound".
StankDawg lurked in the shadows of the hacking world for many years throughout college under many different pseudonyms. In 1997 he stepped out into the public and became more active on IRC and many smaller hacking forums. He saw people whom he thought were insanely brilliant and who seemed to share his mindset and positive attitude towards hacking, so he decided to approach a couple of them to see whether anyone would be interested in forming a group and working together. There was always a strong emphasis not only on technical competence and variety, but also on strength of character and integrity. The team was a mix of hackers, programmers, phone phreakers, security professionals, and artists, with experience in multiple programming languages and operating systems. DDP members are not only good programmers and hackers, but more importantly, good people. By 1999 the DDP had its first official members, and from this partnership creativity flowed. The DDP communicated and worked together on StankDawg's personal site, which was open to anyone who wanted to join in on the fun. StankDawg was never comfortable with the fact that his name was on the domain and that many people coming to the site were drawn by his articles or presentations without really appreciating the contributions of the other community members. In 2002, after watching the web site grow quickly, it was decided that a new community needed to be created for the like-minded hackers who were gathering. This was the start of the biggest DDP project, Binary Revolution, an attempt at starting a true "community" of hackers. As the site grew, so did the DDP roster. Members Over the years, DDP membership has included several staff writers for 2600: The Hacker Quarterly and Blacklisted! 411 magazine, including StankDawg and bland_inquisitor. Members frequently publish articles, provide content, and appear in many media outlets across the Internet. DDP members are also regular speakers at hacking conferences such as DEF CON, H.O.P.E., Interzone, Notacon, and many smaller, more regional cons. Some DDP members hold memberships in Mensa and the International High IQ Society. StankDawg is very proud of the diversity of the team and has spoken about this many times on Binary Revolution Radio. Members range from both coasts of the United States to Europe, and the group has also had members from Jamaica, Brazil, and many other countries. Recognition The DDP maintains a blog, which they refer to as a "blawg" (a play on the intentionally misspelled word "Dawg"). Posts by DDP members have been featured on other technology-related sites such as those of Make Magazine, HackADay, Hacked Gadgets, and others. Binary Revolution In 2003, StankDawg moved the forums from his personal site over to a new site as part of a project called the Binary Revolution, which he considered a "movement" towards a more positive hacking community. This "Binary Revolution" is the best known of the DDP projects and is commonly referred to simply as "BinRev". The project was created in an attempt to bring the hacking community back together, working towards the common, positive goal of reclaiming the name of hackers. The Binary Revolution emphasizes positive aspects of hacking and projects that help society.
It does this through a variety of outlets, including monthly meetings, the weekly radio show Binary Revolution Radio (BRR), a video-based series of shows called HackTV, and very active message board forums. BinRev is more than just a radio show or forums, although those are certainly the most well-known of the projects; it is actually composed of many parts. Binary Revolution Radio Binary Revolution Radio, often shortened to "BRR", was one part of the BinRev community. Started and hosted by Blake in 2003, it featured different co-hosts each week and covered different aspects of hacker culture and computer security. It was broadcast via internet stream, usually prerecorded in Florida on a weekend and then edited and released the following Tuesday on the DDP Hack Radio stream at 9:30pm EST. Topics included phreaking, identity theft, cryptography, operating systems, programming languages, free and open source software, Wi-Fi and Bluetooth, social engineering, cyberculture, and information about various hacker conventions such as PhreakNIC, ShmooCon, H.O.P.E., and Def Con. In July 2005 Blake announced that he was going to take a break, and so for the third season the show was produced by Black Ratchet and Strom Carlson (who had been frequent co-hosts during Blake's run). During the time they hosted the program, the format rotated between the standard prerecorded format and a live format which included phone calls from listeners. Blake returned to the show in May 2006. He maintained the prerecorded format and brought more community input into the show by bringing on more members of the Binary Revolution community. For the first episode of the fourth season, BRR had its first ever broadcast in front of a live audience, during the HOPE 6 convention in New York City in July 2006. The final episode, #200, took place on October 30, 2007, with a marathon episode which clocked in at 7 hours and 12 minutes. Notable co-hosts
Billy Hoffman (as "Acidus")
Tom Cross (as "Decius")
Elonka Dunin
Jason Scott
Lance James
Mark Spencer
Virgil Griffith
MC Frontalot
Lucky225
Strom Carlson
Black Ratchet
BinRev Meetings As the forums grew, there were many posts from people looking for others in their area where hacker meetings did not exist or were not approachable for some reason, and those places that did have meetings were sparse on information. Binary Revolution meetings were started as an answer to these problems and as a place for forum members to get together. BinRev meetings offer free web hosting for every meeting to help organize the meetings, keep communications alive, and help projects grow. Some meetings are in large cities like Chicago and Orlando, while others are in small towns. Anyone can start their own BinRev meeting by asking in the BinRev forums. BinRev.net "BRnet" is the official IRC network of Binary Revolution. It is active at all hours of the day and contains a general #binrev channel as well as many other channels for more specific and productive discussion. HackTV In the middle of 2003, StankDawg released an Internet video show entitled "HackTV", which was the first internet television show about hacking and which grew into a series of several different shows. Episodes were released irregularly, since most of them were filmed by StankDawg in South Florida, where he lived at the time. They wanted the show to appear professional in terms of quality, but this made cooperating over the internet difficult.
Sharing large video files was difficult, and encoded video caused editing problems and quality concerns. The original show was released as full-length 30-minute episodes. This also became a problem, since it grew more and more difficult to gather enough material for full-length episodes. There was also some content that was related to hacking only on a fringe level, which StankDawg did not feel was appropriate to include in the show. This led to other ideas. HackTV:Underground In light of the difficulties of putting together the full original HackTV show, and in an attempt to make the show more accessible for community contributions, StankDawg launched a new series that was less focused on format and video quality and more focused on content and ease of participation. This series was titled "HackTV:Underground", or "HTV:U" for short. It allowed anyone to contribute content in any format and at any length or video quality; this let people film things with basic camera-phone-quality video if that was the only way to get the content. One episode of HackTV:U was used by the G4techTV show "Torrent". HackTV:Pwned This series of HackTV was a prank-style show, similar to the then-popular "Punk'd" show on MTV; even the logo is an obvious parody of the Punk'd logo. The series contains pranks that mostly took place at conferences, but is also open to social engineering and other light-hearted content. DocDroppers The DocDroppers project is a community project to create a centralized place to store hacking articles and information while still maintaining some formatting and readability. Old ASCII text files were scattered across the Internet, but they come and go quickly and are difficult to find, and they are usually formatted with only the very basics and are sometimes difficult to read. DocDroppers allows users to submit articles to a centralized place where they can be searched, easily maintained, and easily read and referenced. More recently, the project has grown to include encyclopedia-style entries on many hacking topics after many were deleted from sites such as Wikipedia. This has led DocDroppers to include a section on hacker history and culture among its content. Selected writing "Stupid Webstats Tricks", Autumn 2005, 2600 Magazine "Hacking Google AdWords", Summer 2005, 2600 Magazine "Disposable Email Vulnerabilities", Spring 2005, 2600 Magazine "How to Hack The Lottery", Fall 2004, 2600 Magazine "Robots and Spiders", Winter 2003, 2600 Magazine "A History of 31337sp34k", Fall 2002, 2600 Magazine "Transaction Based Systems", Spring 2002, 2600 Magazine "Batch vs. Interactive", Summer 1999, 2600 Magazine Selected presentations "The Art of Electronic Deduction", July 2006, H.O.P.E. Number Six (presented again at Interz0ne 5, Saturday March 11, 2006) "Hacking Google AdWords", July 2005, DEF CON 13 "AS/400: Lifting the veil of obscurity", July 2004, The fifth H.O.P.E. Projects In addition to the ones mentioned above, StankDawg was directly involved in creating or maintaining the following projects: DDP HackRadio - A streaming radio station with a schedule of hacking and tech-related shows. Binary Revolution Magazine - The printed hacking magazine put out by the DDP. Hacker Events - A calendar for all hacking conferences, events, meetings, and other related gatherings. Hacker Media - A portal for all hacking, phreaking, and other related media shows. Phreak Phactor - The world's first hacking reality radio show. WH4F - "Will Hack For Food" provides secure, disposable, temporary email accounts.
References External links Other links that were mentioned or referred to in this entry: StankDawg's personal site. The Digital DawgPound - official site. BinRev IRC - Binary Revolution official IRC channel web site & BinRev IRC - Official Binary Revolution IRC network. HPR - "Hacker Public Radio" is a daily hacking and technology radio show created by the DDP, infonomicon and others. It has many different hosts. BRR Archive - Archive of the hacking radio show presented by members of the DDP (07/2003-10/2007). Binary Revolution Meetings - Monthly hacker meetings that encourage participation and offers free hosting for all meetings. HackTV - The Internet's first full-length regular Hacking video show. Old Skool Phreak - Home of many phreaking related text files. RFA Archive - Weekly Radio show about Technology, Privacy and Freedom (02/2002 - 02/2004). 1971 births Living people Computer programmers People associated with computer security Internet radio in the United States Phreaking People from Newport News, Virginia