Software implementation
Software implementation may refer to:
Software implementation, a specific piece of software together with its features and quality aspects
Programming language implementation
Software construction
Computer programming
See also
Product software implementation method
Software features
Software quality
Reference implementation, software from which all other implementations are derived
Alexandra Elbakyan
Alexandra Asanovna Elbakyan (born 6 November 1988) is a Kazakhstani computer programmer and creator of the website Sci-Hub, which provides free access to research papers without regard for copyright. According to a study published in 2018, Sci-Hub provides access to nearly all scholarly literature.
Elbakyan has been described as "Science's Pirate Queen". In 2016, Nature included her in its list of the ten people who mattered most in science that year.
Since 2011 she has been living in Russia.
Background
Elbakyan was born in Almaty, Kazakh Soviet Socialist Republic (then called Alma-Ata in the Soviet Union) on 6 November 1988. She identifies as multiracial, being of Armenian, Slavic, and Asian descent. Alexandra was raised by a single mother, who was an accomplished computer programmer.
Alexandra started programming at the age of 12, making web pages in HTML and later writing in PHP, Delphi, and assembly language. She attempted to create a Tamagotchi powered by artificial intelligence. She performed her first computer hack at the age of 14: using SQL injection, she obtained access to all logins and passwords of her home internet provider. She later discovered additional vulnerabilities of the cross-site scripting type. She reported these issues to the provider, hoping to get a job with them, but instead the provider cut off her internet access. Alexandra wrote in her blog that she hacked her first publisher website when she was 15. The publisher was MIT Press, whose online books on neuroscience were locked behind a paywall she could not afford, so she wrote a PHP program that exploited a vulnerability on the website to download the paywalled books without payment.
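The classic pattern behind a SQL injection attack like the one described above can be sketched with a small, purely illustrative example (the table and data here are hypothetical, not the provider's actual schema):

```python
import sqlite3

# Hypothetical accounts table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (login TEXT, password TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "s3cret"), ("bob", "hunter2")])

# Vulnerable pattern: the query is built by string concatenation,
# so attacker-supplied text becomes part of the SQL itself.
attacker_input = "' OR '1'='1"
query = ("SELECT login, password FROM users "
         "WHERE login = '" + attacker_input + "'")
rows = conn.execute(query).fetchall()
# The WHERE clause becomes: login = '' OR '1'='1' -- always true,
# so the query returns every login/password row.

# Safe pattern: a parameterized query treats the input as data, not SQL.
safe = conn.execute("SELECT login FROM users WHERE login = ?",
                    (attacker_input,)).fetchall()
# No login literally equals the injection string, so nothing is returned.
```

The fix, then as now, is to never interpolate user input into SQL text and to use parameterized queries instead.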
In 2009, she obtained a Bachelor of Science degree in computer science from the Kazakh National Technical University, specializing in information security. She studied the possibility of using EEG brainwaves for authentication instead of using a password. While working on her thesis, Elbakyan discovered the paywall problem with accessing journal articles, as her university did not have access to many publications related to her work.
Alexandra became interested in developing brain–computer interfaces and in 2010 she joined the University of Freiburg to work on such a project, which eventually led to her summer internship in neuroscience at Georgia Institute of Technology in the United States. The same year, Elbakyan spoke at the Humanity+ Summit at Harvard on the topic "Brain-Computer Interfacing, Consciousness, and the Global Brain". Elbakyan's idea was to develop a new kind of brain-machine interface that would merge human and machine qualia. She also participated in the Towards a Science of Consciousness conference that was held in Tucson, Arizona with the poster "Consciousness in Mixed Systems: Merging Artificial and Biological Minds via Brain-Machine Interface".
From 2012 to 2014, she was a master's student at Higher School of Economics in Moscow, but then dropped out. According to a 2016 interview, her neuroscience research was on hold, but she was enrolled in a history of science master's program at a private university in an undisclosed location. Her thesis would focus on scientific communication. In 2019, she graduated from Saint Petersburg State University with a master's degree in linguistics. She currently lives in Moscow and is studying philosophy at the Russian Academy of Sciences.
Sci-Hub
According to Elbakyan, Sci-Hub is a simplified version of a Global Brain because it "connects [the] brains of many researchers."
Elbakyan developed Sci-Hub in 2011 when she was in Kazakhstan. It was characterized by Science correspondent John Bohannon as "an awe-inspiring act of altruism or a massive criminal enterprise, depending on whom you ask." Elbakyan has stated that the script was initially intended to make access to academic papers fast and convenient, without a global goal of making all science free.
When academic publisher Elsevier sued Sci-Hub in the US in 2015, Elbakyan wrote a letter to the judge explaining her motives for starting the project: she could not afford to pay for each of the hundreds of papers she needed for her research project, so she had to pirate them, and she founded the website to help others in the same situation. In the letter, Elbakyan provided various arguments in support of her cause, such as Elsevier being neither the author of the papers nor paying the authors, mentioning that "The general opinion in research community is that research papers should be distributed for free (open access), not sold".
Elsevier was granted an injunction against her and $15 million in damages. Following the lawsuit, Elbakyan remained in hiding due to the risk of extradition. Other publishers have also filed lawsuits against Sci-Hub and Elbakyan in other countries.
Recognition and awards
In December 2016, Nature named Alexandra Elbakyan as one of the 10 people who most mattered in science that year.
Researchers who use Sci-Hub often thank Elbakyan in the Acknowledgments section of their papers.
For her actions in creating Sci-Hub, Elbakyan has been called a hero, for example by Nobel laureate Randy Schekman. Ars Technica has compared her to Aaron Swartz, and The New York Times has compared her to Edward Snowden, who himself has called Sci-Hub one of the most important websites for academics in the world. She has also been called a modern-day "Robin Hood", a "Robin Hood of science", and "Science's Pirate Queen".
Elbakyan has several biological species named in her honor:
Idiogramma elbakyanae, a species of parasitoid wasp described by Russian and Mexican entomologists in 2017. Elbakyan was offended by this, saying that "the real parasites are scientific publishers, and Sci-Hub, on the contrary, fights for equal access to scientific information." The Russian entomologist responded that he supports Sci-Hub and that the naming was not an insult; the article states that "The species is named in honour of Alexandra Elbakyan (Kazakhstan/Russia), creator of the web-site Sci-Hub, in recognition of her contribution to making scientific knowledge available for all researchers."
Brachyplatystoma elbakyani, an extinct species of catfish discovered by Argentine paleontologists in 2020.
Spigelia elbakyanii, a species of flowering plant from Mexico discovered in 2020.
Amphisbaena elbakyanae, a species of worm lizard discovered in 2021.
Sibogasyrinx elbakyanae, a species of deep-sea snail discovered by researchers from Russia and France in 2021.
Elbakyan was twice nominated for the John Maddox Prize and made the final shortlist.
Some researchers say that Elbakyan deserves a Nobel Prize for her work. Wildlife scientist T. R. Shankar Raman stated in an interview: "I am not a fan of the Nobel Prizes, given they have their own biases and have failed to adequately acknowledge scientific contributions of women, for example. But given that its stated purpose is to award those who have conferred the greatest benefit to humankind, Alexandra Elbakyan certainly qualifies."
Views
Elbakyan is a strong supporter of the open-access movement: according to her, Sci-Hub is a true implementation of the open-access principle in science. She believes that science should be open to all, not locked behind paywalls.
She has described herself as a devout pirate and thinks that copyright law prevents the free exchange of information online and the free distribution of knowledge on the Internet. In 2018, she asked supporters of Sci-Hub to join their local Pirate Party in order to fight for copyright laws to be changed.
Elbakyan has stated that she is inspired by communist ideals and considers the common ownership of ideas to be essential for scientific progress. In a 2016 interview with Vox she said: "I like the idea of communism, and the idea that knowledge should be common and not intellectual property is very relevant. That is especially true for information. Research articles are used for communication in science. But the word 'communication' implies common ownership by itself." She referenced the work of Robert Merton, who considered communism to be part of the scientific ethos. According to her, Sci-Hub fights for communism in science and against the current state of affairs, in which knowledge that belongs to everyone has become the private property of corporations.
Elbakyan does not consider herself a strict Marxist. She wanted to join either the CPRF or the Pirate Party of Russia, but was unable to, as membership in Russian political parties is restricted to Russian citizens.
Elbakyan justified Sci-Hub by saying that lack of universal access to academic knowledge violates Article 27 of the United Nations’ Universal Declaration of Human Rights, which states that "everyone has the right freely to … share in scientific advancement and its benefits."
She has stated that she supports a strong state which can stand up to the Western world, and that she does not want "the scientists of Russia and of my native Kazakhstan to share the fates of the scientists of Iraq, Libya, and Syria, that were 'helped' by the United States to become more democratic." In 2012, she expressed support for Putin's policies.
Controversies
Alexandra Elbakyan has been in conflict with the liberal, pro-Western wing of the Russian scientific community. According to an interview with her, she was attacked on the Internet by "science popularizers" with liberal views, a conflict that led to Sci-Hub being shut down in Russia for a few days in 2017. In particular, Elbakyan was strongly critical of the former Dynasty Foundation (shut down in 2015) and its associated figures. She believes that the foundation was politicized, tied to Russia's liberal opposition, and fit the legal definition of a "foreign agent". In her opinion, Dynasty's founder financed researchers whose political views agreed with his own. Elbakyan states that after she began to investigate the foundation's activities and published her findings online, she became the target of a cyberharassment campaign by Dynasty's supporters.
In December 2019, The Washington Post reported that Elbakyan was under investigation by the US Justice Department on suspicion of working with Russia's military intelligence agency, the GRU, to steal U.S. military secrets from defense contractors. Elbakyan has denied this, saying that Sci-Hub "is not in any way directly affiliated with Russian or some other country's intelligence," but noting that "of course, there could be some indirect help. The same as with donations, anyone can send them; they are completely anonymous, so I do not know who exactly is donating to Sci-Hub. There could be some help that I'm simply unaware of. I can only add that I write all of Sci-Hub code and design myself and I'm doing the server's configuration."
On May 8, 2021, Elbakyan tweeted that the FBI had served a subpoena to Apple seeking her iCloud data. The tweet included a screengrab of the notice from Apple. The tweet was retweeted by Edward Snowden, who commented: "Members of Congress should be making calls about this. Journalists should be asking the White House and DOJ questions. The founder of Sci-hub — unquestionably one of the most important sites for academics in the world — should not be subject to persecution for their work."
Works
Elbakyan, Alexandra (2009) "Электроэнцефалограмма человека как биометрическая характеристика в системах контроля доступа" [Human EEG as a biometric feature in access control systems] Bachelor Thesis, Satbayev University.
Elbakyan, Alexandra (2019) "Образ Духа Божьего в текстах еврейской Библии" [Image of the Holy Spirit in Hebrew Bible texts] Master Thesis, Saint Petersburg State University.
See also
Peter Sunde
Library Genesis
Open Access
ICanHazPDF
Copyright abolition
Aaron Swartz
References
Further reading
Belluz, Julia (18 February 2016). "Meet Alexandra Elbakyan, the researcher who's breaking the law to make science free for all". Vox.
Murphy, Kate (12 March 2016). "Should All Research Papers Be Free?". The New York Times.
Nelson, Felicity (6 February 2019). "How one perplexing pirate is plundering the publishers". The Medical Republic.
Bozkurt, Aras (2021). "A Critical Conversation with Alexandra Elbakyan: Is she the Pirate Queen, Robin Hood, a Scholarly Activist, or a Butterfly Flapping its Wings?". Asian Journal of Distance Education.
Altınışık, Ezgi N. (27 February 2021) "A Robin Hood in the World of Science: Alexandra Elbakyan". Bilim ve Aydınlanma Akademisi.
External links
Engineuring – Elbakyan's blog
1988 births
21st-century women scientists
Copyright activists
Internet activists
Kazakhstani computer programmers
Kazakhstani transhumanists
Living people
Open content activists
Kazakhstani women computer scientists
Kazakhstani neuroscientists
Kazakhstani women neuroscientists
Kazakhstani people of Armenian descent
Computer security specialists
Kazakhstani activists
Kazakhstani women activists
Saint Petersburg State University alumni
Kazakhstani communists
Atta ur Rehman Khan
Atta ur Rehman Khan (Urdu: عطا الرحمن خان) is a computer scientist and academic who has contributed to multiple domains of the field. According to a Stanford University report, he is among the world's top 2% of scientists. He is the founder of the National Cyber Crime Forensics Lab Pakistan, which operates in partnership with NR3C. He has published numerous research articles and books, and is a Senior Member of the IEEE (SMIEEE) and the ACM (SMACM).
Education
Atta ur Rehman Khan was a Bright Sparks scholar and received his PhD in Computer Science from the University of Malaya. He received his master's and bachelor's degrees (with honors) in Computer Science from COMSATS University under a COMSATS scholarship. He also attended a summer camp on Advanced Wireless Networks at Technische Universität Ilmenau under a DAAD scholarship.
Experience
Atta ur Rehman Khan is currently an Associate Professor at the College of Engineering and Information Technology, Ajman University, United Arab Emirates. He has extensive teaching and research experience and has served at seven universities in five countries, including Sohar University, Air University, King Saud University, COMSATS University, University of Malaya, and Qurtuba University.
He was the founding director of National Cyber Crime Forensics Lab Pakistan and the Head of Air University Cybersecurity Center. He also developed Pakistan's first BS cybersecurity program approved by HEC.
Editorial Boards
Atta ur Rehman Khan is an editor of the following journals:
Associate Technical Editor, IEEE Communications Magazine.
Editor, Elsevier Journal of Network and Computer Applications.
Associate Editor, IEEE Access.
Associate Editor, Springer Journal of Cluster Computing.
Editor, SpringerPlus.
Editor, Ad hoc & Sensor Wireless Networks.
Editor, Oxford Computer Journal.
Editor, KSII Transactions on Internet and Information Systems.
Associate Editor, Springer Human-centric Computing and Information Sciences.
Awards
Atta ur Rehman Khan has received the following awards:
Best Paper Award, SPECTS, 2018.
Research Productivity Award, COMSATS University, Islamabad, Pakistan, 2016.
GoT Award, University of Malaya, Malaysia, 2014.
Research Productivity Award, COMSATS University, Islamabad, Pakistan, 2012.
Best Research Poster Award, Vision ICT, Pakistan, 2010.
Best Project Award, Vision ICT, Pakistan, 2009.
Best Project Award, Frontiers of Information Technology (FIT) Conference, Pakistan, 2008.
Books
Following is the list of books authored/co-authored/edited by Atta ur Rehman Khan:
"Internet of Things: Challenges, Advances, and Applications", Chapman and Hall/CRC, 2018.
Research Publications
Following is the list of some research papers authored/co-authored by Atta ur Rehman Khan:
"Vehicular Ad Hoc Network (VANET) Localization Techniques: A Survey," in Archives of Computational Methods in Engineering.
"DGRU based human activity recognition using channel state information," in Measurement, Vol. 167, 2021.
"Real-Time Fuel Truck Detection Algorithm Based on Deep Convolutional Neural Network," in IEEE Access, vol. 8, pp. 118808–118817, 2020.
"An energy, performance efficient resource consolidation scheme for heterogeneous cloud datacenters," in Journal of Network and Computer Applications.
"A lightweight and compromise-resilient authentication scheme for IoTs," in Transactions on Emerging Telecommunications Technologies.
"Optimal Content Caching in Content-Centric Networks," in Wireless Communications and Mobile Computing.
"A Systems Overview of Commercial Data Centers: Initial Energy and Cost Analysis," in International Journal of Information Technology and Web Engineering, vol. 14, no. 1, pp. 41–65, 2019.
"CPU–RAM-based energy-efficient resource allocation in clouds," in The Journal of Supercomputing, vol. 75, no. 11, pp. 7606–7624, 2019.
"A fog-based security framework for intelligent traffic light control system," in Multimedia Tools and Applications, vol. 78, no. 17, pp. 24595–24615, 2019.
"Anonymous and formally verified dual signature based online e-voting protocol," in Cluster Computing, vol. 22, no. 1, pp. 1703–1716, 2019.
"Identification of Yeast's Interactome using Neural Networks," in IEEE Access, 2019.
"Secure-CamFlow: A Device Oriented Security Model to Assist Information Flow Control Systems in Cloud Environments for IoTs," in Concurrency and Computation: Practice and Experience, vol. 31, no. 8, pp. 1–22, 2019.
"SocialRec: A Context-aware Recommendation Framework with Explicit Sentiment Analysis," in IEEE Access, 2019.
"A load balanced task scheduling heuristic for Large-scale Computing Systems," in International Journal of Computer Systems Science and Engineering, vol. 34, no. 2, pp. 1–12, 2019.
"Masquerading Attacks Detection in Mobile Ad Hoc Networks," in IEEE Access, vol. 6, no. 1, pp. 55013–55025, 2018.
"Performance Assessment of Dynamic Analysis Based Energy Estimation Tools," in International Symposium on Performance Evaluation of Computer and Telecommunication Systems, pp. 1–12, July 2018, France.
"An Optimal Ride Sharing Recommendation Framework for Carpooling Services," in IEEE Access, vol 6, no. 1, pp. 62296–62313, 2018.
"An Investigation of Video Communication over Bandwidth Limited Public Safety Network," in Malaysian Journal of Computer Science, vol. 31, no. 2, pp. 85–107, 2018.
"There's No Such Thing as Free Lunch but Envy among Young Facebookers," in KSII Transactions on Internet and Information Systems, vol. 12, no. 10, 2018.
"Salat Activity Recognition using Smartphone Triaxial Accelerometer," in 5th International Multi-Topic ICT Conference (IMTIC), April 2018.
"Computation Offloading Cost Estimation in Mobile Cloud Application Models," in Wireless Personal Communications, Springer, vol. 97, no. 3, pp. 4897–4920, 2017.
"Review and Performance Analysis of Position Based Routing in VANET," in Wireless Personal Communications, vol. 94, no. 3, pp. 559–578, 2017.
"Execution Models for Mobile Data Analytics," in IEEE IT Professional, vol. 19, no. 3, pp. 24–30, 2017.
"Formal Verification and Performance Evaluation of Task Scheduling Heuristics for Makespan Optimization and Workflow Distribution in Large-scale Computing Systems," in International Journal of Computer Systems Science and Engineering, vol. 32, no. 3, pp. 227–241, 2017.
"RedEdge: A Novel Architecture for Big Data Processing in Mobile Edge Computing Environments," in Journal of Sensor and Actuator Networks, vol. 6, no. 3, 2017.
"A Comparative Study and Workload Distribution Model for Re-encryption Schemes in a Mobile Cloud Computing Environment," in International Journal of Communication Systems, vol. 30, no. 16, 2017.
"Diet-Right: A Smart Food Recommendation System" in KSII Transactions on Internet and Information Systems, vol. 11, no. 6, pp. 2910–2925, 2017.
"A Survey of Mobile Virtualization: Taxonomy and State of the Art," in ACM Computing Surveys, vol. 49, no. 1, 2016.
"Big Data Analytics in Mobile and Cloud Computing Environments," in Innovative Research and Applications in Next-Generation High Performance Computing, IGI Global, pp. 349–367, 2016.
"Code Offloading Using Support Vector Machine", in Proceedings of the Sixth IEEE International Conference on Innovative Computing Technology (INTECH), August 2016, pp. 98–103.
"Context-Aware Mobile Cloud Computing & Its Challenges," in IEEE Cloud Computing, vol. 2, no 3, pp. 42–49, May/June 2015.
"MobiByte: An Application Development Model for Mobile Cloud Computing," in Journal of Grid Computing, vol. 13, no. 4, pp. 605–628, 2015.
"Impact of Mobility on Energy and Performance of Clustering-Based Power-Controlled Routing Protocols," in Proceedings of the IEEE Frontiers of Information Technology, Islamabad, Pakistan, December 2015.
"Merging of DHT-based Logical Networks in MANETs," in Transactions on Emerging Telecommunications Technologies, vol. 26, no. 12, pp. 1347–1367, 2015.
"Resource Management in Cloud Computing: Taxonomy, Prospects and Challenges," in Computers & Electrical Engineering, vol. 47, pp. 186–203, 2015.
"A Cloud-Manager-based Re-encryption Scheme for Mobile Users in Cloud Environment: A Hybrid Approach," in Journal of Grid Computing, vol. 13, no. 4, 2015.
"3D-RP: A DHT-based Routing Protocol for MANETs," in The Computer Journal, vol. 58, no. 2, pp. 258–279, 2015.
"A Survey of Mobile Cloud Computing Application Models" in IEEE Communications Surveys & Tutorials, vol. 16, no. 1, pp. 393–413, 2014.
"Pirax: Framework for Application Piracy Control in Mobile Cloud Environment," in Journal of Supercomputing, vol. 68, no. 2, pp. 753–776, 2014.
"Road Oriented Traffic Information System for Vehicular Ad hoc Networks," in Wireless Personal Communications, vol. 77, no. 4, pp. 2497–2515, 2014.
"BSS: Block Based Sharing Scheme for Secure Data Storage Services in Mobile-Cloud Environment," in Journal of Supercomputing, vol. 70, no. 2, pp. 946–976, 2014.
"Routing Protocols for Mobile Sensor Networks: A Comparative Study," in International Journal of Computer Systems Science and Engineering, vol. 29, no. 1, pp. 91–100, 2014.
"Incremental Proxy Re-encryption Scheme for Mobile Cloud Computing Environment," in Journal of Supercomputing, vol. 68, no. 2, pp. 624–651, 2014.
"A Study of Incremental Cryptography for Security Schemes in Mobile Cloud Computing Environments," in Proceedings of the IEEE Symposium on Wireless Technology and Applications, Kuching, Malaysia, September 2013, pp. 62–67.
"Enhanced Dynamic Credential Generation Scheme for Protection of User Identity in Mobile Cloud Computing," in Journal of Supercomputing, vol. 66, no. 3, pp. 1687–1706, 2013.
"Clustering-based Power-Controlled Routing for Mobile Wireless Sensor Networks," in International Journal of Communication Systems, vol. 25, no. 4, pp. 529–542, 2012.
"Impact of Mobility Models on Clustering based Routing Protocols in Mobile WSNs," in Proceedings of the IEEE Frontiers of Information Technology, Islamabad, Pakistan, December 2012, pp. 366–370.
"A Performance Comparison of Open Source Network Simulators for Wireless Networks," in Proceedings of the IEEE International Conference on Control System, Computing and Engineering, Penang, Malaysia, November 2012, pp. 34–38.
"Routing Proposals for Multipath Interdomain Routing," in Proceedings of the IEEE International Multi Topic Conference, Lahore, Pakistan, December 2012, pp. 331–337.
"Source Routing Proposals for Multipath Inter-domain Routing," in Proceedings of the International Conference on Future Trends in Computing and Communication Technologies, Malacca, Malaysia, December 2012, pp. 124–132.
References
Pakistani computer scientists
Living people
Senior Members of the ACM
Senior Members of the IEEE
Year of birth missing (living people)
Chrome OS
Chrome OS (sometimes styled as chromeOS) is a Gentoo Linux-based operating system designed by Google. It is derived from the free software Chromium OS and uses the Google Chrome web browser as its principal user interface. Unlike Chromium OS, Chrome OS is proprietary software.
Google announced the project, based on Ubuntu, in July 2009, conceiving it as an operating system in which both applications and user data reside in the cloud: hence Chrome OS primarily runs web applications. Source code and a public demo came that November. The first Chrome OS laptop, known as a Chromebook, arrived in May 2011. Initial Chromebook shipments from Samsung and Acer occurred in July 2011.
Chrome OS has an integrated media player and file manager. It supports Progressive Web Apps and Chrome Apps, which resemble native applications, as well as remote access to the desktop. As more Chrome OS machines have entered the market, the operating system is now seldom evaluated apart from the hardware that runs it.
Android applications started to become available for the operating system in 2014, and in 2016, access to the entire Google Play catalog of Android apps was introduced on supported Chrome OS devices. Support for a Linux terminal and applications, known as Project Crostini, was released to the stable channel in 2018 with Chrome OS 69, made possible via a lightweight Linux kernel that runs containers inside a virtual machine.
History
Google announced Chrome OS on July 7, 2009, describing it as an operating system in which both applications and user data reside in the cloud. To ascertain marketing requirements, the company relied on informal metrics, including monitoring the usage patterns of some 200 Chrome OS machines used by Google employees. Developers also noted their own usage patterns. Matthew Papakipos, the former engineering director for the Chrome OS project, put three machines in his house and found himself logging in for brief sessions: to make a single search query or send a short email.
The initial builds of Chrome OS were based on Ubuntu, and its developer, Canonical, was an engineering partner with Google on the project. In 2010, Chrome OS moved to Gentoo Linux as its base to simplify its build process and support a variety of platforms. Sometime in 2013, Google switched Chrome OS to its own flavour of Linux.
Chrome OS was initially intended for secondary devices like netbooks, not as a user's primary PC. While Chrome OS supports hard disk drives, Google has requested that its hardware partners use solid-state drives "for performance and reliability reasons" as well as the lower capacity requirements inherent in an operating system that accesses applications and most user data on remote servers. In November 2009 Matthew Papakipos, engineering director for the Chrome OS, claimed that the Chrome OS consumes one-sixtieth as much drive space as Windows 7. The recovery images Google provides for Chrome OS range between 1 and 3 GB.
On November 19, 2009, Google released Chrome OS's source code as the Chromium OS project. At a November 19, 2009, news conference, Sundar Pichai, at the time Google's vice president overseeing Chrome, demonstrated an early version of the operating system. He previewed a desktop which looked very similar to the Chrome browser, and in addition to the regular browser tabs, also had application tabs, which take less space and can be pinned for easier access. At the conference, the operating system booted up in seven seconds, a time Google said it would work to reduce. Additionally, Chris Kenyon, vice president of OEM services at Canonical Ltd, announced that Canonical was under contract to contribute engineering resources to the project with the intent to build on existing open-source components and tools where feasible.
Early Chromebooks
In 2010, Google released the unbranded Cr-48 Chromebook in a pilot program. The launch date for retail hardware featuring Chrome OS was delayed from late 2010 until the next year. On May 11, 2011, Google announced two Chromebooks from Acer and Samsung at Google I/O. The Samsung model was released on June 15, 2011, but the Acer was delayed until mid-July. In August 2011, Netflix announced official support for Chrome OS through its streaming service, allowing Chromebooks to watch streaming movies and TV shows via Netflix. At the time, other devices had to use Microsoft Silverlight to play videos from Netflix. Later in that same month, Citrix released a client application for Chrome OS, allowing Chromebooks to access Windows applications and desktops remotely. Dublin City University became the first educational institution in Europe to provide Chromebooks for its students when it announced an agreement with Google in September 2011.
Expansion
By 2012, demand for Chromebooks had begun to grow, and Google announced a new range of devices, designed and manufactured by Samsung. In so doing, they also released the first Chromebox, the Samsung Series 3, which was Chrome OS's entrance into the world of desktop computers. Although they were faster than the previous range of devices, they were still underpowered compared to other desktops and laptops of the time, fitting in more closely with the netbook market. Only months later, in October, Samsung and Google released a new Chromebook at a significantly lower price point ($250, compared to the previous Series 5 Chromebooks' $450). It was the first Chromebook to use an ARM processor, one from Samsung's Exynos line. To reduce the price, Google and Samsung also reduced the memory and screen resolution of the device. An advantage of using the ARM processor, however, was that the Chromebook did not require a fan. Acer followed quickly after with the C7 Chromebook, priced even lower ($199), but containing an Intel Celeron processor. One notable way Acer reduced the cost of the C7 was to use a laptop hard disk rather than a solid-state drive.
In April 2012, Google made the first update to Chrome OS's user interface since the operating system had launched, introducing a hardware-accelerated window manager called "Aura" along with a conventional taskbar. The additions marked a departure from the operating system's original concept of a single browser with tabs and gave Chrome OS the look and feel of a more conventional desktop operating system. "In a way, this almost feels as if Google is admitting defeat here", wrote Frederic Lardinois on TechCrunch. He argued that Google had traded its original version of simplicity for greater functionality. "That's not necessarily a bad thing, though, and may just help Chrome OS gain more mainstream acceptance as new users will surely find it to be a more familiar experience." Lenovo and HP followed Samsung and Acer in manufacturing Chromebooks in early 2013 with their own models. Lenovo specifically targeted their Chromebook at students, headlining their press release with "Lenovo Introduces Rugged ThinkPad Chromebook for Schools".
When Google released Google Drive, it also included Drive integration in Chrome OS version 20, released in July 2012. Chrome OS had supported Flash since 2010, and by the end of 2012 Flash was fully sandboxed in all versions of Chrome, including Chrome OS, preventing issues with Flash from affecting other parts of the operating system.
Chromebook Pixel
Up to this point, Google had never made their own Chrome OS device. Instead, Chrome OS devices were much more similar to their Nexus line of Android phones, with each Chrome OS device being designed, manufactured, and marketed by third-party manufacturers, but with Google controlling the software. However, in February 2013 this changed when Google released the Chromebook Pixel. The Chromebook Pixel was a departure from previous devices. Not only was it entirely Google-branded, but it contained an Intel i5 processor, a high-resolution (2,560 × 1,700) touchscreen display, and came at a price more competitive with business laptops.
Controversial popularity
By the end of 2013, analysts were undecided on the future of Chrome OS. Although there had been articles predicting the demise of Chrome OS since 2009, Chrome OS device sales continued to increase substantially year-over-year. In mid-2014, Time magazine published an article titled "Depending on Who's Counting, Chromebooks are Either an Enormous Hit or Totally Irrelevant", which detailed the differences in opinion. This controversy was further spurred by the fact that Intel seemed to decide Chrome OS was a beneficial market for it, holding their own Chrome OS events where they announced new Intel-based Chromebooks, Chromeboxes, and an all-in-one from LG called the Chromebase.
Seizing the opportunity created by the end of life for Windows XP, Google pushed hard to sell Chromebooks to businesses, offering significant discounts in early 2014.
Chrome OS devices outsold Apple Macs worldwide for the year 2020.
Pwnium competition
In March 2014, Google hosted a hacking contest aimed at computer security experts called "Pwnium". Similar to the Pwn2Own contest, they invited hackers from around the world to find exploits in Chrome OS, with prizes available for attacks. Two exploits were demonstrated there, and a third was demonstrated at that year's Pwn2Own competition. Google patched the issues within a week.
Material Design and app runtime for Chrome
Although the Google Native Client has been available on Chrome OS since 2010, there originally were few Native Client apps available, and most Chrome OS apps were still web apps. However, in June 2014, Google announced at Google I/O that Chrome OS would both synchronise with Android phones to share notifications and begin to run Android apps, installed directly from Google Play. This, along with the broadening selection of Chromebooks, provided an interesting future for Chrome OS.
At the same time, Google was also moving towards the then-new Material Design design language for its products, which it would bring to its web products as well as Android Lollipop. One of the first Material Design items to come to Chrome OS was a new default wallpaper, though Google did release some screenshots of a Material Design experiment for Chrome OS that never made it into the stable version.
Functionality for small and medium businesses and Enterprise
Chrome Enterprise
Chrome Enterprise, launched in 2017, includes Chrome OS, Chrome Browser, Chrome devices and their management capabilities intended for business use. Businesses can access the standard Chrome OS features and unlock advanced features for business with the Chrome Enterprise Upgrade. Standard features include the ability to sync bookmarks and browser extensions across devices, cloud or native printing, multi-layered security, remote desktop, and automatic updates. Advanced features include Active Directory integration, unified endpoint management, advanced security protection, access to device policies and Google Admin console, guest access, kiosk mode, and whitelisting or blacklisting third-party apps managed on Google Play.
The education sector was an early adopter of Chromebooks, Chrome OS, and cloud-based computing. Chromebooks are widely used in classrooms and the advantages of cloud-based systems have been gaining an increased share of the market in other sectors as well, including financial services, healthcare, and retail. "The popularity of cloud computing and cloud-based services highlights the degree to which companies and business processes have become both internet-enabled and dependent." IT managers cite a number of advantages of the cloud that have motivated the move. Among them are advanced security, because data is not physically on a single machine that can be lost or stolen. Deploying and managing cloud-native devices is easier because no hardware and software upgrades or virus definition updates are needed and patching of OS and software updates are simpler. Simplified and centralized management decreases operational costs.
Employees can securely access files and work on any machine, increasing the shareability of Chrome devices. Google's Grab and Go program with Chrome Enterprise allows businesses deploying Chromebooks to provide employees access to a bank of fully charged computers that can be checked out and returned after some time.
From Chromebooks to Chromebox and Chromebase
In an early attempt to expand its enterprise offerings, Google released Chromebox for Meetings in February 2014. Chromebox for Meetings is a kit for conference rooms containing a Chromebox, a camera, a unit containing both a noise-cancelling microphone and speakers, and a remote control. It supports Google Hangouts meetings, Vidyo video conferences, and conference calls from UberConference.
Several partners announced Chromebox for Meetings models with Google, and in 2016 Google announced an all-in-one Chromebase for Meetings for smaller meeting rooms. Google targeted the consumer hardware market with the release of the Chromebook in 2011 and the Chromebook Pixel in 2013, and sought access to the enterprise market with the 2017 release of the Pixelbook. The second-generation Pixelbook was released in 2019. As of 2021, several vendors sell all-in-one Chromebase devices.
Enterprise response to Chrome devices
Google has partnered on Chrome devices with several leading OEMs, including Acer, ASUS, Dell, HP, Lenovo, and Samsung.
In August 2019, Dell announced that two of its popular business-focused laptops would run Chrome OS and come with Chrome Enterprise Upgrade. The Latitude 5300 2-in-1 Chromebook Enterprise and Latitude 5400 Chromebook Enterprise were the result of a two-year partnership between Dell and Google. The machines come with a bundle of Dell's cloud-based support services that would enable enterprise IT managers to deploy them in environments that also rely on Windows. The new laptop line "delivers the search giant's Chrome OS operating system in a form tailored for security-conscious organizations." Other OEMs that have launched devices with Chrome Enterprise Upgrade include Acer and HP.
With a broader range of hardware available, Chrome OS became an option for enterprises wishing to avoid a migration to Windows 10 before Windows 7 support was discontinued by Microsoft.
Hardware
Laptops running Chrome OS are known collectively as "Chromebooks". The first was the CR-48, a reference hardware design that Google gave to testers and reviewers beginning in December 2010. Retail machines followed in May 2011. A year later, in May 2012, a desktop design marketed as a "Chromebox" was released by Samsung. In March 2015 a partnership with AOPEN was announced and the first commercial Chromebox was developed.
In early 2014, LG Electronics introduced the first device belonging to the new all-in-one form factor called "Chromebase". Chromebase devices are essentially Chromebox hardware inside a monitor with a built-in camera, microphone and speakers.
The Chromebit is an HDMI dongle running Chrome OS. When placed in an HDMI slot on a television set or computer monitor, the device turns that display into a personal computer. The first device, announced in March 2015, was an Asus unit that shipped that November and reached end of life in November 2020.
Chromebook tablets were introduced in March 2018 by Acer with the Chromebook Tab 10. Designed to rival the Apple iPad, it had an identical screen size and resolution and otherwise similar specifications; a notable addition was a Wacom-branded stylus that requires no battery or charging.
Chrome OS supports multi-monitor setups on devices that have a video-out port, USB 3.0, or (preferably) USB-C.
On February 16, 2022, Google announced a development version of Chrome OS Flex—a distribution of Chrome OS that can be installed on conventional PC hardware to replace other operating systems such as Windows. It is similar to CloudReady, a distribution of Chromium OS whose developers were acquired by Google in 2020.
Software
The software and updates are limited in their support lifetime. Each device model manufactured to run Chrome OS has a different end-of-life date, with all new devices released in 2020 and beyond guaranteed to receive a minimum of eight years from their date of initial release.
As of version 78, a device's end-of-life date for software updates is listed in "About Chrome OS" under "Additional Details".
Applications
Initially, Chrome OS was almost a pure web thin client operating system that relied primarily on servers to host web applications and related data storage. Google gradually began encouraging developers to create "packaged applications", and later, Chrome Apps. The latter employ HTML5, CSS, and JavaScript to provide a user experience closer to that of a native application.
In September 2014, Google launched App Runtime for Chrome (beta), which allowed certain ported Android applications to run on Chrome OS. Runtime was launched with four Android applications: Duolingo, Evernote, Sight Words, and Vine. In 2016, Google made Google Play available for Chrome OS, making most Android apps available for supported Chrome OS devices.
In 2018, Google announced plans for Chrome OS support for desktop Linux apps. This capability was released to the stable channel (as an option for most machines) with Chrome 69 in October 2018, but was still marked as beta. This feature was officially released with Chrome 91.
X11 is not used by default, but X11 applications can still be run: Project Crostini makes them work by translating X11 to Wayland.
Chrome Apps
From 2013 until January 2020, Google encouraged developers to build not just conventional Web applications for Chrome OS, but Chrome Apps (formerly known as Packaged Apps). In January 2020, Google's Chrome team announced its intent to phase out support for Chrome Apps in favor of "progressive web applications" (PWA) and Chrome extensions instead. In March 2020, Google stopped accepting new public Chrome Apps for the web store. According to Google, general support for Chrome Apps on Chrome OS will remain enabled, without requiring any policy setting, through June 2022.
From a user's perspective, Chrome Apps resemble conventional native applications: they can be launched outside of the Chrome browser, work offline by default, can manage multiple windows, and can interact with other applications. Technologies employed include HTML5, JavaScript, and CSS.
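As an illustration, a packaged Chrome App was declared through a `manifest.json` file. The sketch below is hypothetical (the name and script file are invented), following the documented packaged-app manifest format:

```json
{
  "name": "Example Packaged App",
  "version": "0.1",
  "manifest_version": 2,
  "app": {
    "background": {
      "scripts": ["background.js"]
    }
  }
}
```

Here `background.js` would typically register a `chrome.app.runtime.onLaunched` listener that opens the app's window with `chrome.app.window.create`, which is what allows the app to launch and run outside the browser.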
Integrated media player, file manager
Google integrates a media player into both Chrome OS and the Chrome browser, enabling users to play back MP3s, view JPEGs, and handle other multimedia files while offline. It also supports DRM videos.
Chrome OS also includes an integrated file manager, resembling those found on other operating systems, with the ability to display directories and the files they contain from both Google Drive and local storage, as well as to preview and manage file contents using a variety of Web applications, including Google Docs and Box. Since January 2015, Chrome OS can also integrate additional storage sources into the file manager, relying on installed extensions that use the File System Provider API.
Remote application access and virtual desktop access
In June 2010, Google software engineer Gary Kačmarčík wrote that Chrome OS would access remote applications through a technology unofficially called "Chromoting", which would resemble Microsoft's Remote Desktop Connection. The name has since been changed to "Chrome Remote Desktop", and is like "running an application via Remote Desktop Services or by first connecting to a host machine by using RDP or VNC". Initial roll-outs of Chrome OS laptops (Chromebooks) indicate an interest in enabling users to access virtual desktops.
Android applications
At Google I/O 2014, a proof of concept showing Android applications, including Flipboard, running on Chrome OS was presented. In September 2014, Google introduced a beta version of the App Runtime for Chrome (ARC), which allows selected Android applications to be used on Chrome OS, using a Native Client-based environment that provides the platforms necessary to run Android software. Android applications do not require any modifications to run on Chrome OS, but may be modified to better support a mouse and keyboard environment. At its introduction, Chrome OS support was only available for selected Android applications.
In 2016, Google introduced the ability to run Android apps on supported Chrome OS devices, with access to Google Play in its entirety. The previous Native Client-based solution was dropped in favor of a container containing Android's frameworks and dependencies (initially based on Android Marshmallow), which allows Android apps to have direct access to the Chrome OS platform, and allow the OS to interact with Android contracts such as sharing. Engineering director Zelidrag Hornung explained that ARC had been scrapped due to its limitations, including its incompatibility with the Android Native Development Toolkit (NDK), and that it was unable to pass Google's own compatibility test suite.
Linux apps
All Chromebooks made since 2018, and many earlier models, can run Linux apps. As with Android apps, these apps can be installed and launched alongside other apps. Google maintains a list of devices launched before 2019 that support Linux apps.
Since 2013, it has been possible to run Linux applications in Chrome OS through the use of Crouton, a third-party set of scripts that allows access to a Linux distribution such as Ubuntu. However, in 2018 Google announced that desktop Linux apps were officially coming to Chrome OS. The main benefit claimed by Google of their official Linux application support is that it can run without enabling developer mode, keeping many of the security features of Chrome OS. It was noticed in the Chromium OS source code in early 2018. Early parts of Crostini were made available for the Google Pixelbook via the dev channel in February 2018 as part of Chrome OS version 66, and it was enabled by default via the beta channel for testing on a variety of Chromebooks in August 2018 with version 69.
Architecture
Google's project for supporting Linux applications in Chrome OS is called Crostini, named for the Italian bread-based starter, and as a pun on Crouton. Crostini runs a virtual machine through a virtual machine monitor called crosvm, which uses Linux's built-in KVM virtualization tool. Although crosvm supports multiple virtual machines, the one used for running Linux apps, Termina, contains a basic Chrome OS kernel and userland utilities, in which it runs containers based on Linux containers (specifically LXD).
Architecture
Chrome OS is built on top of the Linux kernel. Originally based on Ubuntu, its base was changed to Gentoo Linux in February 2010. For Project Crostini, as of Chrome OS 80, Debian 10 (Buster) is used. In preliminary design documents for the Chromium OS open-source project, Google described a three-tier architecture: firmware, browser and window manager, and system-level software and userland services.
The firmware contributes to fast boot time by not probing for hardware, such as floppy disk drives, that are no longer common on computers, especially netbooks. The firmware also contributes to security by verifying each step in the boot process and incorporating system recovery.
System-level software includes the Linux kernel that has been patched to improve boot performance. Userland software has been trimmed to essentials, with management by Upstart, which can launch services in parallel, re-spawn crashed jobs, and defer services in the interest of faster booting.
The window manager handles user interaction with multiple client windows (much like other X window managers).
Security
In March 2010, Google software security engineer Will Drewry discussed Chrome OS security. Drewry described Chrome OS as a "hardened" operating system featuring auto-updating and sandbox features that would reduce malware exposure. He said that Chrome OS netbooks would be shipped with Trusted Platform Module (TPM), and include both a "trusted boot path" and a physical switch under the battery compartment that activates a "developer mode". That mode drops some specialized security functions but increases developer flexibility. Drewry also emphasized that the open-source nature of the operating system would contribute greatly to its security by allowing constant developer feedback.
At a December 2010 press conference, Google declared that Chrome OS would be the most secure consumer operating system due in part to a verified boot ability, in which the initial boot code, stored in read-only memory, checks for system compromises. In the following nine years, Chrome OS has been affected by 55 documented security flaws of any severity, compared with over 1,100 affecting Microsoft Windows 10 in the five years to the end of 2019 and over 2,200 affecting Apple OS X in 20 years.
Shell access
Chrome OS includes the Chrome Shell, or "crosh", which documents minimal functionality such as ping at crosh start-up.
In developer mode, a full-featured bash shell (intended for development purposes) can be opened via VT-2, and is also accessible using the crosh command shell. Accessing full privileges in the shell (e.g. sudo) requires a root password. For some time the default was "chronos" in Chrome OS and "facepunch" in Chrome OS Vanilla; later the default was empty, and instructions on updating it were displayed at each login.
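For context, a minimal crosh session looks roughly like the sketch below; `help` and `ping` are part of the basic command set, while `shell` is only available in developer mode (commands assumed from public Chromium OS documentation):

```
crosh> help             # list the available commands
crosh> ping google.com  # basic network diagnostics
crosh> shell            # drop into the full bash shell (developer mode only)
```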
Open source
Chrome OS is partially developed under the open-source Chromium OS project. As with other open-source projects, developers can modify the code from Chromium OS and build their own versions, whereas Chrome OS code is only supported by Google and its partners and only runs on hardware designed for the purpose. Unlike Chromium OS, Chrome OS is automatically updated to the latest version.
Chrome OS on Windows
On Windows 8, exceptions allow the default desktop web browser to offer a variant that can run inside its full-screen "Metro" shell and access features such as the Share charm, without necessarily needing to be written with Windows Runtime. Chrome's "Windows 8 mode" was previously a tablet-optimized version of the standard Chrome interface. In October 2013, the mode was changed on Developer channel to offer a variant of the Chrome OS desktop.
Design
Early in the project, Google provided publicly many details of Chrome OS's design goals and direction, although the company has not followed up with a technical description of the completed operating system.
User interface
Design goals for Chrome OS's user interface included using minimal screen space by combining applications and standard Web pages into a single tab strip, rather than separating the two. Designers considered a reduced window management scheme that would operate only in full-screen mode. Secondary tasks would be handled with "panels": floating windows that dock to the bottom of the screen for tasks like chat and music players. Split screens were also under consideration for viewing two pieces of content side by side. Chrome OS would follow the Chrome browser's practice of leveraging HTML5's offline modes, background processing, and notifications. Designers proposed using search and pinned tabs as a way to quickly locate and access applications.
Version 19 window manager and graphics engine
On April 10, 2012, a new build of Chrome OS offered a choice between the original full-screen window interface and overlapping, re-sizable windows, such as found on Microsoft Windows and Apple's macOS. The feature was implemented through the Ash window manager, which runs atop the Aura hardware-accelerated graphics engine. The April 2012 upgrade also included the ability to display smaller, overlapping browser windows, each with its own translucent tabs, browser tabs that can be "torn" and dragged to new positions or merged with another tab strip, and a mouse-enabled shortcut list across the bottom of the screen. One icon on the task bar shows a list of installed applications and bookmarks. Writing in CNET, Stephen Shankland argued that with overlapping windows, "Google is anchoring itself into the past" as both iOS and Microsoft's Metro interface are largely or entirely full-screen. Even so, "Chrome OS already is different enough that it's best to preserve any familiarity that can be preserved".
Printing
Google Cloud Print is a Google service that helps any application on any device to print on supported printers. While the cloud provides virtually any connected device with information access, the task of "developing and maintaining print subsystems for every combination of hardware and operating system—from desktops to netbooks to mobile devices—simply isn't feasible." The cloud service requires installation of a piece of software called a proxy as part of Chrome OS. The proxy registers the printer with the service, manages the print jobs, provides the printer driver functionality, and gives status alerts for each job.
In 2016, Google included "Native CUPS Support" in Chrome OS as an experimental feature that may eventually become an official feature. With CUPS support turned on, it becomes possible to use most USB printers even if they do not support Google Cloud Print.
Google announced that Google Cloud Print would no longer be supported after December 31, 2020, and that the online service would not be available as of January 1, 2021.
Link handling
Chrome OS was designed to store user documents and files on remote servers. Both Chrome OS and the Chrome browser may introduce difficulties to end-users when handling specific file types offline; for example, when opening an image or document residing on a local storage device, it may be unclear whether and which specific Web application should be automatically opened for viewing, or the handling should be performed by a traditional application acting as a preview utility. Matthew Papakipos, Chrome OS engineering director, noted in 2010 that Windows developers have faced the same fundamental problem: "Quicktime is fighting with Windows Media Player, which is fighting with Chrome."
Release channels and updates
Chrome OS uses the same release system as Google Chrome: there are three distinct channels: Stable, Beta, and Developer preview (called the "Dev" channel). The stable channel is updated with features and fixes that have been thoroughly tested in the Beta channel, and the Beta channel is updated approximately once a month with stable and complete features from the Developer channel. New ideas get tested in the Developer channel, which can be very unstable at times. A fourth canary channel was confirmed to exist by Google Developer Francois Beaufort and hacker Kenny Strawn, by entering the Chrome OS shell in developer mode, typing the command to access the bash shell, and finally entering the command . It is possible to return to the verified boot mode after entering the canary channel, but the channel updater disappears and the only way to return to another channel is using the "powerwash" factory reset.
Reception
At its debut, Chrome OS was viewed as a competitor to Microsoft, directly against Microsoft Windows and indirectly against the company's word processing and spreadsheet applications, the latter through Chrome OS's reliance on cloud computing. But Chrome OS engineering director Matthew Papakipos argued that the two operating systems would not fully overlap in functionality, because Chrome OS is intended for netbooks, which lack the computational power to run a resource-intensive program such as Adobe Photoshop.
Some observers claimed that other operating systems already filled the niche that Chrome OS was aiming for, with the added advantage of supporting native applications in addition to a browser. Tony Bradley of PC World wrote in November 2009:
In 2016, Chromebooks were the most popular computer in the US K–12 education market.
By 2017, the Chrome browser had risen to become the number one browser used worldwide.
In 2020, Chromebooks became the second most popular end-user oriented OS, growing from 6.4% in 2019 to 10.8% in 2020. The majority of the growth came at Windows' expense, which fell from 85.4% in 2019 to 80.5% in 2020.
Relationship to Android
Google's offering of two open-source operating systems, Android and Chrome OS, has drawn some criticism despite the similarity between this situation and that of Apple Inc.'s two operating systems, macOS and iOS. Steve Ballmer, Microsoft CEO at the time, accused Google of not being able to make up its mind. Steven Levy wrote that "the dissonance between the two systems was apparent" at Google I/O 2011. The event featured a daily press conference in which each team leader, Android's Andy Rubin and Chrome's Sundar Pichai, "unconvincingly tried to explain why the systems weren't competitive". Google co-founder Sergey Brin addressed the question by saying that owning two promising operating systems was "a problem that most companies would love to face". Brin suggested that the two operating systems "will likely converge over time". The speculation over convergence increased in March 2013 when Chrome OS chief Pichai replaced Rubin as the senior vice president in charge of Android, thereby putting Pichai in charge of both.
The relationship between Android and Chrome OS became more substantial at Google I/O 2014, where developers demonstrated native Android software running on Chrome OS through a Native Client-based runtime. In September 2014, Google introduced a beta version of the App Runtime for Chrome (ARC), which allows selected Android applications to be used on Chrome OS, using a Native Client-based environment that provides the platforms necessary to run Android software. Android applications do not require any modifications to run on Chrome OS, but may be modified to better support a mouse and keyboard environment. At its introduction, Chrome OS support was only available for selected Android applications. In October 2015, The Wall Street Journal reported that Chrome OS would be folded into Android so that a single OS would result by 2017. The resulting OS would be Android, but it would be expanded to run on laptops. Google responded that while the company has "been working on ways to bring together the best of both operating systems, there's no plan to phase out Chrome OS".
In 2016, Google introduced the ability to run Android apps on supported Chrome OS devices, with access to Google Play in its entirety. The previous Native Client-based solution was dropped in favor of a container containing Android's frameworks and dependencies (initially based on Android Marshmallow), which allows Android apps to have direct access to the Chrome OS platform, and allow the OS to interact with Android contracts such as sharing. Engineering director Zelidrag Hornung explained that ARC had been scrapped due to its limitations, including its incompatibility with the Android Native Development Toolkit (NDK), and that it was unable to pass Google's own compatibility test suite.
See also
Comparison of operating systems
Google Fuchsia
List of operating systems
Unicode input – for information on typing diacritics (accents) and special symbols
Timeline of operating systems
Notes
References
External links
Official website
Official blog
Release blog
Chromium OS project page
Official announcement
Google Chrome OS Live Webcast; November 19, 2009
2011 software
ARM operating systems
Computer-related introductions in 2011
Google operating systems
Google
Mobile operating systems
Tablet operating systems
Operating systems based on the Linux kernel
Linux distributions without systemd
X86 operating systems
Proprietary operating systems
Linux distributions
Gentoo Linux derivatives |
202559 | https://en.wikipedia.org/wiki/Madras%20Institute%20of%20Technology | Madras Institute of Technology | Madras Institute of Technology (MIT) is an engineering institute located in Chromepet, Chennai, India. It is one of the four autonomous constituent colleges of Anna University. It was established in 1949 by Chinnaswami Rajam as the first self-financing engineering institute in the country and later merged with Anna University. The institute gave India new areas of specialization such as aeronautical engineering, automobile engineering, electronics engineering and instrumentation technology. MIT was the first self-financing institute opened in India.
MIT is also the first institute in India to offer postgraduate courses in Avionics and Mechatronics. The institute also has an unusual practice of "T-series": a student mentoring system by senior students.
MIT was started in 1949, offering a three-year undergraduate programme in engineering to science graduates (BSc). During the early years, the institute awarded a Diploma in Engineering to science graduates (DMIT). Subsequently, on the formation of Anna University in 1978, MIT became one of the constituent institutions of the university and began awarding the BTech degree, a three-year engineering degree course for BSc science graduates. Over the years, the institute has expanded its original programme and now offers undergraduate and postgraduate courses in Production Engineering, Rubber and Plastics Technology, Computer Science Engineering and Information Technology. Since 1996, the institute has accepted students who have passed the 12th board examinations into its four-year undergraduate programme.
History
After the independence of India, Chinnaswami Rajam realized the need to establish a strong engineering institution for the proper industrial development of the country. In this effort, Rajam was assisted by a few distinguished citizens, notably Subbaraya Aiyar, M. K. Ranganathan, L. Venkatakrishna Iyer, K. Srinivasan and C. R. Srinivasan, and received generous donations from the public and industry. In July 1949, MIT was established as a nationwide technological institution under the University of Madras.
Administration
Madras Institute of Technology is a constituent college of Anna University and is governed by the Chancellor, Vice-Chancellor and Registrar of the university. The university has a syndicate. The Dean is the head of the institute and oversees day-to-day administration. Each department is led by a Head of the Department (HOD). The hostels are headed by assistant executive wardens placed under an executive warden, with the Dean as the ex-officio warden.
The academic policies are decided by the course committee, whose members include all the professors and student representatives headed by a chairman.
Madras Institute of Technology follows the credit-based system of performance evaluation and student-teacher relation, with proportional weighting of courses based on their importance. The total marks (usually out of 100) form the basis of grades, with a grade value (out of 10) assigned to a range of marks. For each semester, the students are graded by taking a weighted average of all the courses with their respective credit points. Each semester's evaluation is done independently with a cumulative grade point average (CGPA) reflecting the average performance across semesters.
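The credit-weighted scheme described above can be sketched as a short calculation; the grade points and credit values below are purely illustrative, not the institute's actual figures:

```python
def gpa(courses):
    """Credit-weighted grade point average.

    courses: list of (grade_point, credits) pairs, where grade_point
    is on the 10-point scale described above.
    """
    total_credits = sum(credits for _, credits in courses)
    weighted_sum = sum(gp * credits for gp, credits in courses)
    return weighted_sum / total_credits


def cgpa(semesters):
    """CGPA is the same computation pooled over all semesters' courses."""
    return gpa([course for sem in semesters for course in sem])


# One semester with three hypothetical courses:
semester = [(9.0, 4), (8.0, 3), (7.0, 3)]
print(round(gpa(semester), 2))  # → 8.1, not the plain mean of 8.0
```

Because each course is weighted by its credits, a high grade in a four-credit course lifts the average more than the same grade in a three-credit course.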
Admission to the first year of undergraduate courses is through Tamil Nadu Engineering Admissions, based on the Unified Single Window Admission System. Most students from Tamil Nadu qualify through the Tamil Nadu Higher Secondary Board Examinations. Until 2006, the TNPCEE was also a mandatory qualifying examination alongside the Higher Secondary examination.
Department of Aerospace Engineering – The Department of Aerospace Engineering was established in 1949 as the faculty of Aeronautics at the Madras Institute of Technology campus, to further the cause of aerospace research in the newborn nation. It was the first department in India to offer an undergraduate (UG) programme in Aeronautical Engineering. The department runs one full-time UG programme, three full-time postgraduate (PG) programmes, one part-time UG programme and one part-time PG programme. The department also offers MS (by research) and PhD research programmes in various areas related to Aerospace Engineering.
Department of Automobile Engineering – This is one of the oldest departments of the college, offered since its inception. The college offers both undergraduate and postgraduate courses, with an annual student intake of 60. The Head of the Department is Dr S Jayaraj, who is also the president of the AEA (Automobile Engineering Association). The Department of Automobile Engineering at MIT was started in 1949, offering a three-year undergraduate programme in Automobile Engineering for science graduates (BSc). Subsequently, on the formation of Anna University in 1978, MIT became one of the constituent institutions of the university, and hence the department also became a department of Anna University. The postgraduate programme was started in 1978. This is the pioneering institute offering both undergraduate and postgraduate programmes in Automobile Engineering in the whole of India, besides offering MS (by research) and PhD programmes. From 1996 onwards, a four-year BTech undergraduate programme for students of higher secondary education has been offered. The department is involved in the programmes of the Centre for Automotive Research and Training (CART).
Department of Electronics & Communication Engineering – The Department of Electronics Engineering was established in 1949. It has its core strength in the area of Electronics & Communication. This is the largest department of the MIT campus of Anna University, which has about 25 faculty members serving about 400 undergraduate students and 100 postgraduate students. The research areas include Communication Technologies, Wireless Communication, Network Security, Sensor Networks, Optical Communication, Avionics, Signal Processing, Image Processing & Pattern Recognition and VLSI. The department has collaborative partners from academia and industry both locally and worldwide.
Department of Instrumentation Engineering – The Department of Instrumentation Engineering was established in 1949 at the MIT campus of Anna University. After 1978, the degree was originally offered as BTech Instrument Technology; it was later renamed BTech Instrument Engineering, a three-year degree course for BSc science graduates. Presently the department offers Electronics and Instrumentation Engineering at the UG level (a four-year degree course, from 1996 onwards, for students who have completed class 12), Instrumentation Engineering at the PG level, and PhD / MS (by research) programmes for both regular and part-time scholars. The core strength of the department is Process Control & Instrumentation. Currently Professor Dr. Srinivasan is the Head of the Department.
Department of Information Technology – The Department of Information Technology was established in 2000. The branch of "Information Technology" has an intake strength of 120 every year (increased in 2007–08). The department includes an undergraduate and a postgraduate programme running in parallel, with both full-time and part-time options at the undergraduate level. The research areas include grid computing, XML technologies, networking and connectivity protocols, cloud computing, network security and databases. Currently Professor Dr Dhananjay Kumar is the head of the department.
Department of Computer Technology- The Department of Computer Technology was established in 2010. The branch of "Computer Technology" has an intake strength of 120 every year (undergraduate). The department includes an undergraduate and postgraduate programme running in parallel. The research areas include Cloud Computing, Grid Computing, XML technologies, Networking and connectivity protocols, network security and Databases, E-learning, Multimedia, and Video Streaming. Currently Professor Dr R Gunasekaran is the head of the department.
Department of Production Technology – The Department of Production Technology started with a three-year BTech Production Engineering programme (for BSc graduates) in 1977 with an intake of 20 students and an ME programme in 1993. The department offers B.E. in Production Engineering and B.E. Mechanical Engineering (started in 2015) and ME in Manufacturing Engineering, and Mechatronics Engineering. It offers part-time undergraduate programmes in the areas of Mechanical Engineering, Production Engineering and a postgraduate programme in Manufacturing Engineering. For the first time in India, a postgraduate programme in ME Mechatronics was started in 1999 with an intake of 15 students.
Department of Rubber & Plastics Technology – The Department of Rubber & Plastics is one of the pioneering departments of MIT, keeping with MIT's tradition of offering non-conventional courses under the engineering curriculum. The department was started in 1988 and offers a BTech degree programme in Rubber & Plastics Technology and an MTech programme in Rubber Technology. The inter-disciplinary BTech programme boasts successful alumni across the country in diverse engineering fields ranging from rubber and plastics, automobile components and mechanics to IT and consulting, besides a good number who have become thriving entrepreneurs. The department has the highest percentage of PhDs and industry-experienced faculty among all the departments of MIT.
Research centres
Centre for Aerospace Research: The Centre for Aerospace Research (CASR) came into being in 2000 and focuses on advancing research in the aerospace sciences. The centre has design, analysis and computational capabilities to take up problems in high-speed and low-speed flows, structures, aero-thermal effects and composites. The application software regularly employed includes MSC-NASTRAN, STAR-CD, CFX, I-DEAS and TASCflow. For carrying out experimental studies in the thrust areas, the centre has laboratories and facilities with specialised equipment. The country's first student-developed satellite, ANUSAT, was designed, developed and integrated at this facility. The aerospace department includes the avionics department, which designs the control software.
Anna University – K B Chandrasekhar Research Centre (AU-KBCRC): The Anna University K B Chandrasekhar Research Centre was established at the MIT campus in May 1999 through a donation of 1.00 crore rupees by Dr K B Chandrasekhar, chairman and co-founder of Exodus Communications. The infrastructure includes 256 kbit/s Internet access over a 2 Mbit/s line and a high-end server with broad-based networking. The research activities of the centre cover cryptography, network security, internet access over the power-line medium and many other areas. In addition, the KBC Foundation has donated another 0.5 crore rupees towards setting up a state-of-the-art Java lab at the centre and 1.5 crore rupees for campus networking and video-conferencing facilities. One of the goals of the centre is to add value to the domain knowledge possessed by the faculty and students of the university in diverse domains through the use of the Internet and computer software.
Centre for Automotive Research and Training: The Centre for Automotive Research and Training (CART), an inter-disciplinary centre, was established in 1997 by the university mainly to cater to the needs of the automotive industry with regard to design, research, consultancy, training and testing. The centre currently functions both at the main campus and at the MIT campus. The centre proposes to offer a postgraduate programme titled Automotive Manufacturing Management, a highly inter-disciplinary programme framed mainly with inputs from the user industry, namely the automotive industry. The centre acts as a nodal agency interacting with automotive industries both at home and abroad and with the various academic departments and centres functioning at Anna University.
Student organisations
The institute has active National Service Scheme, National Sports Organisation (India) and Youth Red Cross chapters on its campus. Each chapter is headed by a faculty member. Membership and active participation in one of these organisations are mandatory for first-year undergraduate students. Various events are also conducted at regular intervals to ensure student participation and coordination.
Magazines
MIT has an annual magazine MITMAG.
The students of the college started a newsletter and a writers' club named "The MIT Quill". Its website hosts a number of articles written by its contributors and publishes a yearly magazine named The Quill Digest.
The Personality Development Association publishes its magazine PERSOPLUS.
Youth Red Cross publishes its magazine Vircham related to Social Awareness.
The Computer Society publishes its magazine TECH TIMES.
The Department of Electronics Engineering releases its annual magazine Impetus.
The Department of Instrumentation publishes its technical magazine InsTrue.
The Department of Production Technology releases its yearly magazine PRO-MAG.
The Rotaract Club of MIT publishes its annual magazine RotoPlus which is a collection of art and writings from physically challenged and orphanage children.
Cultural
Apart from this, Homefest, the cultural and sports festival of the Madras Institute of Technology hostels, is held during April every year. Homefest is organised by the student body of the hostels, the Hostel Committee; final-year hostel students organise the festival from scratch each year. It is funded from official hostel funds and student contributions.
Techfest and symposia
As part of its diamond jubilee celebrations, MIT presented "Asymptote 2009", held on 9, 10 and 11 January 2009. Students from other colleges participated in the event, but it was not continued in the following years.
Every department has an intra-college and inter-college technical symposium.
MIT Alumni Association
The MIT Alumni Association (MITAA) was established in Chennai, in 1966. The first General Body meeting was held on 14 April 1966 and was headed by V Gopalan (I Batch, President), C J G Chandra (I Batch, VP), P S Subramanian (6th Batch, Secretary) and B Venkatraman (4th Batch, Treasurer). K Srinivasan was the dean of the institute at that time.
Similarly, outside Chennai there are MIT Alumni Associations that are very active (in Bengaluru, Singapore and Dubai, to name a few), providing sponsorships and justified, needed financial assistance, and collaborating with the parent MIT alumni chapter and the MIT campus on a regular basis. Each alumni group has its own internal communications and active participation.
Notable alumni
Dr. A. P. J. Abdul Kalam, 11th president of India (2002–2007), Principal Scientific Advisor to the Govt. of India(1999-2002), Director General of Defence Research and Development Organisation (1992-1999) & recipient of the Bharat Ratna (1997)
Dr. K. Sivan, Chairman of the Indian Space Research Organization & Secretary, Department of Space (2018–present); former Director (2015-2018) of the Vikram Sarabhai Space Centre
Sujatha, Tamil author, novelist & screenwriter; BEL Engineer (Design and Production - Electronic Voting Machine)
R. Aravamudan, founding engineer (1962), INCOSPAR (now ISRO); former associate director of Vikram Sarabhai Space Centre (1980s), former director of TERLS (1970s), Satish Dhawan Space Centre (1989) & U R Rao Satellite Centre (1994)
Dr. R. N. Agarwal, former Program Director, AGNI and former Director, ASL; recipient of the Padma Shri (1990) & Padma Bhushan (2000)
Prof. M. K. A. Hameed, former Member of Kerala Legislative Assembly (1967–1970); first Vice Chairman, Kerala State Planning Board (1967) & Chairman, Public Accounts Committee; first Principal, T. K. M. Engineering College, Kollam and Founder of Hycount Plastics & Chemicals
Susi Ganeshan, Director, Tamil film industry
T-series
MIT has the unusual practice of the 'T-series'. This is a mentorship system, and the word is said to have originated from the term 'technocrat'. Each new 'technocrat' of the institute is assigned to a mentor from the previous batch. He/she also inherits the complete set of 'T-seniors' from this mentor. Mentorship activities span the whole gamut of student activities, from passing on books and counselling to monetary help (if needed) and placement preparation assistance (mock interviews, aptitude tests).
Each technocrat is also assigned a 'T-number'. The 'T-number' is a combination of the student's batch number, his/her department code and his/her roll number. For example, consider a student who has joined the Electronics and Instrumentation Department (department code 4). If he/she belongs to the batch of 2007–2011, he/she would be in the 60th batch from the Electronics and Instrumentation Department, and hence his/her 'T-number' would be '604xxx', where xxx represents the person's roll number. As per the 'T-series', this technocrat's mentor would be the person with the 'T-number' '594xxx'. Thus, the T-series can be traced back even to the first batch of students. When alumni from earlier batches come for get-togethers years after graduating, they still make it a practice to at least know and interact with their respective 'T-juniors' from the current batches.
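The numbering convention described above can be sketched as a small program. The department code (4) and batch number (60) follow the example in the text; the roll number and its three-digit zero-padding are assumptions for illustration.

```python
# Sketch of the T-number convention: batch number + department code + roll
# number, with the mentor holding the same slot in the previous batch.
# Zero-padding the roll number to three digits is an assumption.

def t_number(batch, dept_code, roll):
    """Compose a T-number from batch, department code and roll number."""
    return f"{batch}{dept_code}{roll:03d}"

def mentor_t_number(batch, dept_code, roll):
    """The mentor has the same department and roll slot, previous batch."""
    return t_number(batch - 1, dept_code, roll)

print(t_number(60, 4, 27))         # 604027
print(mentor_t_number(60, 4, 27))  # 594027
```

Following the chain of `mentor_t_number` calls downward in batch number is what lets the T-series be traced back to the first batch.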
MIT Museum
A museum named MIT Museum has been set up within the campus. The museum consists of photographs of all important events involving MIT. It also houses newspaper coverage, books written by MITians and a replica of a glider made by the students of the 1952 batch. The museum was inaugurated by Anna University vice-chancellor Dr P Mannar Jawahar on 18 March 2011.
Popular media
The Tamil-language feature film Five Star (2002) featured, in part, academic life at MIT. It was partly shot on the MIT campus (mainly Hangar II and the hostels) during a semester break, particularly for a song in which many available students participated. The director of the film, Susi Ganeshan, is an MIT alumnus.
References
Colleges affiliated to Anna University
University of Madras
Engineering colleges in Chennai
Academic institutions formerly affiliated with the University of Madras
Academic institutions formerly affiliated with Anna University |
18394223 | https://en.wikipedia.org/wiki/University%20College%20of%20Engineering%2C%20Thodupuzha | University College of Engineering, Thodupuzha | University College of Engineering (UCE), located near Thodupuzha, is an institute of engineering and technology, run and managed by the Centre for Professional and Advanced Studies (CPAS), established by the Government of Kerala. UCE offers both bachelor's and master's degree courses. The institute started functioning in 1996 at Thodupuzha and was later moved to its own campus at Muttom in May 2002.
History
University College of Engineering was inaugurated on 29 October 1996 by the then Kerala Minister for Education and Work, Shri. P. J. Joseph, in Thodupuzha. The college started functioning with three departments - Computer Science, Electronics and Communication, and Polymer Technology. In May 2002, the college shifted to its own campus in Muttom, adding two more departments - Information Technology and Electrical and Electronics. Later, in 2007, an MTech course in Applied Electronics began to be offered on the campus.
Campus
The college campus is located on a 25.6-acre plot near Muttom, 8 km from Thodupuzha, 52 km from Kottayam Railway Station, 60 km from Alwaye Railway Station and 59 km from Cochin International Airport, by the side of State Highway No 33.
Departments
Computer Science and Engineering
This course covers both hardware and software fields, with prominence given to the software sector. Other than the final-semester main project, the syllabus includes seminars and two design studio labs in the sixth and seventh semesters. The course aims at making its students capable of facing the challenges of the fast-growing information technology industry. The core of the course includes subjects like Data Structures, Operating Systems, Language Processors, Algorithm Analysis and Design, Computer Architecture, Computer Hardware and Peripherals, Programming Languages and Theory of Computation.
Electronics and Communication Engineering
The Department of Electronics and Communication Engineering maintains an industry-institution interaction. The curriculum includes electives like Computer Networking, Information Theory and Coding.
Polymer Engineering
The BTech Polymer Engineering course was initiated to meet the requirements of the industrial sector, giving equal emphasis to the study of rubber and plastics. The students are professionally trained through in-plant training in various industries within and outside the state, thereby gaining skill and expertise.
Information Technology
IT Department was started in 2002 as part of the college's expansion program.
Electrical and Electronics Engineering
Electrical and Electronics Engineering (EEE) Department was started in 2002 as part of the college's expansion programme.
References
External links
Engineering colleges in Kerala
Colleges affiliated to Mahatma Gandhi University, Kerala
Universities and colleges in Idukki district
Educational institutions established in 1996
1996 establishments in Kerala |
1490302 | https://en.wikipedia.org/wiki/Persistent%20uniform%20resource%20locator | Persistent uniform resource locator | A persistent uniform resource locator (PURL) is a uniform resource locator (URL) (i.e., location-based uniform resource identifier or URI) that is used to redirect to the location of the requested web resource. PURLs redirect HTTP clients using HTTP status codes.
Originally, PURLs were recognizable for being hosted at purl.org or other hostnames containing purl. Early on, many of those other hosts used descendants of the original OCLC PURL system software. Eventually, however, the PURL concept came to be generic and was used to designate any redirection service (named a PURL resolver) that:
has a "root URL" as the resolver reference (e.g. http://myPurlResolver.example);
provides means, to its user-community, to include new names in the root URL (e.g. http://myPurlResolver.example/name22);
provides means to associate each name with its URL (to be redirected), and to update this redirection-URL;
ensures the persistence (e.g. by contract) of the root URL and the PURL resolver services.
PURLs are used to curate the URL resolution process, thus solving the problem of transitory URIs in location-based URI schemes like HTTP. Technically, string resolution for a PURL is similar to SEF URL resolution.
The remainder of this article is about the OCLC's PURL system, proposed and implemented by OCLC (the Online Computer Library Center).
History
The PURL concept was developed by Stuart Weibel and Erik Jul at OCLC in 1995. The PURL system was implemented using a forked pre-1.0 release of Apache HTTP Server. The software was modernized and extended in 2007 by Zepheira under contract to OCLC and the official website moved to http://purlz.org (the 'Z' came from the Zepheira name and was used to differentiate the PURL open-source software site from the PURL resolver operated by OCLC).
PURL version numbers may be considered confusing. OCLC released versions 1 and 2 of the Apache-based source tree, initially in 1999 under the OCLC Research Public License 1.0 License and later under the OCLC Research Public License 2.0 License (http://opensource.org/licenses/oclc2). Zepheira released PURLz 1.0 in 2007 under the Apache License, Version 2.0. PURLz 2.0 was released in Beta testing in 2010 but the release was never finalized. The Callimachus Project implemented PURLs as of its 1.0 release in 2012.
The oldest PURL HTTP resolver was operated by OCLC from 1995 to September 2016 and was reached as purl.oclc.org as well as purl.org, purl.net, and purl.com.
Other notable PURL resolvers include the US Government Printing Office (http://purl.fdlp.gov), which is operated for the Federal Depository Library Program and has been in operation since 1997.
The PURL concept is used in the w3id.org, that may replace the old PURL-services and PURL-technologies.
On 27 September 2016 OCLC announced a cooperation with Internet Archive resulting in the transfer of the resolver service and its administration interface to Internet Archive. The service is supported on newly created software, separate from all previous implementations. The transfer re-enabled the ability to manage PURL definitions that had been disabled in the OCLC-hosted service for several months. The service hosted on Internet Archive servers supports access via purl.org, purl.net, purl.info, and purl.com. OCLC now redirects DNS requests for purl.oclc.org to purl.org.
Principles of operation
The PURL concept allows for generalized URL curation of HTTP URIs on the World Wide Web. PURLs allow third party control over both URL resolution and resource metadata provision.
A URL is simply an address of a resource on the World Wide Web. A Persistent URL is an address on the World Wide Web that causes a redirection to another Web resource. If a Web resource changes location (and hence URL), a PURL pointing to it can be updated. A user of a PURL always uses the same Web address, even though the resource in question may have moved. PURLs may be used by publishers to manage their own information space or by Web users to manage theirs; a PURL service is independent of the publisher of information. PURL services thus allow the management of hyperlink integrity. Hyperlink integrity is a design trade-off of the World Wide Web, but may be partially restored by allowing resource users or third parties to influence where and how a URL resolves.
A simple PURL works by responding to an HTTP GET request by returning a response of type 302 (equivalent to the HTTP status code 302, meaning "Found"). The response contains an HTTP "Location" header, the value of which is a URL that the client should subsequently retrieve via a new HTTP GET request.
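The exchange described above can be simulated in a few lines; the registered path and target URL below are hypothetical examples, not entries from any real PURL service.

```python
# Minimal simulation of a simple PURL server: an HTTP GET on a registered
# PURL path yields a 302 response whose "Location" header carries the target
# URL for the client's follow-up request. Registry contents are invented.

PURL_REGISTRY = {
    "/net/example": "http://example.com/current/location",
}

def resolve(path):
    """Return (status_code, headers) as a simple PURL server would."""
    target = PURL_REGISTRY.get(path)
    if target is None:
        return 404, {}
    return 302, {"Location": target}

status, headers = resolve("/net/example")
print(status, headers["Location"])  # 302 http://example.com/current/location
```

If the resource later moves, only the registry entry changes; clients keep using the same PURL.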
PURLs implement one form of persistent identifier for virtual resources. Other persistent identifier schemes include Digital Object Identifiers (DOIs), Life Sciences Identifiers (LSIDs) and INFO URIs. All persistent identification schemes provide unique identifiers for (possibly changing) virtual resources, but not all schemes provide curation opportunities. Curation of virtual resources has been defined as, "the active involvement of information professionals in the management, including the preservation, of digital data for future use."
PURLs have been criticized for their need to resolve a URL, thus tying a PURL to a network location. Network locations have several vulnerabilities, such as Domain Name System registrations and host dependencies. A failure to resolve a PURL could lead to an ambiguous state: It would not be clear whether the PURL failed to resolve because a network failure prevented it or because it did not exist.
PURLs are themselves valid URLs, so their components must map to the URL specification. The scheme part tells a computer program, such as a Web browser, which protocol to use when resolving the address. The scheme used for PURLs is generally HTTP. The host part tells which PURL server to connect to. The next part, the PURL domain, is analogous to a resource path in a URL. The domain is a hierarchical information space that separates PURLs and allows for PURLs to have different maintainers. One or more designated maintainers may administer each PURL domain. Finally, the PURL name is the name of the PURL itself. The domain and name together constitute the PURL's "id".
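The components named above can be picked apart programmatically. The sample PURL below is hypothetical, and treating everything up to the final path segment as the domain (with the final segment as the name) is an assumption based on the description.

```python
from urllib.parse import urlsplit

# Split a PURL into the components described above: scheme, host, domain
# (the hierarchical path up to the final segment), name (the final segment)
# and id (domain + name). The sample PURL is invented for illustration.

def purl_parts(purl):
    parts = urlsplit(purl)
    domain, _, name = parts.path.rpartition("/")
    return {
        "scheme": parts.scheme,   # protocol, generally http
        "host": parts.netloc,     # which PURL server to contact
        "domain": domain,         # hierarchical information space
        "name": name,             # the PURL's own name
        "id": parts.path,         # domain and name together
    }

print(purl_parts("http://purl.example/docs/report"))
```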
Comparing with permalink
Both permalink and PURL are used as permanent/persistent URL and redirect to the location of the requested web resource. Roughly speaking, they are the same. Their differences are about domain name and time scale:
A permalink usually does not change the URL's domain, and is designed to persist over years.
A PURL domain name is independently changeable, and is designed to persist over decades.
Types
The most common types of PURLs are named to coincide with the HTTP response code that they return. Not all HTTP response codes have equivalent PURL types and not all PURL servers implement all PURL types. Some HTTP response codes (e.g. 401, Unauthorized) have clear meanings in the context of an HTTP conversation but do not apply to the process of HTTP redirection. Three additional types of PURLs ("chain", "partial" and "clone") are given mnemonic names related to their functions.
Most PURLs are so-called "simple PURLs", which provide a redirection to the desired resource. The HTTP status code, and hence of the PURL type, of a simple PURL is 302. The intent of a 302 PURL is to inform the Web client and end user that the PURL should always be used to address the requested resource, not the final URI resolved. This is to allow continued resolution of the resource if the PURL changes. Some operators prefer to use PURLs of type 301 (indicating that the final URI should be addressed in future requests).
A PURL of type "chain" allows a PURL to redirect to another PURL in a manner identical to a 301 or 302 redirection, with the difference that a PURL server will handle the redirection internally for greater efficiency. This efficiency is useful when many redirections are possible; since some Web browsers will stop following redirections once a set limit is encountered (in an attempt to avoid loops).
A PURL of type "200" is an "Active PURL", in which the PURL actively participates in the creation or aggregation of the metadata returned. An Active PURL includes some arbitrary computation to produce its output. Active PURLs have been implemented in PURLz 2.0 and The Callimachus Project. They may be used to gather runtime status reports, perform distributed queries or any other type of data collection where a persistent identifier is desired. Active PURLs act similarly to stored procedures in relational databases.
A PURL of type "303" is used to direct a Web client to a resource that provides additional information regarding the resource they requested, without returning the resource itself. This subtlety is useful when the HTTP URI requested is used as an identifier for a physical or conceptual object that cannot be represented as an information resource. PURLs of type 303 are used most often to redirect to metadata in a serialization format of the Resource Description Framework (RDF) and have relevance for Semantic Web and linked data content. This use of the 303 HTTP status code is conformant with the http-range-14 finding of the Technical Architecture Group of the World Wide Web Consortium.
A PURL of type "307" informs a user that the resource temporarily resides at a different URL from the norm. PURLs of types 404 and 410 note that the requested resource could not be found and suggests some information for why that was so. Support for the HTTP 307 (Temporary Redirect), 404 (Not Found) and 410 (Gone) response codes are provided for completeness.
PURLs of types "404" and "410" are provided to assist administrators in marking PURLs that require repair. PURLs of these types allow for more efficient indications of resource identification failure when target resources have moved and a suitable replacement has not been identified.
PURLs of type "clone" are used solely during PURL administration as a convenient method of copying an existing PURL record into a new PURL.
Redirection of URL fragments
The PURL service includes a concept known as partial redirection. If a request does not match a PURL exactly, the requested URL is checked to determine if some contiguous front portion of the PURL string matches a registered PURL. If so, a redirection occurs with the remainder of the requested URL appended to the target URL. For example, consider a PURL with a URL of http://purl.org/some/path/ and a target URL of http://example.com/another/path/. An attempt to perform an HTTP GET operation on the URL http://purl.org/some/path/and/some/more/data would result in a partial redirection to http://example.com/another/path/and/some/more/data. The concept of partial redirection allows hierarchies of Web-based resources to be addressed via PURLs without each resource requiring its own PURL. One PURL is sufficient to serve as a top-level node for a hierarchy on a single target server. The new PURL service uses the type "partial" to denote a PURL that performs partial redirection.
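The prefix-matching rule in the example above can be sketched as follows. The registry contents mirror the example URLs from the text; matching the longest registered prefix first is an assumption about how ties between nested prefixes would be resolved.

```python
# Sketch of partial redirection: when no exact PURL matches, find a
# registered "partial" PURL that is a prefix of the requested URL and append
# the remainder of the requested URL to that PURL's target URL.

PARTIAL_PURLS = {
    "http://purl.org/some/path/": "http://example.com/another/path/",
}

def partial_redirect(requested):
    # Try longer prefixes first so the most specific partial PURL wins.
    for prefix in sorted(PARTIAL_PURLS, key=len, reverse=True):
        if requested.startswith(prefix):
            return PARTIAL_PURLS[prefix] + requested[len(prefix):]
    return None  # no registered prefix matches

print(partial_redirect("http://purl.org/some/path/and/some/more/data"))
# http://example.com/another/path/and/some/more/data
```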
Partial redirections at the level of a URL path do not violate common interpretations of the HTTP 1.1 specification. However, the handling of URL fragments across redirections has not been standardized and a consensus has not yet emerged. Fragment identifiers indicate a pointer to more specific information within a resource and are designated as following a # separator in URIs.
Partial redirection in the presence of a fragment identifier is problematic because two conflicting interpretations are possible. If a fragment is attached to a PURL of type "partial", should a PURL service assume that the fragment has meaning on the target URL or should it discard it in the presumption that a resource with a changed location may have also changed content, thus invalidating fragments defined earlier? Bos suggested that fragments should be retained and passed through to target URLs during HTTP redirections resulting in 300 (Multiple Choice), 301 (Moved Permanently), 302 (Found) or 303 (See Other) responses unless a designated target URL already includes a fragment identifier. If a fragment identifier is already present in a target URL, any fragment in the original URL should be abandoned. Unfortunately, Bos’ suggestion failed to navigate the IETF standards track and expired without further work. Dubost et al. resurrected Bos’ suggestions in a W3C Note (not a standard, but guidance in the absence of a standard). Makers of Web clients such as browsers have "generally" failed to follow Bos’ guidance.
Starting with the PURLz 1.0 series, the PURL service implements partial redirections inclusive of fragment identifiers by writing fragments onto target URLs, in an attempt to comply with this guidance and avoid problematic and inconsistent behavior by browser vendors.
See also
Implementation examples:
Archival Resource Key (ARK)
Digital Object Identifier (DOI)
Handle System identifiers
Link rot
OPAC
Permalink
URL redirection
URL shortening
Uniform Resource Name (URN)
Wayback Machine
References
External links
Official website for PURLz
Official website for The Callimachus Project
Internet Archive's PURL resolver
US Government Printing Office's PURL resolver
persistent-identifier.de
DPE/PURL Information and Resolver Site
URI schemes
Identifiers
1995 software
Free software programmed in Java (programming language)
Cross-platform free software
History of the Internet
Internet software for Linux
Unix Internet software |
28747555 | https://en.wikipedia.org/wiki/Quezon%20City%20University | Quezon City University | Quezon City University (QCU), formerly known as Quezon City Polytechnic University (QCPU), is a city government-funded university in Quezon City, Philippines. It was established on March 1, 1994 as the Quezon City Polytechnic, offering technical and vocational courses. It was renamed Quezon City Polytechnic University when it was elevated to university status in 2001. By virtue of City Ordinance No. SP-2812, series of 2019, also known as the Quezon City University Charter of 2019, QCPU was rechristened Quezon City University to qualify as a beneficiary of Republic Act 10931, also known as the free tuition law. The university was given recognition and became a full-fledged university in 2021.
History
In 1988, the Quezon City Council passed an ordinance creating a technical committee, which conducted a series of studies on the establishment of Quezon City Polytechnic University. The committee was led by Quezon City officials – the QC Mayor as Chairman, the QC Vice Mayor as Co-chairman, and the QC Chairman of the Committee on Education as Vice Chairman – while the QC Treasurer, the Director of the Bureau of Higher Education (Department of Education, Culture and Sports), a former University of the Philippines President, representatives from the Pamantasan ng Lungsod ng Maynila, the Technological University of the Philippines and the Polytechnic University of the Philippines, and four members of the City Council served as members.
As a result of the studies, and to kick off the establishment of Quezon City's local university, the Quezon City Polytechnic was created on March 1, 1994 by virtue of City Council Ordinance No. SP-0171, for the training and development of skilled and technical workers.
Three-year Associate programs were introduced at the Polytechnic in Academic Year 1994-1995, designed to develop highly competent technicians for industry in the areas of Automotive Technology, Electrical Technology, Welding Technology, Refrigeration and Air-Conditioning Technology, and Fashion Technology. In the following Academic Year, the institution offered additional three-year Associate programs in Electronics Technology, Mechanical Technology (Machine Shop), Computer Technology, and an industry-led pilot course in Boiler Technology. The Polytechnic established its reputation among local government units as a show window and model technology-based institution, paving the way for its recognition by the Technical Education and Skills Development Authority (TESDA) and the development of a strong alliance with the Japan International Cooperation Agency (JICA).
In 1997, the Quezon City Council passed Ordinance SP-544, S-97, which authorized the City Government to establish its own higher education institution by elevating the existing Quezon City Polytechnic into the Pamantasang Politekniko ng Lungsod Quezon, or Quezon City Polytechnic University. After years of preparation and budget allocation, the Quezon City Council enacted City Ordinance No. SP-1030, S-2001, providing the Charter and formal establishment of the Pamantasang Politeknikal ng Lungsod Quezon, or Quezon City Polytechnic University, and strengthening its management. The University began offering Bachelor's degree programs in Entrepreneurial Management, Industrial Engineering and Information Technology in Academic Year 2005-2006.
The Quezon City Council amended the University's Charter in 2009 through the enactment of City Ordinance SP-1945, S-2009, providing further fiscal and administrative autonomy to the University, which helped the institution optimize its academic initiatives and creativity. The ordinance allowed the University to enhance its Bachelor's degree offerings; the curricula of the Information Technology and Entrepreneurship (previously known in the University as Entrepreneurial Management) programs were improved in AY 2010-2011. In the following Academic Year, the University began offering a Bachelor's degree program in Electronics Engineering.
To support the K-12 initiative of the national government, the University temporarily offered a Senior High School program (ABM, STEM and TechVoc strands) through the enactment of City Ordinance SP-2308 in 2014. The QCPU Senior High School accepted students from Academic Year 2016-2017 until Academic Year 2020-2021.
In Academic Year 2019-2020, the Quezon City Polytechnic University was converted into the Quezon City University to align with Republic Act 10931, also known as the Universal Access to Quality Tertiary Education Act, which aims to provide free higher education in state universities and colleges. The conversion was enacted by the Quezon City Council through SP-2812, known as the Quezon City University Charter of 2019. In the same Academic Year, the University began offering a Bachelor's degree in Accountancy. QCU was formally recognized by the Commission on Higher Education in June 2021 through the awarding of Institutional Recognition (IR), which qualifies all its students for the free tuition law.
Campuses
San Bartolome (Main Campus)
The QCU Main Campus is located along Quirino Highway in Barangay San Bartolome, Novaliches. Its 4-hectare campus serves as the home of the Korea-Philippines Information Technology Training Center (KorPhil), whose advanced IT training facilities have also been made available to the University. The University also operates an Enterprise Development Center, from its main campus, designed to connect its Entrepreneurship program with the needs of small and medium-scale businesses in the city.
In June 2019, the Quezon City Government inaugurated a new seven-storey building at QCU with a 500-seat auditorium and 33 laboratories.
San Francisco
In 2006, the University opened its first satellite campus located inside the grounds of San Francisco High School in Barangay Sto. Cristo near SM City North EDSA. The Campus is connected to the Philippines' first interactive science center, the Quezon City Science Interactive Center.
Batasan Hills
QCU opened its second satellite campus in 2009 at Quezon City's Batasan Civic Center. The four-storey edifice is along IBP Road, Barangay Batasan Hills, beside Batasan Hills National High School.
Academics
Quezon City University had 8,693 students enrolled across all programs for the Academic Year 2019-20. At present, the University offers four- to five-year Bachelor's degree programs under the following colleges:
College of Accountancy and Business
Bachelor of Science in Accountancy
Bachelor of Science in Entrepreneurship
College of Computer Science and Information Technology
Bachelor of Science in Information Technology
Infrastructure Track
Management Track
Programming Track
College of Engineering
Bachelor of Science in Electronics Engineering
Bachelor of Science in Industrial Engineering
See also
Local Universities and Colleges (LUC)
Association of Local Colleges and Universities (ALCU)
Alculympics
References
External links
Educational institutions established in 1994
Universities and colleges in Quezon City
Local colleges and universities in Metro Manila
1994 establishments in the Philippines |
37531003 | https://en.wikipedia.org/wiki/The%20Sims%204 | The Sims 4 | The Sims 4 is a 2014 social simulation game developed by Maxis and published by Electronic Arts. It is the fourth major title in The Sims series, and is the sequel to The Sims 3 (2009). The Sims 4 was announced on May 6, 2013, and released in September 2014 for Microsoft Windows; versions for macOS, PlayStation 4, and Xbox One were subsequently released in 2015 and 2017.
The Sims 4 focuses on its improved character creation and housebuilding tools, as well as deeper in-game simulation with the new emotion and personality systems for Sims. The Sims 4 has received many paid downloadable content packs since its release. Eleven expansion packs, eleven "game packs", eighteen "stuff packs", and ten "kits" have been released. The most recent expansion pack is Cottage Living, released on July 22, 2021. Additionally, many free updates have been released throughout the game's lifespan that include major features and additions to the game, such as the addition of swimming pools, character customization options, and terrain tools.
The game received mixed reviews upon its release, with the majority of criticism directed towards its lack of content and missing features compared to previous entries in the series.
Gameplay
The Sims 4 is a strategic social simulation video game, similar to preceding titles in the series. There is no primary objective or goal to achieve, and instead of fulfilling objectives, the player is encouraged to make choices and engage fully in an interactive environment. The focus of the game is on the simulated lives of virtual people called "Sims", and the player is responsible for directing their actions, attending to their needs, and helping them attain their desires. Players control their life and explore different personalities which change the way the game plays out. "Simoleon" (§) is the unit of currency used in the game. Players can play with pre-existing Sims and families, or create their own in Create-a-Sim.
Sims primarily make money by getting a job, or selling crafted items such as paintings and garden produce. Sims need to develop skills for jobs and crafting items, for example, Sims in the Culinary career track need to be proficient in the Cooking skill. Players can place their Sims in pre-constructed homes or build and furnish houses in Build mode, then upload them onto an online exchange named the Gallery. Optional add-ons and expansion packs expand the number of features, tools, and objects available to play with.
Create-a-Sim
Create-a-Sim is the main interface for creating Sims in The Sims 4, and is significantly changed from previous The Sims games. Sliders for adjusting facial and bodily features are removed, and replaced by direct mouse manipulation. By clicking, dragging, and pulling with the mouse, players may directly manipulate the facial and bodily features of a Sim, such as body parts like the abdomen, chest, legs, arms and feet. Selections of pre-made designs of Sims are available to choose from, and range in body shape and ethnicity.
Sims have seven life stages: baby, toddler, child, teenager, young adult, adult, and elder. All life stages can be created in Create-a-Sim, with the exception of the baby life stage. Toddlers were absent from the original game release, but were added via a patch in 2017.
Each Sim has three personality traits, and an aspiration containing its own special trait. Traits form the personality of a Sim, and significantly affect a Sim's behavior and emotions. Aspirations are lifelong goals for Sims; completing an aspiration grants a reward trait, which gives the Sim an advantage in actions relevant to that aspiration. Sims can also have likes and dislikes that determine the aesthetics and activities they prefer.
A variety of hairstyles, each with several hair color options, are available for both male and female Sims. All clothing options are available across all outfits, and players are allowed up to five outfits per category. A filter panel allows clothing options to be sorted by color, material, outfit category, fashion choice, style, content and packs.
Gender options for Sims were expanded in a 2016 patch, allowing for greater freedom of gender expression. With this update, hairstyles and outfits can be worn by any Sim of any gender, and pregnancy is now possible regardless of gender. The diversity of skin tones in the game was greatly expanded in a 2020 patch. With this update, the skin colors of Sims are set by dividing them into warm, neutral and cold tones, and the transparency and saturation of skin tones and makeup colors can be adjusted to the player's desired color.
Build mode
Build mode is the main interface for constructing and furnishing houses and lots in The Sims 4. Buy mode, the interface for buying house furnishings in previous games, is merged into Build mode. Some locked items may be unlocked through the progression of career levels or cheats.
The Sims 4 features a revamped, room-based Build mode. Entire buildings and rooms can be moved across a lot, including all objects, floor and wall coverings, doors and windows. There is also a search function to search for Build mode objects. There are pre-made rooms that can be placed instantly. Wall heights can be adjusted. Players can also place down fully furnished rooms in a variety of styles. Swimming pools and ponds can be constructed with their respective tools.
Walls can now have three different heights, to be set for each level of a building. Windows can be moved up or down vertically along a wall. Windows can also be automatically added to rooms, then adjusted by the player as needed. Columns automatically stretch or contract to match the height of the walls on a particular level, and can be added to railings. Foundations can be added to or removed from a building, and the height of the foundation can be adjusted. Half walls can be built, in many height options. L- and U-shaped stairs can be constructed, as well as ladders. Platforms can be constructed to adjust the height of a room's floor.
Gallery
The Gallery is an online exchange for player-generated content in The Sims 4. It allows players to share Sims, families, households, rooms and buildings with other players, and allows players to download creations by other players within the game. Unlike the Exchange networks in previous titles, the Gallery is fully integrated into The Sims 4, and content can be added to the Gallery directly from within the game, or downloaded and immediately used in-game. The Gallery is also accessible through The Sims 4 website, or the now-defunct mobile app. The Gallery was made available to the PlayStation 4 and Xbox One versions of The Sims 4 in 2020.
Worlds
In The Sims 4, a world is a collection of individual neighborhoods within a single playable map. Sims can visit different worlds without having to move houses, and Sims from inactive households may occasionally be seen visiting other worlds. Most lots can be visited directly from the map, and will have a unique lot assignment. Secret lots cannot be visited from the map, and can only be accessed through interacting with neighborhood amenities, or through active careers. Additional worlds are introduced to the game via patches, expansion packs and "game packs".
Unlike The Sims 3, The Sims 4 does not have an open world feature, and travelling between lots will require a loading screen. Neighborhoods, however, allow some open world functionality by allowing Sims to explore freely within the neighborhood's boundaries. Switching between worlds also brings up a loading screen; players can jump between worlds by going into map view or by using the phone to travel.
The Sims 4 comes with three worlds: Willow Creek – a New Orleans-inspired world, Oasis Springs – a Southwestern United States-inspired world, and Newcrest – a world containing blank lots free for the player to build upon. Additional worlds are included in expansion packs and game packs, with the added world usually being a core feature of the pack. For example, Island Living introduces a tropical island world named Sulani, Jungle Adventure introduces a Latin American-inspired vacation world named Selvadorada, Get Together introduces a European-inspired world named Windenburg, and Snowy Escape introduces a snow-capped mountainous Japanese-inspired world named Mt. Komorebi.
Emotions
Emotion is a gameplay mechanic introduced in The Sims 4. It builds upon the mood system of Sims in previous titles, but is more easily affected by in-game events and social interactions with other Sims. Positive and negative moodlets (a type of buff) influence the current emotion of a Sim. The current emotional state of a Sim is depicted in the lower left corner of the screen while playing.
There are several ranges of emotions. Sims may reach one stage of an emotion, and then progress to a second, more extreme stage of the same emotion. Some emotions may lead to a third stage, which is an extreme emotional stage that can lead to emotional death, if the extreme emotion is not resolved.
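The moodlet mechanic described above — buffs that accumulate toward a dominant emotion, which can escalate through stages — can be modeled with a simple aggregation sketch (a hypothetical illustration, not Maxis' actual implementation; all names and thresholds here are invented):

```python
from dataclasses import dataclass

@dataclass
class Moodlet:
    emotion: str   # e.g. "Happy", "Angry"
    strength: int  # buff weight contributed toward that emotion

def current_emotion(moodlets, stage_thresholds=(1, 3, 5)):
    """Sum moodlet strengths per emotion and report the dominant one,
    with an escalating stage (e.g. Angry -> Very Angry -> Enraged)."""
    totals = {}
    for m in moodlets:
        totals[m.emotion] = totals.get(m.emotion, 0) + m.strength
    if not totals:
        return ("Fine", 0)  # neutral baseline with no active moodlets
    emotion = max(totals, key=totals.get)
    stage = sum(totals[emotion] >= t for t in stage_thresholds)
    return (emotion, stage)
```

Under this toy model, several stacked "Angry" moodlets would push a Sim into the third, extreme stage — mirroring how, in the game, an unresolved extreme emotion can lead to emotional death.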
Development
Development of The Sims 4 began as an online multiplayer title, under the working title of "Olympus". It was planned to incorporate online multiplayer gameplay, as part of publisher Electronic Arts's (EA) plan to release more online multiplayer titles. EA labels president Frank Gibeau stated in 2012, “I have not green-lit one game to be developed as a single player experience. Today, all of our games include online applications and digital services that make them live 24/7/365.”
However, these plans were changed after the negative launch reception of SimCity in March 2013, also developed by Maxis, which was plagued with widespread technical and gameplay problems related to the game's mandatory network connection. As a result, development for The Sims 4 pivoted back to a single-player title. The switch in format occurred relatively late in development – an alternate version of the promotional site leaked in August 2013 indicated that online functions were still present, and remnants of this planned online functionality remain in the game's files. An internet connection is only required during the initial installation process, for game activation with an Origin account.
British neoclassical composer Ilan Eshkeri composed the game's orchestral soundtrack, which was recorded at Abbey Road Studios and performed by the London Metropolitan Orchestra.
The Windows and macOS versions of The Sims 4 have extensive mod support, as is tradition in the main The Sims series. There are two types of mods: script mods and custom content. Script mods are written in Python, typically modifying or adding gameplay mechanics. There is also a large variety of custom content available, such as custom hairstyles, clothing, skin tones and furniture. The game was upgraded from Python 3.3.5 to 3.7.0 in a 2018 patch, which broke compatibility with existing script mods; all incompatible script mods had to be updated for the patch.
Marketing and release
In May 2013, Electronic Arts confirmed that The Sims 4 was in development, and would be scheduled for release in 2014. The Sims 4 was officially unveiled via gameplay demos and release trailers in August 2013 at Gamescom.
On May 14, 2014, producer Ryan Vaughan unveiled another Create-a-Sim trailer on YouTube. This included a preview of what pre-made Sims would look like in The Sims 4. The development team unveiled another trailer on May 28, 2014, that showcased the new Build mode features. Additional game footage and the release date were revealed at the Electronic Entertainment Expo (E3) on June 9, 2014, and the game's North American release date of September 2, 2014 was announced. The Sims 4 was later released on September 4, 2014, in the European Union, Australia and Brazil.
A free playable demo of the Create-a-Sim feature was made available for download on August 12, 2014.
EA announced a collaboration in 2019 with Italian designer Moschino. The collaboration includes a collection of pixel art clothing inspired by the franchise, and a Moschino-themed stuff pack. A reality competition TV series, The Sims Spark'd, premiered on TBS on July 17, 2020, featuring twelve contestants from popular YouTube channels in The Sims community. Contestants are tasked with challenges within The Sims 4 to create characters and stories following the challenge's themes and limitations.
The "Sims Sessions" music festival was a limited-time event hosted from June 29 to July 7, 2021, which was accessible within a special area in the game world. Singers Bebe Rexha, Glass Animals frontman Dave Bayley, and Joy Oladokun recorded Simlish versions of their songs "Sabotage", "Heat Waves", and "Breathe Again", respectively, for their in-game performances during the event.
Reception
The Sims 4 received mixed reviews from critics upon release. Review aggregator site Metacritic gave The Sims 4 a score of 70% based on 74 reviews, indicating "mixed or average reviews". The PlayStation 4 and Xbox One versions, released in 2017, both received a score of 66%.
Many reviewers criticized The Sims 4's lack of features and missing content compared to previous titles, particularly The Sims 3's Create-a-Style and open world features. Frequent loading screens and glitches were also mentioned. Lee Cooper of Hardcore Gamer described The Sims 4 as a "half-hearted experience wrapped in a neat and pretty package". Jim Sterling of The Escapist noted an "overall lack of engagement", and declared that "The Sims 4 is basically The Sims 3, but shrunken and sterile". Kevin VanOrd of GameSpot stated, "The Sims 4's biggest problem is that The Sims 3 exists". Kallie Plagge of IGN was disappointed by the lack of "cool objects" in place of missing content, concluding, "it's a good start to what may eventually be expanded into a great Sims game, but it's not there yet". Nick Tan of Game Revolution described the game as a "case study for loss aversion", noting frustration among Sims fans due to the missing features and content, and concluding that the game is "woefully incomplete, despite being unexpectedly solid and entertaining in its current state."
On the positive end, reviewers praised the game's new emotion and multitasking systems, visual design, revamped Build mode, and the Gallery feature. Plagge of IGN stated that she did not need to micromanage Sims' interactions thanks to the multitasking system, and described the Build mode improvements as "universally exciting". Cooper of Hardcore Gamer described the new Create-a-Sim as a "veritable hodgepodge of options", despite the lack of Create-a-Style. VanOrd of GameSpot praised the visual and audio design, and called the combination of the emotion and multitasking systems a "sheer delight". Tan of Game Revolution noted the "unbelievable" quality of the character animations and the intuitive Create-a-Sim, and stated that the Gallery feature is "swift and uncomplicated". Chris Thursten of PC Gamer highlighted the new graphical style and better animations, and stated that the emotion system "changes the feel and flow of the game". Alexander Sliwinski of Joystiq praised the new search function in Build mode.
Sales and revenue
The Sims 4 has 36 million players worldwide across all platforms as of 2021, and has generated over $1 billion of total revenue as of 2019. NetBet released a survey in 2021 on video game revenue data, estimating that The Sims 4 brings in $462 million annually.
At release, The Sims 4 was the first PC game to top the all-format video game charts in two years. In 2018, EA reported that the game had 10 million players. In 2020, the game had a total of 20 million players, and 10 million monthly active users.
Controversies
Missing features
Early in the development process, it was revealed that Create-a-Style, a color customization feature introduced in The Sims 3, would not be added to The Sims 4, in favor of other features. Maxis announced through a series of tweets that the game would ship with a "stripped-down" version of story progression (a gameplay mechanic controlling neighborhood autonomy), and that basements, grocery stores, schools and work locations would not be featured in the game.
These announcements sparked criticism among players, who speculated that arguably core features had been left out to be sold later as paid downloadable content, or cut in order to meet rushed deadlines. Maxis contended that it was "not possible for us to include every single feature and piece of content we added to The Sims 3 over the last five years", but clarified that these features might be added back at some point in the future. Producer Graham Nardone attributed the sacrifice of "standard" gameplay features to time constraints, complexity, the distribution of developers, the comparative lack of developers available to some areas of production compared to others, as well as risk factors.
Producer Rachel Franklin later acknowledged the concerns of players in an official blog post, explaining that Maxis was focused on The Sims 4's new core game engine technologies, and that the sacrifices the team had to make were a "hard pill to swallow". Franklin stated that the team was focused on "delivering on the vision set out for The Sims 4", concentrating on new features such as Sim emotions, as well as the improved Create-a-Sim and Build/Buy modes. This focus detracted from other features, such as swimming pools and the toddler Sim life stage, which have since been added to the game via free patches.
Post-release content and updates
In response to players' complaints about missing features, Maxis pledged to introduce additional content to The Sims 4 via free patches. Patches for the game usually include bug fixes and added content. Notable features added via patches include pools, toddler Sim life stage, the Newcrest world, gender customization in Create-a-Sim, hot tubs and the calendar system.
Maxis began releasing paid downloadable content packs for the game from 2015 onwards. Expansion packs focus on major new features, with many objects, clothes, styles, worlds and life states are geared towards the pack's major theme. "Game packs" also add new features, objects, clothes, styles, worlds and life states, similar to expansion packs, albeit at a smaller scale. "Stuff packs" are minor content packs which usually only add objects and minor gameplay features. "Kits" are the smallest content packs, with each kit exclusively focusing on adding Build mode items, Create-a-Sim items, or gameplay additions.
A 64-bit version of The Sims 4 was patched into the game in 2015. The 32-bit version of the game remained included alongside the 64-bit version, until the introduction of the Legacy Edition in 2019. The Legacy Edition of The Sims 4 is maintained for backward compatibility purposes, for Windows PCs unable to run the 64-bit version, and Macs that do not support the Metal graphics API. It does not support content packs released after Realm of Magic, and does not receive any further content updates.
Maxis revealed a rebrand to The Sims 4 in July 2019, introducing redesigned logos, branding colors, and a new game interface. All box arts for the base game, expansion packs, game packs and stuff packs were redesigned as well. In 2021, EA affirmed their commitment to long-term support for The Sims 4, for "ten years, fifteen years, or more", citing a "shift across the entire games industry to support and nurture our communities long-term".
Star Wars: Journey to Batuu
Star Wars: Journey to Batuu, the ninth game pack for The Sims 4, was announced on August 27, 2020. The announcement was received negatively by players, who felt the pack de-prioritized features that were still missing from the game. The announcement also indicated that the pack release was part of Maxis' roadmap for the game in 2020, and that Maxis had worked on it for months. Others presumed that it was a contractual obligation given EA's ownership of the Star Wars video game franchise.
An independent poll, hosted before the release of the pack, asked players what themes they would like to see in future The Sims 4 content packs; Star Wars came in last out of twenty-one possible choices. Executive producer Lyndsay Pearson addressed the criticism in a series of tweets on September 1, 2020, taking responsibility for approving the Star Wars pack, but clarified that Maxis simultaneously works on and listens to feedback for other themes for the game.
Variety of skin tones
Since its release, The Sims 4 has received complaints from players about the lack of realistic skin tones for Sims, particularly for darker-skinned Sims. Amira Virgil, better known as Xmiramira, who develops custom content for darker-skinned Sims, had been critical of the game's lack of diversity options in Create-a-Sim. Following her appearance on the reality TV series The Sims Spark'd, her mods adding dark skin tones for Sims received increased media attention. Pearson subsequently posted a video on Twitter in August 2020, stating that the development team would fix the visual artifacts of the existing skin tones and bring new skin tones into the game.
Pearson further reiterated these sentiments in a blog post on The Sims official website in September 2020, stating that the development team was committed to expanding representation in the game. Updates to skin tones for Sims were introduced in a December 2020 patch, which include adjustable sliders that modify the brightness of the skin colors, as well as significantly more skin tone presets. Maxis consulted custom content creators, including Xmiramira, during the process of creating the update. Sliders for makeup were also introduced, so makeup on Sims would better match the new skin tones.
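The slider behavior described above — adjusting the brightness and saturation of a base skin tone — can be illustrated with a simple HSV tweak (an illustrative sketch using Python's `colorsys` module; the function and offsets are invented for illustration and are not the game's code):

```python
import colorsys

def adjust_tone(rgb, brightness=0.0, saturation=0.0):
    """Shift a base skin tone's brightness/saturation, clamped to [0, 1].
    rgb is an (r, g, b) tuple of floats in [0, 1]; the offsets mimic
    what a slider ranging from -1.0 to +1.0 might apply to a preset."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    s = min(1.0, max(0.0, s + saturation))
    v = min(1.0, max(0.0, v + brightness))
    return colorsys.hsv_to_rgb(h, s, v)
```

Because only saturation and value change, the hue of the preset is preserved — a plausible way for a slider to offer many variations of one base tone without distorting its underlying color.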
Expansion packs
Game packs
See also
The Sims Mobile
The Sims Spark'd
List of The Sims video games
Notes
References
External links
2014 video games
Electronic Arts games
Life simulation games
MacOS games
PlayStation 4 games
Python (programming language)-scripted video games
Single-player video games
Social simulation video games
The Sims
Transgender-related video games
Video games featuring protagonists of selectable gender
Video games with downloadable content
Video games with custom soundtrack support
Windows games
Xbox Cloud Gaming games
Xbox One games
Video games scored by Ilan Eshkeri
Video games with expansion packs
Video games developed in the United States
Video games about ghosts |
8435789 | https://en.wikipedia.org/wiki/Computer%20Aid%20International | Computer Aid International | Computer Aid International is a not-for-profit organisation active in the field of Information and Communication Technologies for Development. A registered charity, Computer Aid was founded in 1996 to bridge the digital divide by providing refurbished PCs from the UK to educational and non-profit organisations in developing countries.
Computer Aid has provided over 260,000 refurbished computers to educational institutions and not-for-profit organisations in more than 110 different countries to date.
Computer Aid shipped its 100,000th computer in February 2008, sending PCs to more than 100 countries.
Organization
Computer Aid International is a non-governmental organisation registered with the Charity Commission of England & Wales (registration number: 1069256) and is a not-for-profit social business with the registration number 3442679 Companies House.
Computer Aid has offices in London, South Africa and Kenya. At the Africa HQ in Nairobi, Computer Aid has dedicated project managers who work with educational institutions and local non-profit organisations throughout Africa, supporting the application of ICT for Development.
Computer Aid has a board of trustees that meet bi-monthly to provide strategic direction and fiduciary oversight.
Professor Denis Goldberg is Computer Aid's Honorary Patron.
Strategy
Computer Aid offers a decommissioning service to UK companies, government departments and universities that are upgrading their computer systems – donated PCs are data-wiped, refurbished and tested. Non-profit organisations in the developing world can apply for refurbished computers for educational projects. Computer Aid also runs its own projects, such as eClasses, where a computer lab is set up from scratch and teachers are trained in ICT. It also created the Zubabox, a solar-powered portable ICT lab which can be deployed in areas without internet or electricity, and the Connect device, a Raspberry Pi-based device which creates a local network, allowing a classroom to access 64 GB of educational material without the need for internet.
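The offline-classroom idea behind devices like the Zubabox and Connect — a local network serving cached educational material — can be sketched with a minimal content server (a generic illustration using Python's standard library; this is not Computer Aid's actual software, and the directory path is hypothetical):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler
import functools

def serve_content(directory, port=8080):
    """Serve a directory of educational material over the local network;
    client devices on the same Wi-Fi can then browse it without internet.
    Call .serve_forever() on the returned server to start handling requests."""
    handler = functools.partial(SimpleHTTPRequestHandler, directory=directory)
    return HTTPServer(("0.0.0.0", port), handler)
```

In use, a device like this would pair the server with a Wi-Fi access point, e.g. `serve_content("/srv/edu-content").serve_forever()`, so a whole classroom can browse the same material from one low-power board.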
UK IT Donors
Computer Aid has partnered with Tier 1 to offer a secure service to UK companies and organisations replacing their hardware. The charity provides end-of-life IT asset management services, which include data removal, computer refurbishment, reuse, and recycling. Donating IT equipment to Computer Aid is in compliance with UK legislation, including the WEEE Directive, Data Protection Act and Environment Act. Computer Aid donors include Dfid, Sainsbury's, Coca-Cola, Diageo, Orange, Virgin, Betfair, PepsiCo, Investec, WWF, Christian Aid, BBC Worldwide and Ofcom.
Projects
Computer Aid supports a telemedicine project in partnership with the African Medical and Research Foundation (AMREF). This project has equipped over 40 rural hospitals in Kenya, Tanzania and Uganda with digital cameras, computers, printers and scanners, and provides training and technical support. The project enables doctors and nurses in remote rural areas to access specialist clinical support and diagnosis, improving healthcare in rural communities.
Computer Aid, in partnership with Sightsavers International, has provided PCs installed with adaptive technologies for the blind and partially sighted in more than 20 different countries.
In Cameroon, Computer Aid is working with several not-for-profit organisations to provide PCs in secondary schools and community-based organisations, including the British Council and Education Information Services International (EISERVI).
In Rwanda, Computer Aid has worked with the Kigali Institute of Education, the Ministry of Health and the Rwanda Information Technology Authority (RITA) to provide PCs to schools, health centres and telecentres countrywide.
In Burundi, Computer Aid is working with La Fondation Buntu to provide PCs to widows and orphans who were victims of the war. Computer Aid PCs are currently being used in various secondary schools in Burundi, both in Bujumbura and in the provinces.
In Zambia, Computer Aid has sent PCs to secondary schools through national distribution programmes supported by the national government and local NGOs, and ran a series of eClasses in primary and secondary schools between 2017 and 2018.
In Zimbabwe, Computer Aid has sent PCs to universities, tertiary institutions and the national consortium of libraries. Computer Aid is also working to establish relationships with non-governmental organisations (NGOs) working to promote development in Zimbabwe. They also helped create the University of Zimbabwe's first female-only ICT lab, after social issues were found to be preventing the existing labs from being used equally by both genders.
In Malawi, Computer Aid enjoys a strong partnership with the Council for Non Governmental Organizations in Malawi (CONGOMA), through which it has sent over 5,000 PCs to various NGOs, schools and universities.
Computer Aid has also sent PCs to Eswatini, South Africa, Namibia, Lesotho, and Botswana.
In Eritrea, Computer Aid is working with the British Council to provide PCs in public and school libraries.
In Ethiopia, Computer Aid works with not-for-profit organisations such as the Information Technology Development Association (ITDA), the Ethiopia Knowledge and Technology Transfer Society (EKTTS) and the Christian Relief Development Association (CRDA). Over 6,000 PCs have been provided to Ethiopian schools, tertiary institutions and other not-for-profit organisations.
In Liberia, Computer Aid is working with Stella Maris Polytechnic to provide computers to institutions of higher learning and NGOs.
Computer Aid provides computers not only to organisations in Africa but also in Asia, Eastern Europe, the Middle East and Latin America. To give a few examples: in Colombia, Computer Aid works with the International Organization for Migration (IOM) to help internally displaced children receive an education; in Ecuador, several hundred computers were donated to fair trade banana producers to improve the day-to-day running of the fair trade banana enterprise and its trade unions; and in Venezuela, Computer Aid provided PCs to an indigenous wind-powered school in the middle of the Amazonian jungle. Power supply problems in rural areas of developing countries make it sensible to use the most power-efficient options, so Computer Aid asked ZDNet to survey the available choices for low-power computing. The initial survey has been completed and field testing will now be carried out in three countries in Africa.
Computer Aid International has also developed a portable wind-powered cyber café. It can be shipped as a complete sea container and contains a fully functional cyber café, comprising a thin client network of eight monitors running off a standard Pentium 4 acting as a server. Wind panels are fitted to power the container, and a thin client network was adopted because wind power would be prohibitively expensive if standard desktops were used. The idea was developed in conjunction with Computer Aid partners in Zambia.
See also
Computer technology for developing areas
Geekcorps
Geeks Without Bounds
NetCorps
Random Hacks of Kindness
United Nations Information Technology Service (UNITeS)
References
Further reading
External links
Article on a Computer Aid project supporting blind and visually impaired children in Kenya
Article on a Computer Aid project helping farmers to maximise crop harvests in times of drought
Article on ZDNet low power computing survey for Computer Aid
Organizations established in 1997
Development charities based in the United Kingdom
Information technology charities
Charities based in London |
2214825 | https://en.wikipedia.org/wiki/Mario%20Artist | Mario Artist | Mario Artist is an interoperable suite of three games and one Internet application for the Nintendo 64: Paint Studio, Talent Studio, Polygon Studio, and Communication Kit. These flagship disks for the 64DD peripheral were developed to turn the game console into an Internet multimedia workstation. A bundle of the 64DD unit, software disks, hardware accessories, and the Randnet online service subscription package was released in Japan starting in December 1999.
Development was managed by Nintendo EAD and Nintendo of America, in conjunction with two independent development companies: Polygon Studio was developed by the professional 3D graphics software developer Nichimen Graphics, and Paint Studio by Software Creations of the UK.
Titled Mario Paint 64 in development, Paint Studio was conceived as the sequel to Mario Paint (1992) for the Super NES. IGN called Talent Studio the 64DD's "killer app".
Suite
Paint Studio
Mario Artist: Paint Studio, released on December 11, 1999, is a Mario-themed paint program. The user has a variety of brush sizes, textures, and stamps, with which to paint, draw, spray, sketch, and animate. The stock Nintendo-themed graphics include all 151 Red- and Blue-era Pokémon, Banjo-Kazooie, and Diddy Kong Racing characters. Previously titled Mario Paint 64 in development, Paint Studio has been described as the "direct follow-up" and "spiritual successor" to the SNES's Mario Paint, and as akin to an Adobe Photoshop for kids.
On June 1, 1995, Nintendo of America commissioned the independent UK game studio Software Creations, soliciting a single design concept for "a sequel to Mario Paint in 3D for the N64". John Pickford initially pitched a 3D "living playground", where the user edits the attributes of premade models such as dinosaurs, playing with sizes, behaviors, aggression, speed, and texture design. The project's working title was Creator, then Mario Paint 64, then Picture Maker as demonstrated at Nintendo's Space World trade show in November 1997, and then Mario Artist & Camera. Software Creations reflected on political infighting between Nintendo's two sites: "eventually the Japanese took control and rejected many of the ideas which had been accepted enthusiastically by the Americans, steering the project in a different direction after John left Software Creations to form Zed Two, and throwing away loads of work." The audio functionality was split out into Sound Studio, also known as Sound Maker at Nintendo Space World 1997, where it was mentioned but not shown. By 2000, development reportedly included music producer Tetsuya Komuro, but Sound Studio was ultimately canceled.
Published as a bundle with the Nintendo 64 Mouse, it is one of the two 64DD launch games of December 11, 1999, along with Doshin the Giant. Utilizing the Nintendo 64 Capture Cassette cartridge (released later in a bundle with Talent Studio), the user can import images and movies from any NTSC video source such as video tape or a video camera. The Japanese version of the Game Boy Camera can import grayscale photographs via the Transfer Pak. The studio features a unique four player drawing mode. Minigames include a fly swatting game reminiscent of that in Mario Paint, and a game reminiscent of Pokémon Snap.
Talent Studio
Mario Artist: Talent Studio, released on February 23, 2000, is bundled with the Nintendo 64 Capture Cartridge. Its working title was Talent Maker as demonstrated at Nintendo's Space World 1997 trade show in November 1997. It was described by designer Shigeru Miyamoto as "a newly reborn Mario Paint" upon a brief demonstration at the Game Developers Conference in March 1999 as his example of a fresh game concept.
The game presents the player's character design as a self-made television talent or celebrity. It is a simple animation production studio which lets the user insert captured images, such as human faces, onto 3D models made with Polygon Studio, dress the models up from an assortment of hundreds of clothes and accessories, and then animate the models with sound, music, and special effects. The player can connect an analog video source such as a VCR or camcorder to the Capture Cartridge and record movies on the Nintendo 64. A photograph of a person's face, from a video source via the Capture Cartridge or from the Game Boy Camera via the Transfer Pak, may be mapped onto characters created in Polygon Studio and placed into movies created with Talent Studio.
IGN describes Talent Studio as the 64DD's "killer app" with a graphical interface that's "so easy to use that anyone can figure it out after a few minutes", letting the user create "fashion shows, karate demonstrations, characters waiting outside a bathroom stall, and more" which feature the user's own face. Nintendo designer Yamashita Takayuki attributes his work on Talent Studio as having been foundational to his eventual work on the Mii.
According to Shigeru Miyamoto, Talent Studio's direct descendant is a GameCube prototype called Stage Debut, which used the Game Boy Advance's GameEye camera peripheral, linked to the GameCube via a cable, to map self-portraits of players onto their character models. It was publicly demonstrated with models of Miyamoto and eventual Nintendo president Satoru Iwata. Though never released, its character design features became the Mii, the Mii Channel, and features of games such as Wii Tennis.
Communication Kit
Mario Artist: Communication Kit, released on June 29, 2000, is a utility application which allowed users to connect to the Net Studio of the now-defunct Randnet dialup service and online community for 64DD users. In Net Studio, it was possible to share creations made with Paint Studio, Talent Studio, or Polygon Studio, with other Randnet members. Other features included contests, and printing services available by online mail order for making custom 3D papercraft and postcards. The Randnet network service was launched and discontinued alongside the 64DD, running from December 1, 1999 to February 28, 2001.
The disk has content that may be unlocked and used in Paint Studio.
Polygon Studio
Mario Artist: Polygon Studio, released on August 29, 2000, is a 3D computer graphics editor that lets the user design and render 3D polygon images with a simple level of detail. It has been described as a consumer version of developer Nichimen Graphics' professional tool N-World. It was originally announced as Polygon Maker at Nintendo's Space World '96 trade show, demonstrated at Space World 1997 in November 1997, and renamed Polygon Studio at Space World '99. The game was scheduled to be the final title in the original Starter Kit's mail order delivery of 64DD games, but it did not arrive on time, leading IGN to assume it had been canceled until it was later released. The Expansion Pak and the Nintendo 64 Mouse are supported peripherals.
The idea of minigames was popularized during the Nintendo 64's fifth generation of video game consoles, and some early minigames appear in Polygon Studio in the style that would later be used in the WarioWare series of games. Certain minigames originated in Polygon Studio, as explained by Goro Abe of Nintendo R&D1's so-called Wario Ware All-Star Team.
The art form of papercraft was implemented by modeling the characters in Polygon Studio and then utilizing Communication Kit to upload the data to Randnet's online printing service. The user finally cuts, folds, and pastes the resulting colored paper into a physically 3D figure.
Unreleased
Mario Artist: Game Maker
Mario Artist: Graphical Message Maker
Mario Artist: Sound Maker
Mario Artist: Video Jockey Maker
Reception
Nintendo World Report described the Mario Artist series as a "spiritual successor to Mario Paint". IGN collectively describes the Mario Artist suite as a layperson's analog to professional quality graphics development software. They state that the combination of the 64DD's mass writability and the Nintendo 64's 3D graphics allows Nintendo to "leave CD systems behind", by offering "something that couldn't be done on any other gaming console on the market" to people "who want to unleash their creative talents and perhaps learn a little bit about graphics design on the side". The developer of Paint Studio, Software Creations, roughly estimates that 7,500 copies of that game may have been sold.
IGN rated Paint Studio at 7.0 ("Good") out of 10. Peer Schneider described it as a powerful, affordable, and easy-to-use 2D and 3D content creation tool unmatched by other video game consoles, although minimally comparable to personal computer applications. He likens it to an edutainment version of Adobe Photoshop for children, and a good neophyte introduction to the Internet. He considers Paint Studio to embody Nintendo's originally highly ambitious plans for 64DD, and to thus suffer greatly due to the cancellation of most Paint Studio-integrated disk games and the application's incompatibility with cartridge-based games.
Rating it at 8.2 ("Great") out of 10, IGN calls Talent Studio the 64DD's "killer app" with a graphical interface that's "so easy to use that anyone can figure it out after a few minutes", and featuring "breathtaking motion-captured animation".
Legacy
Polygon Studio contains some mini games, which served as an inspiration and appear in WarioWare games of future console generations.
Talent Studio gave rise to an unreleased GameCube prototype called Stage Debut, which in turn yielded character design features which became the Mii, the Mii Channel, and features of other games such as Wii Tennis.
In Super Mario Odyssey, Mario can purchase and unlock a "Painter's Cap" in the Luncheon Kingdom, based on his Mario Artist appearance.
See also
Mario Paint
Super Mario Maker
Super Mario Maker 2
Famicom BASIC
Notes
References
1999 video games
2000 video games
3D graphics software
64DD games
Cancelled 64DD games
Drawing video games
Japan-exclusive video games
Nintendo franchises
Nintendo Entertainment Analysis and Development games
Raster graphics editors
Video game franchises introduced in 1999
Video games developed in Japan
Video games scored by Kazumi Totaka
|
58975563 | https://en.wikipedia.org/wiki/Landscape%20%28software%29 | Landscape (software) | Landscape is a systems management tool developed by Canonical. It can be run on-premises or in the cloud, depending on the needs of the user. It is primarily designed for use with Ubuntu editions such as Ubuntu Desktop, Server, and Core. Landscape provides administrative tools, centralized package updates, machine grouping, script deployment, security audit compliance and custom software repositories for the management of up to 40,000 instances.
Overview
Architecture
See also
Ansible (software)
Chef (software)
Puppet (software)
Salt (software)
Satellite (software)
References
External links
2007 software
Remote administration software
Software distribution
Systems management |
59337444 | https://en.wikipedia.org/wiki/List%20of%20International%20Organization%20for%20Standardization%20standards%2C%2018000-19999 | List of International Organization for Standardization standards, 18000-19999 | This is a list of published International Organization for Standardization (ISO) standards and other deliverables. For a complete and up-to-date list of all the ISO standards, see the ISO catalogue.
The standards are protected by copyright and most of them must be purchased. However, about 300 of the standards produced by ISO and IEC's Joint Technical Committee 1 (JTC1) have been made freely and publicly available.
ISO 18000 – ISO 18999
ISO/IEC 18000 Information technology – Radio frequency identification for item management
ISO/IEC 18004:2015 Information technology – Automatic identification and data capture techniques – QR Code bar code symbology specification
ISO/IEC 18009:1999 Information technology – Programming languages – Ada: Conformity assessment of a language processor
ISO/IEC 18010:2002 Information technology - Pathways and spaces for customer premises cabling
ISO/IEC 18012 Information technology - Home Electronic System - Guidelines for product interoperability
ISO/IEC 18012-1:2004 Part 1: Introduction
ISO/IEC 18012-2:2012 Part 2: Taxonomy and application interoperability model
ISO/IEC 18013 Information technology – Personal identification – ISO-compliant driving licence
ISO/IEC 18013-1:2005 Part 1: Physical characteristics and basic data set
ISO/IEC 18013-2:2008 Part 2: Machine-readable technologies
ISO/IEC 18013-3:2017 Part 3: Access control, authentication and integrity validation
ISO/IEC 18013-4:2019 Part 4: Test methods
ISO/IEC 18013-5 Part 5: Mobile driving licence (mDL) application
ISO/IEC 18014 Information technology – Security techniques – Time-stamping services
ISO/IEC TR 18015:2006 Information technology - Programming languages, their environments and system software interfaces - Technical Report on C++ Performance
ISO/IEC TR 18016:2003 Information technology – Message Handling Systems (MHS): Interworking with Internet e-mail
ISO/IEC 18017:2001 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Mapping functions for the employment of Virtual Private Network scenarios
ISO/IEC TR 18018:2010 Information technology - Systems and software engineering - Guide for configuration management tool capabilities
ISO/IEC 18021:2002 Information technology – User interfaces for mobile tools for management of database communications in a client-server model
ISO/IEC 18023 Information technology – SEDRIS
ISO/IEC 18023-1:2006 Part 1: Functional specification
ISO/IEC 18023-2:2006 Part 2: Abstract transmittal format
ISO/IEC 18023-3:2006 Part 3: Transmittal format binary encoding
ISO/IEC 18024 Information technology – SEDRIS language bindings
ISO/IEC 18024-4:2006 Part 4: C
ISO/IEC 18025:2005 Information technology – Environmental Data Coding Specification (EDCS)
ISO/IEC 18026:2009 Information technology – Spatial Reference Model (SRM)
ISO/IEC 18031:2011 Information technology - Security techniques - Random bit generation
ISO/IEC 18032:2005 Information technology - Security techniques - Prime number generation
ISO/IEC 18033 Information technology – Security techniques – Encryption algorithms
ISO/IEC 18033-1:2015 Part 1: General
ISO/IEC 18033-2:2006 Part 2: Asymmetric ciphers
ISO/IEC 18033-3:2010 Part 3: Block ciphers
ISO/IEC 18033-4:2011 Part 4: Stream ciphers
ISO/IEC 18033-5:2015 Part 5: Identity-based ciphers
ISO/IEC 18035:2003 Information technology – Icon symbols and functions for controlling multimedia software applications
ISO/IEC 18036:2003 Information technology - Icon symbols and functions for World Wide Web browser toolbars
ISO/IEC TR 18037:2008 Programming languages - C - Extensions to support embedded processors
ISO/IEC 18041 Information technology – Computer graphics, image processing and environmental data representation – Environmental Data Coding Specification (EDCS) language bindings
ISO/IEC 18041-4:2016 Part 4: C
ISO/IEC 18042 Information technology – Computer graphics and image processing – Spatial Reference Model (SRM) language bindings
ISO/IEC 18042-4:2006 Part 4: C
ISO/IEC 18045:2008 Information technology - Security techniques - Methodology for IT security evaluation
ISO/IEC 18046 Information technology - Radio frequency identification device performance test methods
ISO/IEC 18046-1:2011 Part 1: Test methods for system performance
ISO/IEC 18046-2:2011 Part 2: Test methods for interrogator performance
ISO/IEC 18046-3:2012 Part 3: Test methods for tag performance
ISO/IEC 18046-4:2015 Part 4: Test methods for performance of RFID gates in libraries
ISO/IEC 18047 Information technology - Radio frequency identification device conformance test methods
ISO/IEC 18047-2:2012 Part 2: Test methods for air interface communications below 135 kHz
ISO/IEC TR 18047-3:2011 Part 3: Test methods for air interface communications at 13,56 MHz
ISO/IEC TR 18047-4:2004 Part 4: Test methods for air interface communications at 2,45 GHz
ISO/IEC 18047-6:2012 Part 6: Test methods for air interface communications at 860 MHz to 960 MHz
ISO/IEC TR 18047-7:2010 Part 7: Test methods for active air interface communications at 433 MHz
ISO/IEC 18050:2006 Information technology - Office equipment - Print quality attributes for machine readable Digital Postage Marks
ISO/IEC 18051:2012 Information technology – Telecommunications and information exchange between systems – Services for Computer Supported Telecommunications Applications (CSTA) Phase III
ISO/IEC 18052:2012 Information technology – Telecommunications and information exchange between systems – ASN.1 for Computer Supported Telecommunications Applications (CSTA) Phase III
ISO/IEC TR 18053:2000 Information technology - Telecommunications and information exchange between systems - Glossary of definitions and terminology for Computer Supported Telecommunications Applications (CSTA) Phase III
ISO/IEC 18056:2012 Information technology – Telecommunications and information exchange between systems – XML Schema Definitions for Computer Supported Telecommunications Applications (CSTA) Phase III
ISO/IEC TR 18057:2004 Information technology – Telecommunications and information exchange between systems – Using ECMA-323 (CSTA XML) in a Voice Browser Environment
ISO/TS 18062:2016 Health informatics – Categorial structure for representation of herbal medicaments in terminological systems
ISO 18064:2014 Thermoplastic elastomers – Nomenclature and abbreviated terms
ISO 18065:2015 Tourism and related services – Tourist services for public use provided by Natural Protected Areas Authorities – Requirements
ISO 18082:2014 Anaesthetic and respiratory equipment – Dimensions of non-interchangeable screw-threaded (NIST) low-pressure connectors for medical gases
ISO 18091:2014 Quality management systems – Guidelines for the application of ISO 9001:2008 in local government
ISO/IEC 18092:2013 Information technology – Telecommunications and information exchange between systems – Near Field Communication – Interface and Protocol (NFCIP-1)
ISO/IEC 18093:1999 Information technology - Data interchange on 130 mm optical disk cartridges of type WORM (Write Once Read Many) using irreversible effects - Capacity: 5,2 Gbytes per cartridge
ISO 18104:2014 Health informatics – Categorial structures for representation of nursing diagnoses and nursing actions in terminological systems
ISO/TS 18110:2015 Nanotechnologies - Vocabularies for science, technology and innovation indicators
ISO 18115 Surface chemical analysis – Vocabulary
ISO 18115-1:2013 Part 1: General terms and terms used in spectroscopy
ISO 18115-2:2013 Part 2: Terms used in scanning-probe microscopy
ISO 18117:2009 Surface chemical analysis – Handling of specimens prior to analysis
ISO 18118:2004 Surface chemical analysis - Auger electron spectroscopy and X-ray photoelectron spectroscopy - Guide to the use of experimentally determined relative sensitivity factors for the quantitative analysis of homogeneous materials
ISO/IEC TR 18120:2016 Information technology - Learning, education, and training - Requirements for e-textbooks in education
ISO/IEC TR 18121:2015 Information technology - Learning, education and training - Virtual experiment framework
ISO/TR 18128:2014 Information and documentation - Risk assessment for records processes and systems
ISO 18129:2015 Condition monitoring and diagnostics of machines – Approaches for performance diagnosis
ISO/TS 18152 Ergonomics of human-system interaction – Specification for the process assessment of human-system issues
ISO 18158:2016 Workplace air - Terminology
ISO/TS 18173:2005 Non-destructive testing - General terms and definitions
ISO/IEC 18180:2013 Information technology - Specification for the Extensible Configuration Checklist Description Format (XCCDF) Version 1.2
ISO 18185 Freight containers – Electronic seals
ISO 18189:2016 Ophthalmic optics – Contact lenses and contact lens care products – Cytotoxicity testing of contact lenses in combination with lens care solution to evaluate lens/solution interactions
ISO 18190:2016 Anaesthetic and respiratory equipment – General requirements for airways and related equipment
ISO 18192 Implants for surgery – Wear of total intervertebral spinal disc prostheses
ISO 18192-1:2011 Part 1: Loading and displacement parameters for wear testing and corresponding environmental conditions for test
ISO 18192-2:2010 Part 2: Nucleus replacements
ISO 18192-3:2017 Part 3: Impingement-wear testing and corresponding environmental conditions for test of lumbar prostheses under adverse kinematic conditions
ISO/TR 18196:2016 Nanotechnologies – Measurement technique matrix for the characterization of nano-objects
ISO 18215:2015 Ships and marine technology - Vessel machinery operations in polar waters - Guidelines
ISO 18232:2006 Health Informatics – Messages and communication – Format of length limited globally unique string identifiers
ISO/TS 18234 Intelligent transport systems – Traffic and travel information via transport protocol experts group, generation 1 (TPEG1) binary data format
ISO/TS 18234-1:2013 Part 1: Introduction, numbering and versions (TPEG1-INV)
ISO/TS 18234-2:2013 Part 2: Syntax, semantics and framing structure (TPEG1-SSF)
ISO/TS 18234-3:2013 Part 3: Service and network information (TPEG1-SNI)
ISO/TS 18234-4:2006 Part 4: Road Traffic Message (RTM) application
ISO/TS 18234-5:2006 Part 5: Public Transport Information (PTI) application
ISO/TS 18234-6:2006 Part 6: Location referencing applications
ISO/TS 18234-7:2013 Part 7: Parking information (TPEG1-PKI)
ISO/TS 18234-8:2012 Part 8: Congestion and Travel Time application (TPEG1-CTT)
ISO/TS 18234-9:2013 Part 9: Traffic event compact (TPEG1-TEC)
ISO/TS 18234-10:2013 Part 10: Conditional access information (TPEG1-CAI)
ISO/TS 18234-11:2013 Part 11: Location Referencing Container (TPEG1-LRC)
ISO 18241:2016 Cardiovascular implants and extracorporeal systems – Cardiopulmonary bypass systems – Venous bubble traps
ISO 18242:2016 Cardiovascular implants and extracorporeal systems – Centrifugal blood pumps
ISO 18245:2003 Retail financial services – Merchant category codes
ISO 18259:2014 Ophthalmic optics – Contact lens care products – Method to assess contact lens care products with contact lenses in a lens case, challenged with bacterial and fungal organisms
ISO/IEC TR 18268:2013 Identification cards – Contactless integrated circuit cards – Proximity cards – Multiple PICCs in a single PCD field
ISO 18295 Customer contact centres
ISO 18295-1:2017 Part 1: Requirements for customer contact centres
ISO 18295-2:2017 Part 2: Requirements for clients using the services of customer contact centres
ISO/IEC 18305:2016 Information technology - Real time locating systems - Test and evaluation of localization and tracking systems
ISO/TR 18307:2001 Health informatics – Interoperability and compatibility in messaging and communication standards – Key characteristics
ISO 18308:2011 Health informatics – Requirements for an electronic health record architecture
ISO 18312 Mechanical vibration and shock – Measurement of vibration power flow from machines into connected support structures
ISO 18312-1:2012 Part 1: Direct method
ISO 18312-2:2012 Part 2: Indirect method
ISO/TR 18317:2017 Intelligent transport systems – Pre-emption of ITS communication networks for disaster and emergency communication – Use case scenarios
ISO 18322:2017 Space systems – General management requirements for space test centres
ISO/IEC 18328 Identification cards – ICC-managed devices
ISO/IEC 18328-1:2015 Part 1: General framework
ISO/IEC 18328-2:2015 Part 2: Physical characteristics and test methods for cards with devices
ISO/IEC 18328-3:2016 Part 3: Organization, security and commands for interchange
ISO/TS 18339:2015 Endotherapy devices – Eyepiece cap and light guide connector
ISO/TS 18340:2015 Endoscopes – Trocar pins, trocar sleeves and endotherapy devices for use with trocar sleeves
ISO/TS 18344:2016 Effectiveness of paper deacidification processes
ISO 18365:2013 Hydrometry – Selection, establishment and operation of a gauging station
ISO/IEC 18367:2016 Information technology - Security techniques - Cryptographic algorithms and security mechanisms conformance testing
ISO 18369 Ophthalmic optics - Contact lenses
ISO 18369-1:2017 Part 1: Vocabulary, classification system and recommendations for labelling specifications
ISO 18369-2:2017 Part 2: Tolerances
ISO 18369-3:2017 Part 3: Measurement methods
ISO 18369-4:2017 Part 4: Physicochemical properties of contact lens materials
ISO/IEC 18370 Information technology - Security techniques - Blind digital signatures
ISO/IEC 18370-1:2016 Part 1: General
ISO/IEC 18370-2:2016 Part 2: Discrete logarithm based mechanisms
ISO/IEC 18372:2004 Information technology – RapidIO interconnect specification
ISO/TR 18476:2017 Ophthalmic optics and instruments – Free form technology – Spectacle lenses and measurement
ISO/IEC 18384 Information technology - Reference Architecture for Service Oriented Architecture (SOA RA)
ISO/IEC 18384-1:2016 Part 1: Terminology and concepts for SOA
ISO/IEC 18384-2:2016 Part 2: Reference Architecture for SOA Solutions
ISO/IEC 18384-3:2016 Part 3: Service Oriented Architecture ontology
ISO 18385:2016 Minimizing the risk of human DNA contamination in products used to collect, store and analyze biological material for forensic purposes – Requirements
ISO 18388:2016 Technical product documentation (TPD) – Relief grooves – Types and dimensioning
ISO 18391:2016 Geometrical product specifications (GPS) - Population specification
ISO/TR 18401:2017 Nanotechnologies - Plain language explanation of selected terms from the ISO/IEC 80004 series
ISO 18404:2015 Quantitative methods in process improvement - Six Sigma - Competencies for key personnel and their organizations in relation to Six Sigma and Lean implementation
ISO 18405:2017 Underwater acoustics - Terminology
ISO 18406:2017 Underwater acoustics – Measurement of radiated underwater sound from percussive pile driving
ISO 18414:2006 Acceptance sampling procedures by attributes - Accept-zero sampling system based on credit principle for controlling outgoing quality
ISO 18415:2017 Cosmetics – Microbiology – Detection of specified and non-specified microorganisms
ISO 18416:2015 Cosmetics – Microbiology – Detection of Candida albicans
ISO 18431 Mechanical vibration and shock – Signal processing
ISO 18431-1:2005 Part 1: General introduction
ISO 18431-2:2004 Part 2: Time domain windows for Fourier Transform analysis
ISO 18431-3:2014 Part 3: Methods of time-frequency analysis
ISO 18431-4:2007 Part 4: Shock-response spectrum analysis
ISO 18434 Condition monitoring and diagnostics of machines – Thermography
ISO 18434-1:2008 Part 1: General procedures
ISO 18436 Condition monitoring and diagnostics of machines – Requirements for qualification and assessment of personnel
ISO 18436-1:2012 Part 1: Requirements for assessment bodies and the assessment process
ISO 18436-2:2014 Part 2: Vibration condition monitoring and diagnostics
ISO 18436-3:2012 Part 3: Requirements for training bodies and the training process
ISO 18436-4:2014 Part 4: Field lubricant analysis
ISO 18436-5:2012 Part 5: Lubricant laboratory technician/analyst
ISO 18436-6:2014 Part 6: Acoustic emission
ISO 18436-7:2014 Part 7: Thermography
ISO 18436-8:2013 Part 8: Ultrasound
ISO 18437 Mechanical vibration and shock – Characterization of the dynamic mechanical properties of visco-elastic materials
ISO 18437-1:2012 Part 1: Principles and guidelines
ISO 18437-2:2005 Part 2: Resonance method
ISO 18437-3:2005 Part 3: Cantilever shear beam method
ISO 18437-4:2008 Part 4: Dynamic stiffness method
ISO 18437-5:2011 Part 5: Poisson ratio based on comparison between measurements and finite element analysis
ISO/IEC 18450:2013 Information technology - Telecommunications and information exchange between systems - Web Services Description Language (WSDL) for CSTA Phase III
ISO 18451 Pigments, dyestuffs and extenders – Terminology
ISO 18451-1:2015 Part 1: General terms
ISO 18457:2016 Biomimetics – Biomimetic materials, structures and components
ISO 18458:2015 Biomimetics – Terminology, concepts and methodology
ISO 18459:2015 Biomimetics – Biomimetic structural optimization
ISO 18461:2016 International museum statistics
ISO 18465:2017 Microbiology of the food chain - Quantitative determination of emetic toxin (cereulide) using LC-MS/MS
ISO/IEC 18477 Information technology - Scalable compression and coding of continuous-tone still images
ISO/IEC 18477-1:2015 Part 1: Scalable compression and coding of continuous-tone still images
ISO/IEC 18477-2:2016 Part 2: Coding of high dynamic range images
ISO/IEC 18477-3:2015 Part 3: Box file format
ISO/IEC 18477-6:2016 Part 6: IDR Integer Coding
ISO/IEC 18477-7:2017 Part 7: HDR Floating-Point Coding
ISO/IEC 18477-8:2016 Part 8: Lossless and near-lossless coding
ISO/IEC 18477-9:2016 Part 9: Alpha channel coding
ISO 18490:2015 Non-destructive testing – Evaluation of vision acuity of NDT personnel
ISO 18495 Intelligent transport systems – Commercial freight – Automotive visibility in the distribution supply chain
ISO 18495-1:2016 Part 1: Architecture and data definitions
ISO/IEC TS 18508:2015 Information technology - Additional Parallel Features in Fortran
ISO 18513:2003 Tourism services - Hotels and other types of tourism accommodation - Terminology
ISO/TR 18529:2000 Ergonomics – Ergonomics of human-system interaction – Human-centred lifecycle process descriptions
ISO/TS 18530:2014 Health Informatics – Automatic identification and data capture marking and labelling – Subject of care and individual provider identification
ISO/TR 18532:2009 Guidance on the application of statistical methods to quality and to industrial standardization
ISO 18541 Road vehicles – Standardized access to automotive repair and maintenance information (RMI)
ISO 18542 Road vehicles – Standardized repair and maintenance information (RMI) terminology
ISO 18542-1:2012 Part 1: General information and use case definition
ISO 18542-2:2014 Part 2: Standardized process implementation requirements, Registration Authority
ISO 18562 Biocompatibility evaluation of breathing gas pathways in healthcare applications
ISO 18562-1:2017 Part 1: Evaluation and testing within a risk management process
ISO 18562-2:2017 Part 2: Tests for emissions of particulate matter
ISO 18562-3:2017 Part 3: Tests for emissions of volatile organic compounds (VOCs)
ISO 18562-4:2017 Part 4: Tests for leachables in condensate
ISO 18564:2016 Machinery for forestry – Noise test code
ISO/IEC 18584:2015 Information technology – Identification cards – Conformance test requirements for on-card biometric comparison applications
ISO 18587:2017 Translation services - Post-editing of machine translation output - Requirements
ISO 18589 Measurement of radioactivity in the environment - Soil
ISO 18589-1:2005 Part 1: General guidelines and definitions
ISO 18589-2:2015 Part 2: Guidance for the selection of the sampling strategy, sampling and pre-treatment of samples
ISO 18589-3:2015 Part 3: Test method of gamma-emitting radionuclides using gamma-ray spectrometry
ISO 18589-4:2009 Part 4: Measurement of plutonium isotopes (plutonium 238 and plutonium 239 + 240) by alpha spectrometry
ISO 18589-5:2009 Part 5: Measurement of strontium 90
ISO 18589-6:2009 Part 6: Measurement of gross alpha and gross beta activities
ISO 18589-7:2013 Part 7: In situ measurement of gamma-emitting radionuclides
ISO 18593:2004 Microbiology of food and animal feeding stuffs – Horizontal methods for sampling techniques from surfaces using contact plates and swabs
ISO/IEC 18598:2016 Information technology - Automated infrastructure management (AIM) systems - Requirements, data exchange and applications
ISO 18600:2015 Textile machinery and accessories – Web roller cards – Terms and definitions
ISO 18601:2013 Packaging and the environment - General requirements for the use of ISO standards in the field of packaging and the environment
ISO 18602:2013 Packaging and the environment – Optimization of the packaging system
ISO 18603:2013 Packaging and the environment – Reuse
ISO 18604:2013 Packaging and the environment – Material recycling
ISO 18605:2013 Packaging and the environment – Energy recovery
ISO 18606:2013 Packaging and the environment – Organic recycling
ISO/TS 18614:2016 Packaging - Label material - Required information for ordering and specifying self-adhesive labels
ISO 18619:2015 Image technology colour management - Black point compensation
ISO 18626:2017 Information and documentation - Interlibrary Loan Transactions
ISO 18629 Industrial automation systems and integration – Process specification language
ISO/TR 18637:2016 Nanotechnologies – Overview of available frameworks for the development of occupational exposure limits and bands for nano-objects and their aggregates and agglomerates (NOAAs)
ISO/TR 18638:2017 Health informatics – Guidance on health information privacy education in healthcare organizations
ISO 18649:2004 Mechanical vibration – Evaluation of measurement results from dynamic tests and investigations on bridges
ISO 18650 Building construction machinery and equipment – Concrete mixers
ISO 18650-1:2004 Part 1: Vocabulary and general specifications
ISO/IEC TS 18661 Information technology - Programming languages, their environments, and system software interfaces - Floating-point extensions for C
ISO/IEC TS 18661-1:2014 Part 1: Binary floating-point arithmetic
ISO/IEC TS 18661-2:2015 Part 2: Decimal floating-point arithmetic
ISO/IEC TS 18661-3:2015 Part 3: Interchange and extended types
ISO/IEC TS 18661-4:2015 Part 4: Supplementary functions
ISO/IEC TS 18661-5:2016 Part 5: Supplementary attributes
ISO 18662 Traditional Chinese medicine - Vocabulary
ISO 18662-1:2017 Part 1: Chinese Materia Medica
ISO 18665:2015 Traditional Chinese medicine – Herbal decoction apparatus
ISO 18666:2015 Traditional Chinese medicine – General requirements of moxibustion devices
ISO 18682:2016 Intelligent transport systems – External hazard detection and notification systems – Basic requirements
ISO 18739:2016 Dentistry - Vocabulary of process chain for CAD/CAM systems
ISO 18743:2015 Microbiology of the food chain – Detection of Trichinella larvae in meat by artificial digestion method
ISO 18744:2016 Microbiology of the food chain – Detection and enumeration of Cryptosporidium and Giardia in fresh leafy green vegetables and berry fruits
ISO/IEC 18745 Information technology – Test methods for machine readable travel documents (MRTD) and associated devices
ISO/IEC 18745-1:2014 Part 1: Physical test methods for passport books (durability)
ISO/IEC 18745-2:2016 Part 2: Test methods for the contactless interface
ISO 18746:2016 Traditional Chinese medicine – Sterile intradermal acupuncture needles for single use
ISO/TS 18750:2015 Intelligent transport systems – Cooperative systems – Definition of a global concept for Local Dynamic Maps
ISO/PAS 18761:2013 Use and handling of medical devices covered by the scope of ISO/TC 84 – Risk assessment on mucocutaneous blood exposure
ISO 18774:2015 Securities and related financial instruments – Financial Instrument Short Name (FISN)
ISO 18775:2008 Veneers – Terms and definitions, determination of physical characteristics and tolerances
ISO 18777:2005 Transportable liquid oxygen systems for medical use – Particular requirements
ISO 18778:2005 Respiratory equipment – Infant monitors – Particular requirements
ISO 18788:2015 Management system for private security operations – Requirements with guidance for use
ISO/TS 18790 Health informatics – Profiling framework and classification for Traditional Medicine informatics standards development
ISO/TS 18790-1:2015 Part 1: Traditional Chinese Medicine
ISO/IEC 18809:2000 Information technology – 8 mm wide magnetic tape cartridge for information interchange – Helical scan recording AIT-1 with MIC format
ISO/IEC 18810:2001 Information technology – 8 mm wide magnetic tape cartridge for information interchange – Helical scan recording AIT-2 with MIC format
ISO 18812:2003 Health informatics – Clinical analyser interfaces to laboratory information systems – Use profiles
ISO/IEC TS 18822:2015 Programming languages - C++ - File System Technical Specification
ISO/TS 18827:2017 Nanotechnologies – Electron spin resonance (ESR) as a method for measuring reactive oxygen species (ROS) generated by metal oxide nanomaterials
ISO 18829:2017 Document management - Assessing ECM/EDRM implementations - Trustworthiness
ISO 18831:2016 Clothing – Digital fittings – Attributes of virtual garments
ISO 18835:2015 Inhalational anaesthesia systems – Draw-over anaesthetic systems
ISO/IEC 18836:2001 Information technology – 8 mm wide magnetic tape cartridge for information interchange – Helical scan recording – MammothTape-2 format
ISO 18841:2018 Interpreting services – General requirements and recommendations
ISO/TR 18845:2017 Dentistry - Test methods for machining accuracy of computer-aided milling machines
ISO/TS 18867:2015 Microbiology of the food chain – Polymerase chain reaction (PCR) for the detection of food-borne pathogens – Detection of pathogenic Yersinia enterocolitica and Yersinia pseudotuberculosis
ISO 18875:2015 Coalbed methane exploration and development – Terms and definitions
ISO/TS 18876 Industrial automation systems and integration - Integration of industrial data for exchange, access and sharing
ISO/TS 18876-1:2003 Part 1: Architecture overview and description
ISO/TS 18876-2:2003 Part 2: Integration and mapping methodology
ISO 18878:2013 Mobile elevating work platforms – Operator (driver) training
ISO/IEC/IEEE 18880:2015 Information technology – Ubiquitous green community control network protocol
ISO/IEC/IEEE 18881:2016 Information technology – Ubiquitous green community control network – Control and management
ISO/IEC/IEEE 18882:2017 Information technology – Telecommunications and information exchange between systems – Ubiquitous green community control network: Heterogeneous networks convergence and scalability
ISO/IEC/IEEE 18883:2016 Information technology – Ubiquitous green community control network – Security
ISO 18913:2012 Imaging materials - Permanence - Vocabulary
ISO 18921:2008 Imaging materials - Compact discs (CD-ROM) - Method for estimating the life expectancy based on the effects of temperature and relative humidity
ISO 18925:2013 Imaging materials - Optical disc media - Storage practices
ISO 18926:2012 Imaging materials - Information stored on magneto-optical (MO) discs - Method for estimating the life expectancy based on the effects of temperature and relative humidity
ISO 18927:2013 Imaging materials - Recordable compact disc systems - Method for estimating the life expectancy based on the effects of temperature and relative humidity
ISO 18933:2012 Imaging materials – Magnetic tape – Care and handling practices for extended usage
ISO 18938:2014 Imaging materials - Optical discs - Care and handling for extended storage
ISO 19000 – ISO 19999
ISO 19001:2013 In vitro diagnostic medical devices – Information supplied by the manufacturer with in vitro diagnostic reagents for staining in biology
ISO 19005 Document management – Electronic document file format for long-term preservation
ISO/TS 19006:2016 Nanotechnologies – 5-(and 6)-Chloromethyl-2’,7’ Dichloro-dihydrofluorescein diacetate (CM-H2DCF-DA) assay for evaluating nanoparticle-induced intracellular reactive oxygen species (ROS) production in RAW 264.7 macrophage cell line
ISO 19011:2011 Guidelines for auditing management systems
ISO 19014 Earth-moving machinery – Functional safety
ISO 19017:2015 Guidance for gamma spectrometry measurement of radioactive waste
ISO 19018:2004 Ships and marine technology - Terms, abbreviations, graphical symbols and concepts on navigation
ISO 19019:2005 Sea-going vessels and marine technology - Instructions for planning, carrying out and reporting sea trials
ISO 19020:2017 Microbiology of the food chain – Horizontal method for the immunoenzymatic detection of staphylococcal enterotoxins in foodstuffs
ISO/TR 19024:2016 Evaluation of CPB devices relative to their capabilities of reducing the transmission of gaseous microemboli (GME) to a patient during cardiopulmonary bypass
ISO 19028:2016 Accessible design - Information contents, figuration and display methods of tactile guide maps
ISO/TS 19036:2006 Microbiology of food and animal feeding stuffs – Guidelines for the estimation of measurement uncertainty for quantitative determinations
ISO/TR 19038:2005 Banking and related financial services – Triple DEA – Modes of operation – Implementation guidelines
ISO 19045:2015 Ophthalmic optics – Contact lens care products – Method for evaluating Acanthamoeba encystment by contact lens care products
ISO 19054:2005 Rail systems for supporting medical equipment
ISO/TR 19057:2017 Nanotechnologies – Use and application of acellular in vitro tests and methodologies to assess nanomaterial biodurability
ISO/IEC 19058:2001 Information technology – Telecommunications and information exchange between systems – Broadband Private Integrated Services Network – Inter-exchange signalling protocol – Generic functional protocol
ISO/IEC TR 19075 Information technology – Database languages – Guidance for the use of database language SQL
ISO 19079:2016 Intelligent transport systems – Communications access for land mobiles (CALM) – 6LoWPAN networking
ISO 19080:2016 Intelligent transport systems – Communications access for land mobiles (CALM) – CoAP facility
ISO/TR 19083 Intelligent transport systems – Emergency evacuation and disaster response and recovery
ISO/TR 19083-1:2016 Part 1: Framework and concept of operation
ISO/IEC 19086 Information technology - Cloud computing - Service level agreement (SLA) framework
ISO/IEC 19086-1:2016 Part 1: Overview and concepts
ISO/IEC 19086-3:2017 Part 3: Core conformance requirements
ISO/TS 19091:2017 Intelligent transport systems – Cooperative ITS – Using V2I and I2V communications for applications related to signalized intersections
ISO 19092:2008 Financial services – Biometrics – Security framework
ISO/IEC 19099:2014 Information technology - Virtualization Management Specification
ISO 19101 Geographic information – Reference model
ISO 19101-1:2014 Part 1: Fundamentals
ISO/TS 19101-2:2008 Part 2: Imagery
ISO/TS 19103:2015 Geographic information – Conceptual schema language
ISO/TS 19104:2016 Geographic information – Terminology
ISO 19105:2000 Geographic information – Conformance and testing
ISO 19106:2004 Geographic information – Profiles
ISO 19107:2003 Geographic information – Spatial schema
ISO 19108:2002 Geographic information – Temporal schema
ISO 19109:2015 Geographic information – Rules for application schema
ISO 19110:2016 Geographic information – Methodology for feature cataloguing
ISO 19111:2007 Geographic information – Spatial referencing by coordinates
ISO 19111-2:2009 Part 2: Extension for parametric values
ISO 19112:2003 Geographic information – Spatial referencing by geographic identifiers
ISO 19113:2002 Geographic information – Quality principles [Withdrawn: replaced by ISO 19157:2013]
ISO 19114:2003 Geographic information – Quality evaluation procedures [Withdrawn: replaced by ISO 19157:2013]
ISO 19115 Geographic information – Metadata
ISO 19115-1:2014 Part 1: Fundamentals
ISO 19115-2:2009 Part 2: Extensions for imagery and gridded data
ISO/TS 19115-3:2016 Part 3: XML schema implementation for fundamental concepts
ISO 19116:2004 Geographic information – Positioning services
ISO 19117:2012 Geographic information – Portrayal
ISO 19118:2011 Geographic information – Encoding
ISO 19119:2016 Geographic information – Services
ISO/TR 19120:2001 Geographic information – Functional standards
ISO/TR 19121:2000 Geographic information – Imagery and gridded data
ISO/TR 19122:2004 Geographic information / Geomatics – Qualification and certification of personnel
ISO 19123:2005 Geographic information – Schema for coverage geometry and functions
ISO 19125 Geographic information – Simple feature access
ISO 19126:2009 Geographic information – Feature concept dictionaries and registers
ISO/TS 19127:2005 Geographic information – Geodetic codes and parameters
ISO 19128:2005 Geographic information – Web map server interface
ISO/TS 19129:2009 Geographic information – Imagery, gridded and coverage data framework
ISO/TS 19130:2010 Geographic information – Imagery sensor models for geopositioning
ISO/TS 19130-2:2014 Part 2: SAR, InSAR, lidar and sonar
ISO 19131:2007 Geographic information – Data product specifications
ISO 19132:2007 Geographic information – Location-based services – Reference model
ISO 19133:2005 Geographic information – Location-based services – Tracking and navigation
ISO 19134:2007 Geographic information – Location-based services – Multimodal routing and navigation
ISO 19135 Geographic information – Procedures for item registration
ISO 19135-1:2015 Part 1: Fundamentals
ISO/TS 19135-2:2012 Part 2: XML schema implementation
ISO 19136:2007 Geographic information – Geography Markup Language (GML)
ISO 19136-2:2015 Part 2: Extended schemas and encoding rules
ISO 19137:2007 Geographic information – Core profile of the spatial schema
ISO/TS 19138:2006 Geographic information – Data quality measures [Withdrawn: replaced by ISO 19157:2013]
ISO/TS 19139:2007 Geographic information – Metadata – XML schema implementation
ISO/TS 19139-2:2012 Part 2: Extensions for imagery and gridded data
ISO 19141:2008 Geographic information – Schema for moving features
ISO 19142:2010 Geographic information – Web Feature Service
ISO 19143:2010 Geographic information – Filter encoding
ISO 19144 Geographic information – Classification systems
ISO 19144-1:2009 Part 1: Classification system structure
ISO 19144-2:2012 Part 2: Land Cover Meta Language (LCML)
ISO 19145:2013 Geographic information – Registry of representations of geographic point location
ISO 19146:2010 Geographic information – Cross-domain vocabularies
ISO 19147:2015 Geographic information – Transfer Nodes
ISO 19148:2012 Geographic information – Linear referencing
ISO 19149:2011 Geographic information – Rights expression language for geographic information - GeoREL
ISO 19150 Geographic information – Ontology
ISO/TS 19150-1:2012 Part 1: Framework
ISO 19150-2:2015 Part 2: Rules for developing ontologies in the Web Ontology Language (OWL)
ISO 19152:2012 Geographic information – Land Administration Domain Model (LADM)
ISO 19153:2014 Geospatial Digital Rights Management Reference Model (GeoDRM RM)
ISO 19154:2014 Geographic information – Ubiquitous public access – Reference model
ISO 19155:2012 Geographic information – Place Identifier (PI) architecture
ISO 19155-2:2017 Part 2: Place Identifier (PI) linking
ISO 19156 Geographic information - Observations and measurements
ISO 19157:2013 Geographic information – Data quality
ISO/TS 19157-2:2016 Part 2: XML schema implementation
ISO/TS 19158:2012 Geographic information – Quality assurance of data supply
ISO/TS 19159 Geographic information – Calibration and validation of remote sensing imagery sensors and data
ISO/TS 19159-1:2014 Part 1: Optical sensors
ISO/TS 19159-2:2016 Part 2: Lidar
ISO 19160 Addressing
ISO 19160-1:2015 Part 1: Conceptual model
ISO 19162:2015 Geographic information – Well-known text representation of coordinate reference systems
ISO/TS 19163 Geographic information – Content components and encoding rules for imagery and gridded data
ISO/TS 19163-1:2016 Part 1: Content model
ISO/TR 19201:2013 Mechanical vibration – Methodology for selecting appropriate machinery vibration standards
ISO 19204:2017 Soil quality - Procedure for site-specific ecological risk assessment of soil contamination (soil quality TRIAD approach)
ISO 19213:2017 Implants for surgery – Test methods of material for use as a cortical bone model
ISO/IEC TS 19216:2018 Programming Languages – C++ Extensions for Networking
ISO/IEC TS 19217:2015 Information technology - Programming languages - C++ Extensions for concepts
ISO/TS 19218 Medical devices – Hierarchical coding structure for adverse events
ISO/TS 19218-1:2011 Part 1: Event-type codes
ISO/TS 19218-2:2012 Part 2: Evaluation codes
ISO/TR 19231:2014 Health informatics – Survey of mHealth projects in low and middle income countries (LMIC)
ISO 19233 Implants for surgery – Orthopaedic joint prosthesis
ISO 19233-1:2017 Part 1: Procedure for producing parametric 3D bone models from CT data of the knee
ISO/TR 19234:2016 Hydrometry – Low cost baffle solution to aid fish passage at triangular profile weirs that conform to ISO 4360
ISO 19238:2014 Radiological protection - Performance criteria for service laboratories performing biological dosimetry by cytogenetics
ISO/TR 19244:2014 Guidance on transition periods for standards developed by ISO/TC 84 – Devices for administration of medicinal products and catheters
ISO 19250:2010 Water quality – Detection of Salmonella spp.
ISO/TS 19256:2016 Health informatics – Requirements for medicinal product dictionary systems for health care
ISO 19262:2015 Photography - Archiving Systems - Vocabulary
ISO/TR 19263 Photography - Archiving systems
ISO/TR 19263-1:2017 Part 1: Best practices for digital image capture of cultural heritage material
ISO/TS 19264 Photography - Archiving systems - Image quality analysis
ISO/TS 19264-1:2017 Part 1: Reflective originals
ISO 19289:2015 Air quality – Meteorology – Siting classifications for surface observing stations on land
ISO/TS 19299:2015 Electronic fee collection – Security framework
ISO/TR 19300:2015 Graphic technology – Guidelines for the use of standards for print media production
ISO/TS 19321:2015 Intelligent transport systems – Cooperative ITS – Dictionary of in-vehicle information (IVI) data structures
ISO/TS 19337:2016 Nanotechnologies – Characteristics of working suspensions of nano-objects for in vitro assays to evaluate inherent nano-object toxicity
ISO 19343:2017 Microbiology of the food chain – Detection and quantification of histamine in fish and fishery products – HPLC method
ISO 19361:2017 Measurement of radioactivity - Determination of beta emitters activities - Test method using liquid scintillation counting
ISO/IEC 19369:2014 Information technology – Telecommunications and information exchange between systems – NFCIP-2 test methods
ISO/IEC 19395:2015 Information technology - Sustainability for and by information technology - Smart data centre resource monitoring and control
ISO 19403 Paints and varnishes – Wettability
ISO 19403-1:2017 Part 1: Terminology and general principles
ISO 19403-2:2017 Part 2: Determination of the surface free energy of solid surfaces by measuring the contact angle
ISO 19403-3:2017 Part 3: Determination of the surface tension of liquids using the pendant drop method
ISO 19403-4:2017 Part 4: Determination of the polar and dispersive fractions of the surface tension of liquids from an interfacial tension
ISO 19403-5:2017 Part 5: Determination of the polar and dispersive fractions of the surface tension of liquids from contact angles measurements on a solid with only a disperse contribution to its surface energy
ISO 19403-6:2017 Part 6: Measurement of dynamic contact angle
ISO 19403-7:2017 Part 7: Measurement of the contact angle on a tilt stage (roll-off angle)
ISO/TS 19408:2015 Footwear – Sizing – Vocabulary and terminology
ISO 19439 Enterprise integration – Framework for enterprise modelling
ISO 19440 Enterprise integration – Constructs for enterprise modelling
ISO 19443:2018 Quality management in the supply chain for the nuclear industry, based on ISO 9001, to optimize safety and quality in supplying products and services (ITNS)
ISO 19444 Document management - XML Forms Data Format
ISO 19444-1:2016 Part 1: Use of ISO 32000-2 (XFDF 3.0)
ISO 19445:2016 Graphic technology - Metadata for graphic arts workflow - XMP metadata for image and document proofing
ISO/IEC TR 19446:2015 Differences between the driving licences based on the ISO/IEC 18013 series and the European Union specifications
ISO/IEC 19459:2001 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Specification, functional model and information flows – Single Step Call Transfer Supplementary Service
ISO/IEC 19460:2003 Information technology – Telecommunications and information exchange between systems – Private Integrated Services Network – Inter-exchange signalling protocol – Single Step Call Transfer supplementary service
ISO/IEC 19464:2014 Information technology – Advanced Message Queuing Protocol (AMQP) v1.0 specification
ISO 19465:2017 Traditional Chinese medicine - Categories of traditional Chinese medicine (TCM) clinical terminological systems
ISO 19467:2017 Thermal performance of windows and doors - Determination of solar heat gain coefficient using solar simulator
ISO/CIE 19476:2014 Characterization of the performance of illuminance meters and luminance meters
ISO/TR 19480:2005 Polyethylene pipes and fittings for the supply of gaseous fuels or water – Training and assessment of fusion operators
ISO 19496 Vitreous and porcelain enamels - Terminology
ISO 19496-1:2017 Part 1: Terms and definitions
ISO 19496-2:2017 Part 2: Visual representations and descriptions
ISO/TR 19498:2015 Ophthalmic optics and instruments – Correlation of optotypes
ISO/IEC 19500 Information technology - Object Management Group - Common Object Request Broker Architecture (CORBA)
ISO/IEC 19500-1:2012 Part 1: Interfaces
ISO/IEC 19500-2:2012 Part 2: Interoperability
ISO/IEC 19500-3:2012 Part 3: Components
ISO/IEC 19501 Information technology – Open Distributed Processing – Unified Modeling Language (UML) Version 1.4.2
ISO/IEC 19502 Information technology – Meta Object Facility (MOF)
ISO/IEC 19503 Information technology – XML Metadata Interchange (XMI)
ISO/IEC 19505 Information technology - Object Management Group Unified Modeling Language (OMG UML)
ISO/IEC 19505-1:2012 Part 1: Infrastructure
ISO/IEC 19505-2:2012 Part 2: Superstructure
ISO/IEC 19506:2012 Information technology - Object Management Group Architecture-Driven Modernization (ADM) - Knowledge Discovery Meta-Model (KDM)
ISO/IEC 19507:2012 Information technology - Object Management Group Object Constraint Language (OCL)
ISO/IEC 19508:2014 Information technology - Object Management Group Meta Object Facility (MOF) Core
ISO/IEC 19509:2014 Information technology - Object Management Group XML Metadata Interchange (XMI)
ISO/IEC 19510:2013 Information technology - Object Management Group Business Process Model and Notation
ISO/IEC 19514:2017 Information technology - Object management group systems modeling language (OMG SysML)
ISO/IEC 19516:2020 Information technology — Object management group — Interface definition language (IDL) 4.2
ISO/IEC TR 19566 Information technology - JPEG Systems
ISO/IEC TR 19566-1:2016 Part 1: Packaging of information using codestreams and file formats
ISO/IEC TR 19566-2:2016 Part 2: Transport mechanisms and packaging
ISO/IEC TS 19568:2017 Programming Languages - C++ Extensions for Library Fundamentals
ISO/IEC TS 19570:2018 Programming Languages – Technical Specification for C++ Extensions for Parallelism
ISO/IEC TS 19571:2016 Programming Languages - Technical specification for C++ extensions for concurrency
ISO/TS 19590:2017 Nanotechnologies – Size distribution and concentration of inorganic nanoparticles in aqueous media via single particle inductively coupled plasma mass spectrometry
ISO/IEC 19592 Information technology - Security techniques - Secret sharing
ISO/IEC 19592-1:2016 Part 1: General
ISO 19600 Compliance management systems - Guidelines
ISO/TR 19601:2017 Nanotechnologies – Aerosol generation for air exposure studies of nano-objects and their aggregates and agglomerates (NOAA)
ISO 19611:2017 Traditional Chinese medicine – Air extraction cupping device
ISO 19614:2017 Traditional Chinese medicine – Pulse graph force transducer
ISO/IEC 19637:2016 Information technology – Sensor network testing framework
ISO/TR 19639:2015 Electronic fee collection – Investigation of EFC standards for common payment schemes for multi-modal transport services
ISO 19649:2017 Mobile robots - Vocabulary
ISO/TR 19664:2017 Human response to vibration – Guidance and terminology for instrumentation and equipment for the assessment of daily vibration exposure at the workplace according to the requirements of health and safety
ISO/IEC 19678:2015 Information Technology - BIOS Protection Guidelines
ISO 19709 Transport packaging - Small load container systems
ISO 19709-1:2016 Part 1: Common requirements and test methods
ISO/TS 19709-2:2016 Part 2: Column Stackable System (CSS)
ISO/TS 19709-3:2016 Part 3: Bond Stackable System (BSS)
ISO/TR 19716:2016 Nanotechnologies – Characterization of cellulose nanocrystals
ISO 19719:2010 Machine tools - Work holding chucks - Vocabulary
ISO 19720 Building construction machinery and equipment – Plants for the preparation of concrete and mortar
ISO 19720-1:2017 Part 1: Terminology and commercial specifications
ISO/TR 19727:2017 Medical devices – Pump tube spallation test – General procedure
ISO 19731:2017 Digital analytics and web analyses for purposes of market, opinion and social research - Vocabulary and service requirements
ISO/IEC 19752 Information technology – Method for the determination of toner cartridge yield for monochromatic electrophotographic printers and multi-function devices that contain printer components
ISO/IEC TR 19755:2003 Information technology - Programming languages, their environments and system software interfaces - Object finalization for programming language COBOL
ISO/IEC 19756:2011 Information technology - Topic Maps - Constraint Language (TMCL)
ISO/IEC 19757 Information technology – Document Schema Definition Languages (DSDL)
ISO/IEC 19757-2:2008 Part 2: Regular-grammar-based validation – RELAX NG
ISO/IEC 19757-3:2016 Part 3: Rule-based validation – Schematron
ISO/IEC 19757-4:2006 Part 4: Namespace-based Validation Dispatching Language (NVDL)
ISO/IEC 19757-5:2011 Part 5: Extensible Datatypes
ISO/IEC 19757-7:2009 Part 7: Character Repertoire Description Language (CREPDL)
ISO/IEC 19757-8:2008 Part 8: Document Semantics Renaming Language (DSRL)
ISO/IEC 19757-11:2011 Part 11: Schema association
ISO/IEC TR 19758:2003 Information technology - Document description and processing languages - DSSSL library for complex compositions
ISO/IEC TR 19759:2015 Software Engineering - Guide to the software engineering body of knowledge (SWEBOK)
ISO/IEC 19761:2011 Software engineering - COSMIC: a functional size measurement method
ISO/IEC 19762:2016 Information technology - Automatic identification and data capture (AIDC) techniques - Harmonized vocabulary
ISO/IEC 19763 Information technology - Metamodel framework for interoperability (MFI)
ISO/IEC 19763-1:2015 Part 1: Framework
ISO/IEC 19763-3:2010 Part 3: Metamodel for ontology registration
ISO/IEC 19763-5:2015 Part 5: Metamodel for process model registration
ISO/IEC 19763-6:2015 Part 6: Registry Summary
ISO/IEC 19763-7:2015 Part 7: Metamodel for service model registration
ISO/IEC 19763-8:2015 Part 8: Metamodel for role and goal model registration
ISO/IEC TR 19763-9:2015 Part 9: On demand model selection
ISO/IEC 19763-10:2014 Part 10: Core model and basic mapping
ISO/IEC 19763-12:2015 Part 12: Metamodel for information model registration
ISO/IEC TS 19763-13:2016 Part 13: Metamodel for form design registration
ISO/IEC TR 19764:2005 Information technology – Guidelines, methodology and reference criteria for cultural and linguistic adaptability in information technology products
ISO/IEC TR 19768:2007 Information technology - Programming languages - Technical Report on C++ Library Extensions
ISO/IEC 19770 Information technology – Software asset management
ISO/IEC 19772:2009 Information technology - Security techniques - Authenticated encryption
ISO/IEC 19773:2011 Information technology - Metadata Registries (MDR) modules
ISO/IEC 19774:2006 Information technology - Computer graphics and image processing - Humanoid Animation (H-Anim)
ISO/IEC 19775 Information technology—Computer graphics, image processing and environmental data representation—Extensible 3D (X3D)
ISO/IEC 19775-1:2013 Part 1: Architecture and base components
ISO/IEC 19775-2:2015 Part 2: Scene access interface (SAI)
ISO/IEC 19776 Information technology - Computer graphics, image processing and environmental data representation - Extensible 3D (X3D) encodings
ISO/IEC 19776-1:2015 Part 1: Extensible Markup Language (XML) encoding
ISO/IEC 19776-2:2015 Part 2: Classic VRML encoding
ISO/IEC 19776-3:2015 Part 3: Compressed binary encoding
ISO/IEC 19777 Information technology - Computer graphics and image processing - Extensible 3D (X3D) language bindings
ISO/IEC 19777-1:2006 Part 1: ECMAScript
ISO/IEC 19777-2:2006 Part 2: Java
ISO/IEC 19778 Information technology - Learning, education and training - Collaborative technology - Collaborative workplace
ISO/IEC 19778-1:2015 Part 1: Collaborative workplace data model
ISO/IEC 19778-2:2015 Part 2: Collaborative environment data model
ISO/IEC 19778-3:2015 Part 3: Collaborative group data model
ISO/IEC 19780 Information technology - Learning, education and training - Collaborative technology - Collaborative learning communication
ISO/IEC 19780-1:2015 Part 1: Text-based communication
ISO/IEC TR 19782:2006 Information technology - Automatic identification and data capture techniques - Effects of gloss and low substrate opacity on reading of bar code symbols
ISO/IEC 19784 Information technology – Biometric application programming interface
ISO/IEC 19784-1:2006 Part 1: BioAPI specification
ISO/IEC 19784-2:2007 Part 2: Biometric archive function provider interface
ISO/IEC 19784-4:2011 Part 4: Biometric sensor function provider interface
ISO/IEC 19785 Information technology – Common Biometric Exchange Formats Framework
ISO/IEC 19785-1:2015 Part 1: Data element specification
ISO/IEC 19785-2:2006 Part 2: Procedures for the operation of the Biometric Registration Authority
ISO/IEC 19785-3:2015 Part 3: Patron format specifications
ISO/IEC 19785-4:2010 Part 4: Security block format specifications
ISO/IEC 19788 Information technology – Learning, education and training – Metadata for learning resources
ISO/IEC 19790:2012 Information technology – Security techniques – Security requirements for cryptographic modules
ISO/IEC TR 19791:2010 Information technology - Security techniques - Security assessment of operational systems
ISO/IEC 19792:2009 Information technology - Security techniques - Security evaluation of biometrics
ISO/IEC 19793 Information technology - Open Distributed Processing—Use of UML for ODP system specifications
ISO/IEC 19794 Information technology – Biometric data interchange formats
ISO/IEC 19794-1:2011 Part 1: Framework
ISO/IEC 19794-2:2011 Part 2: Finger minutiae data
ISO/IEC 19794-3:2006 Part 3: Finger pattern spectral data
ISO/IEC 19794-4:2011 Part 4: Finger image data
ISO/IEC 19794-5:2011 Part 5: Face image data
ISO/IEC 19794-6:2011 Part 6: Iris image data
ISO/IEC 19794-7:2014 Part 7: Signature/sign time series data
ISO/IEC 19794-8:2011 Part 8: Finger pattern skeletal data
ISO/IEC 19794-9:2011 Part 9: Vascular image data
ISO/IEC 19794-10:2007 Part 10: Hand geometry silhouette data
ISO/IEC 19794-11:2013 Part 11: Signature/sign processed dynamic data
ISO/IEC 19794-14:2013 Part 14: DNA data
ISO/IEC 19794-15:2017 Part 15: Palm crease image data
ISO/IEC 19795 Information technology – Biometric performance testing and reporting
ISO/IEC 19795-1:2006 Part 1: Principles and framework
ISO/IEC 19795-2:2007 Part 2: Testing methodologies for technology and scenario evaluation
ISO/IEC TR 19795-3:2007 Part 3: Modality-specific testing
ISO/IEC 19795-4:2008 Part 4: Interoperability performance testing
ISO/IEC 19795-5:2011 Part 5: Access control scenario and grading scheme
ISO/IEC 19795-6:2012 Part 6: Testing methodologies for operational evaluation
ISO/IEC 19795-7:2011 Part 7: Testing of on-card biometric comparison algorithms
ISO/IEC 19796 Information technology - Learning, education and training - Quality management, assurance and metrics
ISO/IEC 19796-1:2005 Part 1: General approach
ISO/IEC 19796-3:2009 Part 3: Reference methods and metrics
ISO/TR 19814:2017 Information and documentation - Collections management for archives and libraries
ISO/IEC 19831:2015 Cloud Infrastructure Management Interface (CIMI) Model and RESTful HTTP-based Protocol – An Interface for Managing Cloud Infrastructure
ISO/TR 19838:2016 Microbiology – Cosmetics – Guidelines for the application of ISO standards on Cosmetic Microbiology
ISO/IEC TS 19841:2015 Technical Specification for C++ Extensions for Transactional Memory
ISO/TS 19844:2016 Health informatics – Identification of medicinal products – Implementation guidelines for data elements and structures for the unique identification and exchange of regulated information on substances
ISO/IEC 19845:2015 Information technology - Universal business language version 2.1 (UBL v2.1)
ISO 19859:2016 Gas turbine applications – Requirements for power generation
ISO 19891 Ships and marine technology - Specifications for gas detectors intended for use on board ships
ISO 19891-1:2017 Part 1: Portable gas detectors for atmosphere testing of enclosed spaces
ISO/TR 19948:2016 Earth-moving machinery – Conformity assessment and certification process
ISO 19952:2005 Footwear – Vocabulary
ISO/TS 19979:2014 Ophthalmic optics – Contact lenses – Hygienic management of multipatient use trial contact lenses
ISO 19980:2012 Ophthalmic instruments – Corneal topographers
ISO/IEC 19987:2015 Information technology - EPC Information services - Specification
ISO/IEC 19988:2015 Information technology - GS1 Core Business Vocabulary (CBV)
ISO 19993 Timber structures—Glued laminated timber—Face and edge joint cleavage test
Notes
References
External links
International Organization for Standardization
ISO Certification Provider
ISO Consultant
International Organization for Standardization |
28351265 | https://en.wikipedia.org/wiki/SS%20Ben-my-Chree%20%281927%29 | SS Ben-my-Chree (1927) | TSS (RMS) Ben-my-Chree (IV) No. 145304 – the fourth vessel in the company's history to be so named – was a passenger ferry operated by the Isle of Man Steam Packet Company between 1927 and 1965.
Ben-my-Chree was built in 1927 at the Cammell Laird shipyard, Birkenhead. She was the first steamer built after World War I for the Steam Packet Co.
When the Steam Packet ordered the vessel, a contract price of £185,000 was agreed. However, early construction was held up by the long coal strike of 1926. Steel had to be purchased from Continental sources, and her keel was not laid until November of that year.
Dimensions
Ben-my-Chree measured 2,586 GRT; length 355 feet; beam 46 feet; depth 18'6"; speed 22.5–24.5 knots. She was certified for a crew complement of 82, and had a passenger capacity of 2,586.
The first vessel in the history of the line to be constructed as an oil burner, she was fitted with two single-reduction geared turbines by Parsons developing a total shaft horsepower of , with her working boiler pressure at 220 p.s.i.
Pre-war service
Construction of the Ben-my-Chree was plagued by industrial disputes. Her builders were granted extra payments to meet overtime costs, and promised a bonus of £2,000 if they met a delivery date of 25 June 1927. Cammell Laird met this deadline: "The Ben" was launched on 5 April 1927, completed her trials on 20 June, and made her maiden voyage on Wednesday, 29 June.
The Steam Packet Company, very satisfied with the vessel, paid £192,000 including various extras, and then agreed to round up the figure to £200,000, which remained the final cost to the company.
Upon entering service, the Ben-my-Chree was widely met with high acclaim; her promenade and shade decks were partially enclosed with glass screening.
She was mainly used on the main home run between Douglas and Liverpool, and in service she regularly averaged over 20 knots between the Head and the Victoria Tower. Her average oil consumption on the route was 18.76 tons over five seasons.
"The Ben, as she was always affectionately known, was also employed on a series of Sunday excursions.
Ben-my-Chree originally entered service in the Steam Packet's traditional black livery. This changed when she was chartered by the North Lancashire Roman Catholic organization to take passengers to the Eucharistic Conference being held in Dublin in 1932. It was agreed that she would be painted white with green boot topping; the work was undertaken by Vickers Armstrong for a consideration of £63. However, the charter was not taken up, because the sleeping accommodation on board the Ben-my-Chree was considered insufficient by the would-be charterers.
The Steam Packet Board decided to retain the colour scheme, as it was believed that white and green would have a definite advertising value when the vessel was in the Mersey. The Lady of Mann was painted white for the 1933 season and the Mona's Queen was launched as a white ship in 1934.
This scheme proved very popular with passengers and complemented the luxurious interiors of all three ships.
The Ben-my-Chree was the first of three similar vessels built for the company between the wars. The second vessel was the Barrow-built Centenary Steamer, Lady of Mann, and the trio was completed when the Mona's Queen entered service in 1934.
All three ships typified the style and elegance which was associated with the Isle of Man Steam Packet Company during the 1930s, and were highly regarded by both their passengers and crew.
War service
Painted in naval grey in September 1939, the Ben-my-Chree was requisitioned and served as a personnel ship until she was released from service in May 1946.
Under the command of her Master, Captain G. Woods, "The Ben" served alongside seven of her Steam Packet sisters during Operation Dynamo. She made three trips to Dunkirk, and rescued a total of 4,095 troops from the stricken port. Her only mishap was when she sustained damage in a collision with another ship soon after leaving Folkestone for Dunkirk, and this effectively finished her part in the Dunkirk evacuation.
Later she was engaged on trooping and transport duties between Iceland, the Faroes and Britain, plying from Greenock and Invergordon, usually in the company of her sister Lady of Mann, until the beginning of 1944. During this stage of her service, she gained a reputation as a very fine sea boat, and was sometimes able to keep her station while naval vessels around her were falling back in heavy weather.
"The Ben made one trip from Skale Fjord in the Faroes to Iceland in May 1941, the week that the British battlecruiser was sunk in the area by the .
Most of her war service had seen her based in Scottish ports which continued until January 1944, when she was moved to North Shields. Ben-my-Chree was then converted to a Landing Ship Infantry (Hand Hoisting) vessel with a carrying capacity of six landing craft assault, and after her conversion she made passage to the Channel to begin her preparations for D-Day.
On D-Day, 6 June 1944, as headquarters ship of the Senior Officer of the 514th Assault Flotilla, Ben-my-Chree and her landing craft saw action off Omaha Beach, landing American troops of the Ranger Assault Group at Pointe du Hoc.
She then continued as a transport, until she was released for reconditioning in May 1946, and returned to Birkenhead in a very poor condition.
Post-war service
Rejoining the Steam Packet Fleet, Ben-my-Chree returned to service with a shortened mainmast, a shipyard strike having prevented the fitting of a normal one.
A further re-fit in the winter of 1946/47 included the shortening of her funnel, the cravat being removed in 1950.
Employed only during the summer season, Ben-my-Chree continued to give reliable service to and from the many ports then in the company's list of destinations. Her remaining life was trouble free, and she continued in service until September 1965.
Her final weekend was a busy one, typical of the Steam Packet schedule of that time. On Friday 10 September, she sailed from Douglas to Ardrossan, then left for Belfast to take a charter to Liverpool overnight. Returning to Belfast from Liverpool on Saturday night, she sailed to Douglas on Sunday morning, and then made her last journey to Liverpool as the 01:00hrs from Douglas on 13 September, with 156 passengers on board.
After a long service she was laid up in Birkenhead awaiting a purchaser, until she was bought by Van Heyghen Freres of Antwerp. Ben-my-Chree was taken under tow by the tug Fairplay XI leaving Birkenhead on Saturday, 18 December. She arrived at Bruges on 23 December, for breaking.
Gallery
References
External links
1927 ships
Ferries of the Isle of Man
Passenger ships of the United Kingdom
Ships built on the River Mersey
Ships of the Isle of Man Steam Packet Company
Steamships of the United Kingdom
Steamships |
877420 | https://en.wikipedia.org/wiki/Psychonauts | Psychonauts | Psychonauts is a 2005 platform video game developed by Double Fine Productions. The game was initially published by Majesco Entertainment for Microsoft Windows, Xbox and PlayStation 2. In 2011, Double Fine acquired the rights for the title, allowing the company to republish the title with updates for modern gaming systems and ports for Mac OS X and Linux.
Psychonauts follows the player-character Razputin (Raz), a young boy gifted with psychic abilities who runs away from the circus to try to sneak into a summer camp for those with similar powers to become a "Psychonaut", a spy with psychic abilities. He finds that there is a sinister plot occurring at the camp that only he can stop. The game is centered on exploring the strange and imaginative minds of various characters that Raz encounters as a Psychonaut-in-training/"Psycadet" to help them overcome their fears or memories of their past, so as to gain their help and progress in the game. Raz gains use of several psychic abilities during the game that are used for both attacking foes and solving puzzles.
Psychonauts was based on an abandoned concept that Tim Schafer had during the development of Full Throttle, which he expanded out into a full game through his then new company Double Fine. The game was initially backed by Microsoft's Ed Fries as a premiere title for the original Xbox console, but several internal and external issues led to difficulties for Double Fine in meeting various milestones and responding to testing feedback. Following Fries' departure in 2004, Microsoft dropped the publishing rights, making the game's future unclear. Double Fine was able to secure Majesco as a publisher a few months later allowing them to complete the game after four and a half years of development.
Despite being well received, Psychonauts did not sell well with only about 400,000 retail units sold at the time of release, leading to severe financial loss for Majesco and their departure from the video game market; the title was considered a commercial failure. Psychonauts since has earned a number of industry awards, gained a cult following and has been considered one of the greatest video games ever made. Following the acquisition of the game, Double Fine's republishing capabilities and support for modern platforms has allowed them to offer the game through digital distribution, and the company has reported that their own sales of the game have far exceeded what was initially sold on its original release, with cumulative sales of nearly 1.7 million . A sequel, Psychonauts 2, was announced at The Game Awards in December 2015 and was released on August 25, 2021.
Gameplay
Psychonauts is a platform game that incorporates various adventure elements. The player controls the main character Raz in a third-person, three-dimensional view, helping Raz to uncover a mystery at the Psychonauts training camp. Raz begins with basic movement abilities such as running and jumping, but as the game progresses, Razputin gains additional psychic powers such as telekinesis, levitation, invisibility, and pyrokinesis. These abilities allow the player to explore more of the camp as well as fight off enemies. These powers can either be awarded by completing certain story missions, gaining psi ranks during the game, or by purchasing them with hidden arrowheads scattered around the camp. Powers can be improved — such as more damaging pyrokinesis or longer periods of invisibility — through gaining additional psi ranks. The player can assign three of these powers to their controller or keyboard for quick use, but all earned powers are available at any time through a selection screen.
The game includes both the "real world" of the camp and its surroundings, as well as a number of "mental worlds" which exist in the consciousness of the game's various characters. The mental worlds have wildly differing art and level design aesthetics, but generally have a specific goal that Raz must complete to help resolve a psychological issue a character may have, allowing the game's plot to progress. Within the mental worlds are censors that react negatively to Raz's presence and will attack him. There are also various collectibles within the mental worlds, including "figments" of the character's imagination which help increase Razputin’s psi ranking, "emotional baggage" which can be sorted by finding tags and bringing them to the baggage, and "memory vaults" which can unlock a short series of slides providing extra information on that character's backstory. Most of these worlds culminate in a boss battle that fully resolves the character's emotional distress and advance the story. The player is able to revisit any of these worlds after completing them to locate any additional collectibles they may have missed. Razputin is given some items early in the game, one that allows him to leave any mental world at any time, and another that can provide hints about what to do next or how to defeat certain enemies.
Raz can take damage from psychically empowered creatures around the camp at night, or by censors in the mental worlds; due to a curse placed on his family, Raz is also vulnerable to water. If Raz’s health is drained, he is respawned at the most-recent checkpoint. However, this can only be done so many times while Raz is within a mental world, indicated by the number of remaining astral projections; if these are expended through respawning, Raz is ejected from the character's mind and must re-enter to make another attempt. Health and additional projections can be collected throughout the levels, or purchased at the camp store.
Plot
Setting
The story is set in the fictional Whispering Rock Psychic Summer Camp, a remote US government training facility under the guise of a children's summer camp. Centuries ago the area was hit by a meteor made of psitanium (a fictional element that can grant psychic powers or strengthen existing powers), creating a huge crater. The psitanium affected the local wildlife, giving them limited psychic powers, such as bears with the ability to attack with telekinetic claws, cougars with pyrokinesis, and rats with confusion gas. The Native Americans of the area called psitanium "whispering rock", which they used to build arrowheads. When settlers began inhabiting the region, the psychoactive properties of the meteor slowly drove them insane. An asylum was built to house the afflicted, but within fifteen years, the asylum had more residents than the town did and the founder Houston Thorny committed suicide by throwing himself from the asylum's tower. The government relocated the remaining inhabitants and flooded the crater to prevent further settlement, creating what is now Lake Oblongata. The asylum still stands, but has fallen into disrepair.
The government took advantage of the psitanium deposit to set up a training camp for Psychonauts, a group of agents gifted with psychic abilities by the psitanium used to help defeat evil-doers. The training ground is disguised as a summer camp for young children, but in reality helps the children to hone their abilities and to train them to be Psychonauts themselves. Due to this, only those recruited by the Psychonauts are allowed into the camp.
Characters
The protagonist and playable character of the game is Razputin Aquato (or Raz for short) (voice actor Richard Horvitz), the son of a family of circus performers, who runs away from the circus to become a Psychonaut, despite his father Augustus' wishes. His family is cursed to die in water, and a large hand attempts to submerge Raz whenever he approaches any significantly deep water. When at camp, Raz meets four of the Psychonauts that run the camp: the cool and calculating Sasha Nein (voice actor Stephen Stanton), the fun-loving Milla Vodello (voice actress Alexis Lezin), the regimental Agent/Coach Morceau Oleander (voice actor Nick Jameson), and the aged, Mark Twainesque Ford Cruller (voice actor David Kaye), said by Razputin to have been the greatest leader the Psychonauts ever had, until a past psychic duel shattered Ford's psyche and left him with dissociative identity disorder. Only when he is near the large concentration of Psitanium does his psyche come together enough to form his real personality. During his time at camp, Raz meets several of the other gifted children including Lili Zanotto (voice actress Nikki Rapp), the daughter of the Grand Head of the Psychonauts, with whom he falls in love; and Dogen Boole, a boy who goes around with a tin foil hat to prevent his abilities from causing anyone's head to explode. Raz also meets ex-residents of the insane asylum including ex-dentist Dr. Caligosto Loboto; as well as Boyd Cooper, a former security guard that holds a number of government conspiracy theories about a person known as "the Milkman"; Fred Bonaparte, an asylum orderly being possessed by the ghost of his ancestor, Napoleon Bonaparte; Gloria Van Gouton, a former actress driven insane by a family tragedy; Edgar Teglee, a painter whose girlfriend cheated on him; and Linda, the gigantic lungfish that protects the asylum from campgoers.
Story
Razputin, having fled from the circus, tries to sneak into the camp, but is caught by the Psychonauts. They agree to let him stay until his parents arrive, but refuse to let him take part in any activities. However, they do allow him to take part in Coach Oleander's "Basic Braining" course, which he easily passes. Impressed, Agent Nein invites Raz to take part in an experiment to determine the extent of his abilities. During the experiment, Raz comes across a vision of Dr. Loboto, an insane ex-dentist, extracting Dogen's brain, but is unable to intervene. Raz eventually realizes that the vision is true after finding Dogen without a brain, but the Psychonauts refuse to believe him. After receiving additional training from Agent Vodello, Raz learns that Dr. Loboto is working on behalf of Coach Oleander, who intends to harvest the campers' brains to power an army of psychic death tanks. Lili is soon abducted as well, and with both Agents Nein and Vodello missing, Raz takes it upon himself to infiltrate the abandoned Thorny Towers Home of the Disturbed insane asylum where she was taken. Agent Cruller gives him a piece of bacon which he can use to contact Agent Cruller at any time, and tasks him with retrieving the stolen brains so that he can return them to the campers.
Raz frees the mutated lungfish Linda from Coach Oleander's control, and she takes him safely across the lake. At the asylum, Raz helps the inmates overcome their illnesses, and they help him access the upper levels of the asylum, where Loboto has set up his lab. He frees Lili and restores Agents Nein and Vodello's minds, allowing them to confront Coach Oleander. The inmates subsequently burn down the asylum, allowing Coach Oleander to transfer his brain to a giant tank. Raz defeats him, but when he approaches the tank, it releases a cloud of sneezing powder, causing him to sneeze his brain out. Raz uses his telekinesis to place his brain inside the tank, merging it with Oleander's. Inside, Raz discovers that Coach Oleander's evil springs from his childhood fear of his father, who ran a butcher shop, being amplified by the psitanium. At the same time, an image of Raz's own father appears and joins forces with Oleander's father against him; however, this figure turns out to be an imposter, and Raz's real father, Augustus, appears and uses his own psychic abilities to fix his son's tangled mind and defeat the personal demons. At the camp's closing ceremony, Agent Cruller presents him with a uniform and welcomes him into the Psychonauts. Raz prepares to leave camp with his father, but word arrives that the Grand Head of the Psychonauts—Lili's father, Truman Zanotto—has been kidnapped. Thus Raz and the Psychonauts fly off on their new mission.
Development
Psychonauts was the debut title for Double Fine Productions, a development studio that Tim Schafer founded after leaving LucasArts following their decision to exit the point-and-click adventure game market. Schafer's initial studio hires included several others that worked alongside him on Grim Fandango.
The backstory for Psychonauts was originally conceived during the development of Full Throttle, where Tim Schafer envisioned a sequence where the protagonist Ben goes under a peyote-induced psychedelic experience. While this was eventually ejected from the original game (for not being family friendly enough), Schafer kept the idea and eventually developed it into Psychonauts. While still working at LucasArts, Tim Schafer decided to use the name "Raz" for a main character because he liked the nickname of the LucasArts animator, Razmig "Raz" Mavlian. When Mavlian joined Double Fine, there was increased confusion between the character and the animator. The game's associate producer, Camilla Fossen, suggested the name "Rasputin". As a compromise, Double Fine's lawyer suggested the trademarkable name "Razputin", which was used for the game.
Most of the game's dialog and script was written by Schafer and Erik Wolpaw, who at the time was a columnist for the website Old Man Murray. After establishing the game's main characters, Schafer undertook his own exercise to write out how the characters would see themselves and the other characters on a social media site similar to Friendster, which Schafer was a fan of at the time and through which he met his wife-to-be. This helped him to solidify the characters in his own head prior to writing the game's dialog, as well as providing a means of introducing the characters to the rest of the development team. To help flesh out character dialog outside of cut scenes, Schafer developed an approach that used dozens of spoken lines by a character that could be stitched together in a random manner by the game so as to reduce apparent repetition; such stitching included elements like vocal pauses and coughs that made the dialog sound more natural. Schafer used the camp and woods setting as a natural place that children would want to wander and explore.
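The random stitching approach described above can be illustrated with a short sketch. This is not Double Fine's actual system; the fragment pools, names, and structure below are invented purely to show the general idea of assembling a spoken line from interchangeable pieces (including vocal tics) so that repeats feel less canned:

```python
import random

# Hypothetical fragment pools; a real system would draw on recorded takes.
PAUSES = ["", "... ", "*cough* ", "hm, "]
OPENERS = ["You again?", "Oh, hi there.", "What now?"]
REMARKS = ["The lake looks cold today.", "Keep training, cadet.", "I lost my badge."]

def stitched_line(rng: random.Random) -> str:
    """Assemble one spoken line from randomly chosen fragments."""
    return rng.choice(PAUSES) + rng.choice(OPENERS) + " " + rng.choice(REMARKS)

# Each call produces one of 4 * 3 * 3 = 36 possible combinations,
# so repetition is far less apparent than cycling a few fixed lines.
print(stitched_line(random.Random()))
```

With only a handful of recorded fragments per slot, the number of distinct audible lines grows multiplicatively, which is the payoff of the stitching approach.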
The game's mental worlds were generally a result of an idea presented by Schafer to the team, fleshed out through concept art and gameplay concepts around the idea, and then executed into the game with the asset and gameplay developers, so each world had its own unique identity. One of the game's most famous levels is "The Milkman Conspiracy", which takes place in the mind of Boyd, one of the patients at the mental hospital who is obsessed with conspiracy theories. Schafer had been interested in knowing what went on inside the minds of those that believed in conspiracy theories, inspired by watching Capricorn One as a child. During a Double Fine dinner event, someone had uttered the line "I am the milkman, my milk is delicious.", which led Schafer to create the idea of Boyd, a milkman bent on conspiracy theories. Schafer then worked out a web of conspiracy theories, wanting the level to be a maze-like structure around those, tying that in to Boyd's backstory as a person who had been fired from many different jobs, partially inspired by a homeless person that Double Fine occasionally paid to help clean their office front. Schafer had wanted the 1950s suburban vibe to the level as it would fit in with the spy theme from the same period. Artist Scott Campbell fleshed out these ideas, along with the featureless G-men modeled after the Spy vs. Spy characters. Peter Chan came up with the idea of vaulting the suburban setting into vertical spaces so as to create a maze-like effect, which inspired the level designers and gameplay developers to create a level where the local gravity would change for Raz, thus allowing him to move across the warped setting that was created. The level's unique gameplay aspect, where Raz would need to give specific G-men a proper object as in point-and-click adventure games, was from gameplay developer Erik Robson as a means to take advantage of the inventory feature that they had given Raz. Schafer had wanted Wolpaw to write the lines for the G-men, but as he was too busy, Schafer ended up writing these himself.
The art design crew included background artist Peter Chan and cartoonist Scott Campbell. Voice actor Richard Steven Horvitz, best known for his portrayal of Zim in the cult favorite animated series Invader Zim, provides the voice of Raz, the game's protagonist. Initially the team tried to bring in children to provide the voices for the main cast, similar to Peanuts cartoons, but struggled with their lack of acting experience. Schafer had selected Horvitz based on his audition tapes and ability to provide a wide range of vocal intonations on the spot, providing them with numerous takes to work with. Raz was originally conceived as an ostrich suffering from mental imbalance and multiple personalities. Tim Schafer killed the idea because he strongly believes in games being "wish fulfillments," guessing that not many people fantasize about being an insane ostrich.
Double Fine created a number of internal tools and processes to help with the development of the game, as outlined by executive producer Caroline Esmurdoc. With the focus of the game on Raz as the playable character within a platform game, the team created the "Raz Action Status Meeting" (RASM). These were held bi-weekly with each meeting focusing on one specific movement or action that Raz had, reviewing how the character controlled and the visual feedback from that so that the overall combination of moves felt appropriate. With extensive use of the Lua scripting language, they created their own internal Lua debugger nicknamed Dougie, after a homeless man near their offices they had befriended, that helped to normalize their debugging processes and enable third-party tools to interact with the game. With a large number of planned cutscenes, Double Fine took the time to create a cutscene editor so that the scriptwriters could work directly with the models and environments already created by the programmers without requiring the programmers' direct participation. For level design, though they had initially relied on the idea of simply placing various triggers throughout a level to create an event, the resulting Lua code was large and bulky with potential for future error. They assigned eight of the game programmers to assist the level developers to trim this code, and instituted an internal testing department to oversee the stability of the whole game, which had grown beyond what they could do internally. Initially this was formed from unpaid volunteers they solicited on Double Fine's web site, but following the signing of the Majesco publication deal in 2004, they were able to commit full-time staff to this team.
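The trigger-scripting problem described above (a separate hand-written script per trigger producing large, bulky, error-prone code) is often reduced by routing all triggers through one small registry. The following is a hypothetical sketch of that general pattern, not Double Fine's actual code; the class and trigger names are invented, and Python stands in for the Lua they used:

```python
from typing import Callable, Dict

class TriggerRegistry:
    """Maps named trigger volumes to callbacks, replacing per-trigger scripts."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[], None]] = {}

    def on_enter(self, trigger_name: str):
        """Decorator that registers a handler for a named trigger volume."""
        def decorator(fn: Callable[[], None]) -> Callable[[], None]:
            self._handlers[trigger_name] = fn
            return fn
        return decorator

    def fire(self, trigger_name: str) -> None:
        """Called by the engine when the player enters a trigger volume."""
        handler = self._handlers.get(trigger_name)
        if handler is not None:
            handler()

triggers = TriggerRegistry()

@triggers.on_enter("campfire")
def start_campfire_scene() -> None:
    print("cutscene: campfire")

triggers.fire("campfire")
```

Centralizing dispatch like this means each new level event is one short registered function rather than another copy of the boilerplate, which is the kind of trimming the assigned programmers were doing.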
Development and publishing difficulties
Esmurdoc described the development of Psychonauts as difficult due to various setbacks, compounded by the new studio's lack of experience in how to manage those setbacks. The game's initial development began in 2001 during the Dot-com boom. Due to the cost of office space at that time, Double Fine had established an office in an inexpensive warehouse in San Francisco that initially fit their development needs. By 2003, they had come to realize the area they were in was not safe or readily accommodating, slowing down their development. With the collapse of the dot-com bubble, they were able to secure better office space, though this further delayed production. Schafer was also handling many of the duties for both the studio and the development of the game. Though some of the routine business tasks were offloaded to other studio heads, Schafer brought Esmurdoc onto the project in 2004 to help produce the game while he could focus on the creative side.
The intent to allow all developers to have artistic freedom with the game created some internal strife in the team, particularly in the level design; they had initially planned for the level designers to create the basic parts of a level (main paths, scripted events, and the level's general design) while the artists would build out the world from that. As development progressed, they determined that the artists should be the ones constructing the level geometry, which the level designers resented. Subsequently, the levels that were generated did not meet the expected standards, due to conflicts in the toolsets used and Schafer's inability to oversee the process while handling the other duties of the studio. In 2003, the decision was made to dismiss all but one of the level design team, and to unify level design and art into a World Building team overseen by Erik Robson, the remaining level designer, who would go on to become the game's lead designer; the change, which Esmurdoc stated was for the better, nonetheless disrupted the other departments at Double Fine.
Psychonauts was to be published by Microsoft for release exclusively on their Xbox console; Schafer attributes this to Microsoft's Ed Fries, who at the time of Psychonauts's initial development in 2001 was looking to develop a portfolio of games for the new console system. Schafer believes that Fries was a proponent of "pushing games as art", which helped to solidify Double Fine's concept of Psychonauts as an appropriate title for the console after the team's collected experience of developing for personal computers. However, according to Esmurdoc, Microsoft had also created some milestones that were unclear or difficult to meet, which delayed the development process. She also believes that their own lack of a clear vision of the ultimate product made it difficult to solidify a development and release schedule for the game and created confusion with the publisher. Schafer stated that Microsoft also found some of their gameplay decisions to be confusing based on play-testing and requested that they include more instructional information, a common approach for games during the early 2000s, while Schafer and his team felt such confusion was simply the nature of the adventure-based platformer they were developing. Double Fine was also resistant to making changes that Microsoft had suggested from play-testing, such as making the humor secondary to the story, removing the summer camp theme, and drastically altering the story. Fries departed Microsoft in January 2004; shortly thereafter, the company pulled the publishing deal for Psychonauts. Esmurdoc said that Microsoft's management considered Double Fine to be "expensive and late", which she agreed had been true but did not reflect the progress they had been making at this point.
Schafer also noted that at the time of the cancellation, Microsoft was planning its transition to the Xbox 360 and was not funding further development of games that would not be released until after 2004; even though Schafer had set an approximate release date in the first quarter of 2005 by this point, Microsoft still opted to cancel. Following this, Schafer and Esmurdoc worked to secure a new publishing deal while using internal funds and careful management to keep the project going. One source of funds that helped keep the company operational came from Will Wright, who had recently sold his company Maxis to Electronic Arts. Prevented from investing in Double Fine by the Maxis deal, he instead provided Double Fine a loan that kept them afloat over the next several months. Wright is credited for this support within the game.
By August 2004, Double Fine had negotiated a new publishing deal with Majesco Entertainment to release the game on Windows as well as the Xbox. Tim Schafer was quoted as saying "Together we are going to make what could conservatively be called the greatest game of all time ever, and I think that's awesome." Though the publishing deal ensured they would be able to continue the development, Esmurdoc stated they had to forgo plans for hiring new developers to meet the scope of the game as agreed to with Majesco. Subsequently, the studio entered, as described by Esmurdoc, "the most insane crunch I have ever witnessed" in order to complete the game. This was compounded when Majesco announced a PlayStation 2 port to be developed by Budcat Creations in October 2004, which further stretched the availability of Double Fine's staff resources. The game went gold in March 2005; Esmurdoc attributes much of the success of this on the solidarity of the development team that kept working towards this point.
Esmurdoc stated that Psychonauts took about 4.5 years to complete (though without the complications the real development time would have been closer to 2 years), with a team of 42 full-time developers plus additional contractors and a final budget of $11.5 million.
Music
The soundtrack to Psychonauts was composed by Peter McConnell, known for his work on LucasArts titles such as Grim Fandango and Day of the Tentacle. Schafer's familiarity with McConnell, having worked with him on numerous projects in the past, led Schafer to select him for the soundtrack composition. The Psychonauts Original Soundtrack, featuring all the in-game music, was released in 2005. The following year, in late 2006, Double Fine released a second soundtrack, Psychonauts Original Cinematic Score, containing music from the game's cutscenes as well as a remix of the main theme and credits.
Release history
The final U.S. release date for the game on Xbox and Windows was April 19, 2005, with the PlayStation 2 port following on June 21, 2005. Psychonauts was re-released via Valve's Steam digital distribution platform on October 11, 2006.
Acquisition of rights
In June 2011, the original publishing deal with Majesco expired, and full publication rights for the game reverted to Double Fine. In September 2011, Double Fine released an updated version for Microsoft Windows and a port to Mac OS X and Linux through Steam. The new version provided support for Steam features including achievements and cloud saving. The Mac OS X port was developed in partnership with Steven Dengler's Dracogen. In conjunction with this release, an iOS application, Psychonauts Vault Viewer!, was released at the same time, featuring the memory vaults from the game with commentary by Tim Schafer and Scott Campbell.
With control of the game's rights, Double Fine was able to offer Psychonauts as part of a Humble Bundle in June 2012. As a result, the game sold well, with Schafer stating that they sold more copies of Psychonauts in the first few hours of the Bundle's start than they had since the release of the retail copy of the game. Later in 2012, Schafer commented that thanks to their ability to use digital venues such as Steam, "[Double Fine] made more on Psychonauts this year than we ever have before".
Although the game was initially unplayable on the Xbox 360, Tim Schafer spearheaded a successful e-mail campaign by fans which led to Psychonauts being added to the Xbox 360 backwards compatible list on December 12, 2006,
and on December 4, 2007, Microsoft made Psychonauts one of the initial launch titles made available for direct download on the Xbox 360 through their Xbox Originals program. When Majesco's rights expired, the game was temporarily removed from the service in August 2011, as Microsoft does not allow unpublished content on its Xbox Live Marketplace. Schafer worked with Microsoft to gain their help in publishing the title under the Microsoft Studios name, and the game returned to the Marketplace in February 2012. The game was added to the PlayStation Network store as a "PS2 Classic" for the PlayStation 3 in August 2012.
As part of a deal with Nordic Games, who gained the rights to Costume Quest and Stacking after THQ's bankruptcy, Double Fine took over publishing rights for both games, while Nordic published and distributed retail copies of all three games for Windows and Mac OS X systems.
In 2016, Double Fine also released Psychonauts as a classic title for use with the PlayStation 4's emulation software.
Reception
Psychonauts received positive reception, according to review aggregator Metacritic. Schafer and Wolpaw's comedic writing was highly praised, as well as the uniqueness and quirks that the individual characters were given. Alex Navarro of GameSpot commented favorably on the "bizarre" cast of characters, their conversations that the player can overhear while exploring the camp, and how these conversations will change as the story progresses, eliminating repetition that is typical of such non-player characters in platform games. Tom Bramwell of Eurogamer found that he was incentivized to go back and explore or experiment in the game's level to find more of the comedic dialog that others had observed. The game was also noted for its innovations, such as the use of a second-person perspective during a boss battle.
The game's art and level design (in particular, the designs of the various mental worlds that Raz visits) were well-received. Jason Hill of the Sydney Morning Herald stated that each of the dream worlds "is a memorable journey through the bizarre inner psyche" of the associated character. Two particular levels have been considered iconic of the game's humor and style: the aforementioned Milkman Conspiracy, and Lungfishopolis, where Raz enters the mind of a lungfish monster that lives near camp; in the lungfish's mind Raz is portrayed as a giant monster akin to Godzilla that is attacking the tiny lungfish citizens of Lungfishopolis, effectively creating an absurd role reversal of the typical giant monster formula.
The overall game structure has been a point of criticism. Some reviewers identified that the first several hours of the game are focused on tutorials and instruction, and are less interesting than the later mental worlds. The game's final level, the "Meat Circus", was also considered unexpectedly difficult when compared to earlier sections of the game, featuring a time limit and many obstacles that required an unusual level of precision. On its re-release in 2011, Double Fine adjusted the difficulty of this level to address these complaints. Some found that the game's humor started to wane or become predictable in the latter part of the game.
GamingOnLinux reviewer Hamish Paul Wilson gave the game 8/10, praising the game's creativity and presentation, but also criticizing several other areas of the game, including the large number of unaddressed bugs. Wilson concluded that "Psychonauts has to be viewed as a flawed masterpiece". In 2010, the game was included as one of the titles in the book 1001 Video Games You Must Play Before You Die.
Awards
E3 2005 Game Critics Awards: Best Original Game
British Academy Video Games Awards 2006: Best Screenplay
The editors of Computer Games Magazine presented Psychonauts with their 2005 awards for "Best Art Direction" and "Best Writing", and named it the year's tenth-best computer game. They called the game "a wonderfully weird journey high on atmosphere, art direction, and creativity." Psychonauts won PC Gamer US's 2005 "Best Game You Didn't Play" award. The editors wrote, "Okay, look, we gave it an Editors' Choice award — that's your cue to run out right now and buy Tim Schafer's magnificent action/adventure game. So far, only about 12,000 PC gamers have." It was also a nominee for the magazine's "Game of the Year 2005" award, which ultimately went to Battlefield 2. Psychonauts also won the award for Best Writing at the 6th Annual Game Developers Choice Awards.
Sales
Despite Psychonauts earning high critical praise and a number of awards, it was a commercial failure upon its initial release. Although the game was first cited as the primary contributing factor to a strong quarter immediately following its launch, a month later Majesco revised their fiscal year projections from a net profit of $18 million to a net loss of $18 million, and at the same time its CEO, Carl Yankowski, announced his immediate resignation. By the end of the year, the title had shipped fewer than 100,000 copies in North America, and Majesco announced its plans to withdraw from the "big budget console game marketplace". Schafer stated that by March 2012 the retail version of Psychonauts had sold 400,000 copies.
Following Double Fine's acquisition of the rights, they were able to offer the game on more digital storefronts and expand to other platforms; as previously described, this allowed the company to achieve short-term sales far in excess of what they had been prior to obtaining the rights. In August 2015, Steam Spy estimated approximately 1,157,000 owners of the game on the digital distributor Steam alone. In the announcement for Psychonauts 2 in December 2015, Schafer indicated that Psychonauts had sold nearly 1.7 million copies, with more than 1.2 million occurring after Double Fine's acquisition of the rights. Double Fine lists 736,119 copies sold via the Humble Bundle (including a Steam key), 430,141 copies via the Steam storefront, 32,000 GOG.com copies, and 23,368 Humble Store copies.
Legacy
Sequels
A sequel to Psychonauts has been of great interest to Schafer, as well as to fans of the game and the gaming press. Schafer had pitched the idea to publishers, but most felt the game was too strange to take on. During the Kickstarter campaign for Double Fine's Broken Age in February 2012, Schafer commented on the development costs of a sequel over social media, leading to potential interest in backing by Markus Persson, at the time the owner of Mojang. Though Persson ultimately did not fund the project, interactions between him and Double Fine revealed several other investors interested in helping.
In mid-2015, Schafer along with other industry leaders launched Fig, a crowd-sourced platform for video games that included the option for accredited investors to invest in the offered campaigns. Later, at the 2015 Game Awards in December, Schafer announced their plans to work on Psychonauts 2, using Fig to raise the $3.3 million needed to complete the game, with an anticipated release in 2018. The campaign succeeded on January 6, 2016. The sequel was released on August 25, 2021 and sees the return of Richard Horvitz and Nikki Rapp as the voices of Raz and Lili respectively, along with Wolpaw for writing, Chan and Campbell for art, and McConnell for music.
Additionally, Double Fine developed a VR title, Psychonauts in the Rhombus of Ruin, for the Oculus Rift, HTC Vive, and PlayStation VR. Released in 2017, it serves as a standalone chapter bridging the original game and its sequel, in which Raz and the other Psychonauts rescue Truman Zanotto.
Appearance in other media
The character Raz has made appearances in other Double Fine games, including as a massive Mount Rushmore-like mountain sculpture in Brütal Legend, and on a cardboard cutout within Costume Quest 2. Raz also appeared in a downloadable content package as a playable character for Bit.Trip Presents... Runner2: Future Legend of Rhythm Alien. A cameo of Raz appears in Alice: Madness Returns which can be found at the Red Queen's castle as a propped-up skeleton that bears a striking resemblance to the protagonist itself.
Notes
References
External links
2005 video games
3D platform games
Double Fine games
Fictional psychics
Fictional representations of Romani people
Linux games
Lua (programming language)-scripted video games
MacOS games
Majesco Entertainment games
Microsoft franchises
PlayStation 2 games
Psychonauts
THQ games
Video games about psychic powers
Video games developed in the United States
Video games scored by Peter McConnell
Video games set in psychiatric hospitals
Video games set in the United States
Windows games
Xbox games
Xbox Originals games |
8902443 | https://en.wikipedia.org/wiki/Sparkle%20%28software%29 | Sparkle (software) | Sparkle is an open-source software framework for macOS designed to simplify updating software for the end user of a program. Sparkle's primary means of distributing updates is through "appcasting," a term coined for the practice of using an RSS enclosure to distribute updates and release notes.
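To illustrate the mechanism, an appcast is an ordinary RSS feed whose enclosure carries the download URL together with version metadata in a dedicated Sparkle XML namespace. The feed contents, app name, URL, and helper function below are hypothetical examples, not taken from any real project; a minimal reader might look like this:

```python
import xml.etree.ElementTree as ET

# Sparkle's XML namespace for its enclosure attributes.
SPARKLE_NS = "http://www.andymatuschak.org/xml-namespaces/sparkle"

# A hypothetical minimal appcast feed for an imaginary app.
APPCAST = """<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
     xmlns:sparkle="http://www.andymatuschak.org/xml-namespaces/sparkle">
  <channel>
    <title>Example App Changelog</title>
    <item>
      <title>Version 2.0</title>
      <description>Bug fixes and improvements.</description>
      <enclosure url="https://example.com/ExampleApp-2.0.zip"
                 sparkle:version="2.0"
                 length="1048576"
                 type="application/octet-stream"/>
    </item>
  </channel>
</rss>"""

def latest_update(appcast_xml):
    """Return (version, download URL) from the feed's first item."""
    root = ET.fromstring(appcast_xml)
    enclosure = root.find("./channel/item/enclosure")
    # Namespaced attributes are keyed as "{namespace-uri}local-name".
    version = enclosure.get(f"{{{SPARKLE_NS}}}version")
    return version, enclosure.get("url")

print(latest_update(APPCAST))
# → ('2.0', 'https://example.com/ExampleApp-2.0.zip')
```

In this sketch the client compares the advertised version against the running app's version and, if newer, downloads the enclosure URL and swaps in the update.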
At the end of 2013, development of Sparkle was ended by the original author, then later picked up by the newly formed Sparkle Project open source group on GitHub in June 2014 as the official continuation of the project.
Other OS alternatives
There are several open source Windows alternatives that offer similar functionality to Sparkle:
wyUpdate (BSD licensed) in tandem with the AutomaticUpdater (LGPL licensed)
WinSparkle (MIT licensed)
There is also a REALbasic implementation of Sparkle that works on macOS, Windows and Linux: RBSparkle.
References
External links
Sparkle homepage
Sparkle development page at GitHub
Sparkle at MacUpdate
MacOS programming tools
Patch utilities
Software using the MIT license |
24130874 | https://en.wikipedia.org/wiki/Russian%20Fedora%20Remix | Russian Fedora Remix | Russian Fedora Remix was a remix of the Fedora Linux distribution adapted for Russia that was active in 2008–2019. It was neither a copy of the original Fedora nor a new Linux distribution. The project aimed to ensure that Fedora fully satisfied the needs of Russian users, with many additional features provided out of the box (e.g., specific software packages, preinstalled drivers for popular graphics processors, manuals in Russian). In autumn 2019 the project was phased out: its leaders announced that it "had fulfilled its purpose by 100%", all of the Russian-centric improvements had been officially included in the Fedora repositories, and the Russian Fedora software maintainers had become regular Fedora maintainers.
History
The project was originally established by Arkady "Tigro" Shain under the name Tedora. The main inspiration was Fedora 9 being very inconvenient for Russian users, including a bug that prevented successful installation when the package selection was customized.
The project's official status was announced at a conference held in the National Research Nuclear University MEPhI (Moscow Engineering Physics Institute) on 20 November 2008. That day Tedora merged into the newly established Russian Fedora founded by Fedora Project, Red Hat, VDEL, and VNIINS. The latter is now the project's technological center.
Starting with version 11, the project name was changed to Russian Fedora Remix to comply with Fedora's regulations regarding use of its trademark.
The project's logo was established on 10 March 2010.
Releases
New versions were planned to be released simultaneously with Fedora ones.
Tedora 9
The following are the general differences from Fedora 9:
The first release of Fedora 9 contained a bug which prevented successful installation in the Russian language if the package selection was customized. This problem was due to an error in the Russian translation of the Fedora installer Anaconda. The error also occurred during installation in text mode after the packages had been selected. Both bugs were fixed in Tedora.
SELinux was disabled by default.
Support introduced for ReiserFS, ext4, and Journaled File System (JFS).
Tedora was distributed with a patched GRUB boot loader to allow booting a system installed on ext4.
The installation disk included GNOME Desktop Environment, K Desktop Environment, XFCE Desktop Environment and IceWM Window Manager.
The repositories Fedora Updates, Livna, Tigro and Tigro Non-Free were used.
During a network installation, the Tedora repository had to be used instead of the Fedora repository in order to receive the packages; this was due to the changed name of the distribution. For other repositories, ticking the checkbox in the package selection window was sufficient.
The font loader was added to the file /etc/rc.sysinit which solved the problem with the incorrect rendering of the starting phrase "udev".
Only European languages were included on the installation disk.
The keyboard layout "English (US)" was the default for the Russian and Ukrainian versions. This allowed the easy creation of a new user profile during the first start of the system.
The packages could be installed directly from the installation DVD.
The keys of the Livna and Tigro repositories were automatically imported to PackageKit during the installation.
There were many programs in Tedora which were not included on the original Fedora DVD. Some notable ones are: XFCE desktop environment; window managers IceWM and Fluxbox; full support of mp3, DVD, DivX and other US-problematic codecs; Flash-plugin which worked "out of the box" even under x86-64; Opera browser; VLC player; Compiz Fusion and Nvidia drivers.
In Tedora all fonts were rendered as they should. Some additional TrueType fonts were also added.
Russian Fedora 10
Russian Fedora 10 was released on 25 November 2008.
The following are the main differences from Fedora 10:
Support of all popular audio and video codecs. Many proprietary video card drivers were also supported.
XFCE, LXDE and IceWM were available from the installation medium.
SELinux was set to the Permissive mode by default.
RPM Fusion and Tigro repositories were used by default.
Different base installation modes were added: GNOME Desktop, KDE Desktop, XFCE Desktop, etc.
Package installation from the medium.
KDM was used instead of GDM when KDE was the only installed desktop environment.
Russian Fedora 10.1
Russian Fedora 10.1 was released on 24 February 2009.
Improvements:
Problems when switching the keyboard layout were fixed. Layout indicators were added to GNOME and KDE.
PackageKit allowed installing and uninstalling programs from the installation disk without internet access.
Folders were opened in the same window in Nautilus.
Accelerators of the GNOME Terminal menu were disabled.
The Tigro repository had been replaced with the Russian Fedora repository.
System installation bugs were fixed.
Russian Fedora 10.2
Russian Fedora 10.2 was released on 14 May 2009. The differences from the previous release are updated software and bug fixes.
Russian Fedora Remix 11
Russian Fedora Remix 11 was released on the same day as Fedora 11, 9 June 2009. The distribution was available on various media: installation DVD, LiveCD (KDE, GNOME, or Xfce) and LiveDVD (KDE, GNOME, Xfce, and LXDE). Two architectures were supported: P5 (i586) and x86-64.
Differences from Fedora 11:
The installation DVD contained only languages used in Europe and the Post-Soviet states.
Many keyboard layout switching improvements.
Multimedia codecs, network adapter drivers and NVidia graphic card drivers were added.
Russian Fedora Remix 12
Russian Fedora Remix 12 was released on 17 November 2009. As a result of the adoption of the new compression algorithm (XZ, the new LZMA format) the installation DVD contained more packages compared to previous versions. All languages of the original Fedora were included on this DVD.
Russian Fedora Remix 13
The release of RFRemix 13 came out on 25 May 2010.
Apart from the usual set of changes, such as added multimedia codecs and additional desktop environments, RFRemix 13 introduced the following features on top of Fedora 13 (only notable ones are listed):
Firstboot contained a special screen for changing some system preferences, for example disabling IPv6, enabling Ctrl+Alt+Backspace, choosing the login manager, and others.
The ability to set up different key combinations for switching the keyboard layout for the Russian language.
Use of Firefox 3.6.4 pre build4, which was believed to be more stable than version 3.6.4.
SELinux set by default to Enforcing mode.
Updated Russian Fedora logos.
Fedora Remix 20
This 13 December 2013 remix added applications to the Fedora 20 distribution (32-bit and 64-bit versions). Included is a moderate collection of applications for Flash, music, application development and more.
Fedora Remix 27
The latest version of the Remix at the time was 27, with a beta corresponding to the Fedora 28 beta; Remix 28 was expected around the time of the Fedora 28 release.
Like regular Fedora, it offers the Gnome and KDE Plasma desktop environments. It includes software that is useful for the desktop, programming, gaming, server use, and more.
Fedora Remix 28
This version was available 2 days after the regular Fedora 28 release.
Fedora Remix 29
This version was available 2 days following the regular Fedora 29 release. This version included everything that was provided by Fedora 29. Proprietary media codecs needed to watch videos or listen to podcasts were included.
See also
Red Hat
References
External links
Fedora Project
RPM-based Linux distributions
X86-64 Linux distributions
Linux distributions |
2107602 | https://en.wikipedia.org/wiki/Information%20wants%20to%20be%20free | Information wants to be free | "Information wants to be free" is an expression that means all people should be able to access information freely. It is often used by technology activists to criticize laws that limit transparency and general access to information. People who criticize intellectual property law say the system of such government-granted monopolies conflicts with the development of a public domain of information. The expression is often credited to Stewart Brand, who was recorded saying it at a hackers conference in 1984.
History
The phrase is attributed to Stewart Brand, who, in the late 1960s, founded the Whole Earth Catalog and argued that technology could be liberating rather than oppressing. What is considered the earliest recorded occurrence of the expression was at the first Hackers Conference in 1984, although the video recording of the conversation shows that what Brand actually said is slightly different. Brand told Steve Wozniak:
Brand's conference remarks are transcribed accurately by Joshua Gans in his research on the quote as used by Steve Levy in his own history of the phrase.
A later form appears in his The Media Lab: Inventing the Future at MIT:
According to historian Adrian Johns, the slogan expresses a view that had already been articulated in the mid-20th century by Norbert Wiener, Michael Polanyi and Arnold Plant, who advocated for the free communication of scientific knowledge, and specifically criticized the patent system.
Gratis versus libre
The various forms of the original statement are ambiguous: the slogan can be used to argue the benefits of propertied information, of liberated, free, and open information, or of both. It can be taken amorally as an expression of a fact of information-science: once information has passed to a new location outside of the source's control there is no way of ensuring it is not propagated further, and therefore will naturally tend towards a state where that information is widely distributed. Much of its force is due to the anthropomorphic metaphor that imputes desire to information. In 1990 Richard Stallman restated the concept normatively, without the anthropomorphization:
Stallman's reformulation incorporates a political stance into Brand's value-neutral observation of social trends.
Cypherpunk
Brand's attribution of will to an abstract human construct (information) has been adopted within a branch of the cypherpunk movement, whose members espouse a particular political viewpoint (anarchism). The construction of the statement takes its meaning beyond the simple judgmental observation, "Information should be free", by acknowledging that the internal force or entelechy of information and knowledge makes it essentially incompatible with notions of proprietary software, copyrights, patents, subscription services, etc. They believe that information is dynamic, ever-growing and evolving and cannot be contained within (any) ideological structure.
According to this philosophy, hackers, crackers, and phreakers are liberators of information which is being held hostage by agents demanding money for its release. Other participants in this network include cypherpunks who educate people to use public-key cryptography to protect the privacy of their messages from corporate or governmental snooping and programmers who write free software and open source code. Still others create Free-Nets allowing users to gain access to computer resources for which they would otherwise need an account. They might also break copyright law by swapping music, movies, or other copyrighted materials over the Internet.
Chelsea Manning is alleged to have said "Information should be free" to Adrian Lamo when explaining a rationale for US government documents to be released to WikiLeaks. The narrative goes on with Manning wondering if she is a "'hacker', 'cracker', 'hacktivist', 'leaker' or what".
Literary usage
In the "Fall Revolution" series of science-fiction books, author Ken Macleod riffs and puns on the expression by writing about entities composed of information actually "wanting", as in desiring, freedom and the machinations of several human characters with differing political and ideological agendas, to facilitate or disrupt these entities' quest for freedom.
In the cyberpunk world of post-singularity transhuman culture described by Charles Stross in his books like Accelerando and Singularity Sky, the wish of information to be free is a law of nature.
See also
Crypto-anarchism
Culture vs. Copyright
Cypherpunk
Free content
Free culture movement
Freedom of information
Free Haven Project
Freenet
Free software
Hacktivism
Hacktivismo
Horror vacui (physics)
Information activist
Information Doesn't Want to Be Free
Internet censorship
Internet privacy
Openness
Streisand effect
Tor (anonymity network)
Transparency
References
External links
Does the cyberpunk movement represent a political resistance?
Roger Clarke
Adages
English phrases
Free content
Open content
Quotations from science
1984 neologisms |
527617 | https://en.wikipedia.org/wiki/Chess%20engine | Chess engine | In computer chess, a chess engine is a computer program that analyzes chess or chess variant positions, and generates a move or list of moves that it regards as strongest.
A chess engine is usually a back end with a command-line interface with no graphics or windowing. Engines are usually used with a front end, a windowed graphical user interface such as Chessbase or WinBoard that the user can interact with via a keyboard, mouse or touchscreen. This allows the user to play against multiple engines without learning a new user interface for each, and allows different engines to play against each other.
Many chess engines are now available for mobile phones and tablets, making them even more accessible.
History
The meaning of the term "chess engine" has evolved over time. In 1986, Linda and Tony Scherzer entered their program Bebe into the 4th World Computer Chess Championship, running it on "Chess Engine," their brand name for the chess computer hardware made and marketed by their company Sys-10, Inc. By 1990 the developers of Deep Blue, Feng-hsiung Hsu and Murray Campbell, were writing of giving their program a 'searching engine,' apparently referring to the software rather than the hardware. In December 1991, Computer-schach & Spiele referred to Chessbase's recently released Fritz as a 'Schach-motor,' the German translation of 'chess engine.' By early 1993, Marty Hirsch was drawing a distinction between commercial chess programs such as Chessmaster 3000 or Battle Chess on the one hand, and 'chess engines' such as ChessGenius or his own MChess Pro on the other. In his characterization, commercial chess programs were low in price and had fancy graphics, but did not place high on the SSDF (Swedish Chess Computer Association) rating lists, while engines were more expensive but did have high ratings.
In 1994, Shay Bushinsky was working on an early version of his Junior program. He wanted to focus on the chess playing part rather than the graphics, and so asked Tim Mann how he could get Junior to communicate with Winboard. Tim's answer formed the basis for what became known as the Chess Engine Communication Protocol or Winboard engines, originally a subset of the GNU Chess command line interface.
Also in 1994, Stephen J. Edwards released the Portable Game Notation (PGN) specification. It mentions PGN reading programs not needing to have a "full chess engine." It also mentions three "graphical user interfaces" (GUI): XBoard, pgnRead and Slappy the database.
By the mid-2000s, engines had become so strong that they were able to beat even the best human players. In 2005, Michael Adams, a world top 10 player at the time, was comprehensively beaten 5½ - ½ by Hydra, drawing only one of the six games. Matches between humans and engines are now rare; engines are increasingly regarded as tools for analysis rather than as opponents.
Interface protocol
Common Winboard engines include Crafty, ProDeo (based on Rebel), Chenard, Zarkov and Phalanx.
In 1995, Chessbase released a version of their database program including Fritz 4 as a separate engine. This was the first appearance of the Chessbase protocol. Soon after, they added the engines Junior and Shredder to their product lineup, including engines in CB protocol as separate programs which could be installed in the Chessbase program or one of the other Fritz-style GUIs. Fritz 1-14 were only issued as Chessbase engines, while Hiarcs, Nimzo, Chess Tiger and Crafty have been ported to Chessbase format even though they were UCI or Winboard engines. Recently, Chessbase has begun to include Universal Chess Interface (UCI) engines in their playing programs such as Komodo, Houdini, Fritz 15–16 and Rybka rather than convert them to Chessbase engines.
In 2000, Stefan Meyer-Kahlen and Franz Huber released the Universal Chess Interface, a more detailed protocol that introduced a wider set of features. Chessbase soon after dropped support for Winboard engines, and added support for UCI to their engine GUIs and Chessbase programs. Most of the top engines today are UCI: Stockfish, Komodo, Leela Chess Zero, Houdini, Fritz 15-16, Rybka, Shredder, Fruit, Critter, Ivanhoe and Ruffian.
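The protocol itself is a line-oriented text dialogue over the engine's standard input and output. The sketch below builds and parses a few of the UCI command strings a GUI exchanges with an engine; the helper function names are illustrative, not part of the specification.

```python
# Minimal sketch of the GUI side of a UCI dialogue (command strings only).
# A real GUI would write these lines to the engine's stdin and read the
# "bestmove ..." reply from its stdout, after the initial "uci"/"uciok"
# and "isready"/"readyok" handshake.

def position_command(moves):
    """Build the 'position' command from a list of moves in coordinate notation."""
    if not moves:
        return "position startpos"
    return "position startpos moves " + " ".join(moves)

def go_command(movetime_ms):
    """Ask the engine to search for a fixed time in milliseconds."""
    return "go movetime %d" % movetime_ms

def parse_bestmove(line):
    """Extract the move from a reply such as 'bestmove e2e4 ponder e7e5'."""
    parts = line.split()
    return parts[1] if len(parts) > 1 and parts[0] == "bestmove" else None

print(position_command(["e2e4", "e7e5"]))  # position startpos moves e2e4 e7e5
print(go_command(1000))                    # go movetime 1000
print(parse_bestmove("bestmove g1f3 ponder b8c6"))  # g1f3
```

The same string-based dialogue also carries engine configuration, e.g. a GUI sends "setoption name UCI_LimitStrength value true" before the game starts.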
From 1998, the German company Millennium 2000 briefly moved from dedicated chess computers into the software market, developing the Millennium Chess System (MCS) protocol for a series of CDs containing ChessGenius or Shredder, but after 2001 ceased releasing new software. A more longstanding engine protocol has been used by the Dutch company Lokasoft, which eventually took over the marketing of Ed Schröder's Rebel.
Increasing strength
Chess engines increase in playing strength continually. This is partly due to the increase in processing power that enables calculations to be made to ever greater depths in a given time. In addition, programming techniques have improved, enabling the engines to be more selective in the lines that they analyze and to acquire a better positional understanding. A chess engine often uses a vast previously computed opening "book" to increase its playing strength for the first several moves, up to possibly 20 moves or more in deeply analyzed lines.
Some chess engines maintain a database of chess positions, along with previously computed evaluations and best moves, in effect, a kind of "dictionary" of recurring chess positions. Since these positions are pre-computed, the engine merely plays one of the indicated moves in the database, thereby saving computing time, resulting in stronger, faster play.
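Such a dictionary of positions can be sketched as follows. The keys and book lines here are illustrative; real engines typically key the table on a position hash rather than a string, and weight the candidate moves.

```python
import random

# Illustrative opening book: each known position maps to a list of
# pre-approved candidate moves. The entries below are standard opening
# theory but the keys are made up for the sketch.
opening_book = {
    "startpos": ["e2e4", "d2d4", "g1f3", "c2c4"],   # common first moves
    "after e2e4": ["c7c5", "e7e5", "e7e6"],          # replies to 1.e4
}

def book_move(position_key):
    """Return a book move for a known position, or None to fall back to search."""
    candidates = opening_book.get(position_key)
    return random.choice(candidates) if candidates else None
```

When book_move returns None the engine has left its book and must start searching; until then it replies instantly with a pre-computed move.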
Some chess engines use endgame tablebases to increase their playing strength during the endgame. An endgame tablebase includes all possible endgame positions with a small amount of material. Each position is conclusively determined as a win, loss, or draw for the player whose turn it is to move, and the number of moves to the end with best play by both sides. The tablebase identifies for every position the move which will win the fastest against an optimal defense, or the move that will lose the slowest against an optimal offense. Such tablebases are available for all chess endgames with seven pieces or fewer (trivial endgame positions are excluded, such as six white pieces versus a lone black king).
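Tablebase probing can be sketched as follows. The position keys and values are invented for illustration, standing in for a real tablebase file format such as Syzygy or Nalimov; results are stored from the perspective of the side to move in each position.

```python
# Stand-in tablebase: position -> (result for side to move, distance to mate).
tablebase = {
    "p1": ("loss", 3),   # side to move is mated in 3
    "p2": ("loss", 5),
    "p3": ("draw", 0),
    "p4": ("win", 2),    # side to move mates in 2
}

def best_tablebase_move(successors):
    """Pick from (move, resulting_position) pairs: the fastest win if any,
    otherwise a draw, otherwise the slowest loss."""
    wins, draws, losses = [], [], []
    for move, pos in successors:
        result, dtm = tablebase[pos]
        # In the resulting position it is the opponent's turn, so their
        # loss is our win and their win is our loss.
        if result == "loss":
            wins.append((dtm, move))
        elif result == "draw":
            draws.append((dtm, move))
        else:
            losses.append((dtm, move))
    if wins:
        return min(wins)[1]    # mate the opponent as fast as possible
    if draws:
        return draws[0][1]
    return max(losses)[1]      # drag out the loss as long as possible
```

Because every successor is looked up rather than searched, the engine's play is provably optimal from the moment the position enters the tablebase.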
When the maneuvering in an ending to achieve an irreversible improvement takes more moves than the horizon of calculation of a chess engine, an engine is not guaranteed to find the best move without the use of an endgame tablebase, and in many cases can fall foul of the fifty-move rule as a result. Many engines use permanent brain (continuing to calculate during the opponent's turn) as a method to increase their strength.
Distributed computing is also used to improve the software code of chess engines. In 2013, the developers of the Stockfish chess playing program started using distributed computing to make improvements in the software code. Since then, a total of more than 745 years of CPU time has been used to play more than 485 million chess games, with the results being used to make small and incremental improvements to the chess-playing software. In 2019, Ethereal author Andrew Grant started the distributed computing testing framework OpenBench, based upon Stockfish's testing framework, and it is now the most widely-used testing framework for chess engines.
Limiting an engine's strength
By the late 1990s, the top engines had become so strong that few players stood a chance of winning a game against them. To give players more of a chance, engines began to include settings to adjust or limit their strength. In 2000, when Stefan Meyer-Kahlen and Franz Huber released the Universal Chess Interface protocol they included the parameters uci_limitstrength and uci_elo allowing engine authors to offer a variety of levels rated in accordance with Elo rating, as calibrated by one of the rating lists. Most GUIs for UCI engines allow users to set this Elo rating within the menus. Even engines that have not adopted this parameter will sometimes have an adjustable strength parameter (e.g. Stockfish 11). Engines which have a uci_elo parameter include Houdini, Fritz 15–16, Rybka, Shredder, Hiarcs, Junior, Zappa and Sjeng. GUIs such as Shredder, Chess Assistant, Convekta Aquarium, Hiarcs Chess Explorer or Martin Blume's Arena have dropdown menus for setting the engine's uci_elo parameter. The Fritz family GUIs, Chess Assistant and Aquarium also have independent means of limiting an engine's strength, apparently based on an engine's ability to generate ranked lists of moves (called multipv, for 'principal variation').
Comparisons
Tournaments
The results of computer tournaments give one view of the relative strengths of chess engines. However, tournaments do not play a statistically significant number of games for accurate strength determination. In fact, the number of games that need to be played between fairly evenly matched engines, in order to achieve significance, runs into the thousands and is, therefore, impractical within the framework of a tournament. Most tournaments also allow any types of hardware, so only engine/hardware combinations are being compared.
Historically, commercial programs have been the strongest engines. If an amateur engine wins a tournament or otherwise performs well (for example, Zappa in 2005), then it is quickly commercialized. Titles gained in these tournaments garner much prestige for the winning programs, and are thus used for marketing purposes. However, after the rise of volunteer distributed computing projects such as Leela Chess Zero and Stockfish and testing frameworks such as FishTest and OpenBench in the late 2010s, free and open source programs have largely displaced commercial programs as the strongest engines in tournaments.
Current tournaments include:
Top Chess Engine Championship (TCEC)
World Computer Chess Championship (WCCC)
World Computer Speed Chess Championship
Computer Chess Championship (CCC) by Chess.com
Historic tournaments include:
Dutch Open Computer Chess Championship
Internet Computer Chess Tournament (CCT)
International Paderborn Computer Chess Championship
North American Computer Chess Championship
Ratings
Chess engine rating lists aim to provide statistically significant measures of relative engine strength. These lists play multiple games between engines on standard hardware platforms, so that processor differences are factored out. Some also standardize the opening books, in an attempt to measure the strength differences of the engines only. These lists provide not only a ranking, but also margins of error on the given ratings.
There are a number of factors that vary among the chess engine rating lists:
Formulae used to calculate the Elo rating of each engine.
Time control. Longer time controls, such as 40 moves in 120 minutes, are better suited for determining tournament play strength, but also make testing more time-consuming.
Opponents used in testing engines. Some rating lists only test an engine against the most recent version of each opponent engine, while other rating lists test an engine against the version(s) of each opponent engine closest in Elo to the engine being tested.
Hardware used:
Faster hardware with more memory leads to stronger play.
64-bit (vs. 32-bit) hardware and operating systems favor bitboard-based programs.
Hardware using modern instruction sets such as AVX2 or AVX-512 favors engines using vectors and vector intrinsics in their code, common in neural networks.
Graphics processing units favor programs with deep neural networks.
Multiprocessor vs. single processor hardware.
Ponder settings (speculative analysis while the opponent is thinking) aka Permanent Brain.
Transposition table sizes.
Opening book settings.
These differences affect the results, and make direct comparisons between rating lists difficult.
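Whatever the list-specific conventions, the underlying Elo machinery is the same. A minimal sketch of the standard expected-score and update formulas (the K-factor and calibration are per-list choices):

```python
# Standard Elo formulas. Rating lists apply these (with their own K-factor
# and calibration conventions) to thousands of engine-vs-engine games.

def expected_score(rating_a, rating_b):
    """Expected score of A against B: 0.5 at equal ratings,
    about 0.91 at a 400-point advantage."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating, expected, actual, k=10):
    """New rating after one game; actual is 1 (win), 0.5 (draw) or 0 (loss)."""
    return rating + k * (actual - expected)

# A 1600-rated engine beats an equal opponent and gains half the K-factor:
new_rating = update(1600, expected_score(1600, 1600), 1)
print(new_rating)  # 1605.0
```

Because the formula only constrains rating differences, each list must anchor its scale somewhere, which is why absolute numbers are not comparable across lists, as noted below.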
Note that while all the listings in the above table count the best entry for a given engine family to provide maximum diversity, the numbers given in the Engine/platform entries column count the total number of engines, with multiple engines per engine family.
These ratings, although calculated by using the Elo system (or similar rating methods), have no direct relation to FIDE Elo ratings or to other chess federation ratings of human players. Except for some man versus machine games which the SSDF had organized many years ago (when engines were far from today's strength), there is no calibration between any of these rating lists and player pools. Hence, the results which matter are the ranks and the differences between the ratings, and not the absolute values. Also, each list calibrates their Elo via a different method. Therefore, no Elo comparisons can be made between the lists.
Missing from many rating lists are IPPOLIT and its derivatives. Although very strong and open source, there are allegations from commercial software interests that they were derived from a disassembled binary of Rybka. Due to the controversy, all these engines have been blacklisted from many tournaments and rating lists. Rybka in turn was accused of being based on Fruit, and in June 2011, the ICGA formally claimed Rybka was derived from Fruit and Crafty and banned Rybka from the International Computer Games Association World Computer Chess Championship, and revoked its previous victories (2007, 2008, 2009, and 2010). The ICGA received some criticism for this decision. Despite all this, Rybka is still included on many rating lists, such as CCRL and CEGT, in addition to Houdini, a derivative of the IPPOLIT derivative Robbolito, and Fire, a derivative of Houdini. In addition, Fat Fritz 2, a derivative of Stockfish, is also included on most of the rating lists.
Test suites
Engines can be tested by measuring their performance on specific positions. Typical is the use of test suites where for each given position there is one best move to find. These positions can be geared towards positional, tactical or endgame play. The Nolot test suite, for instance, focuses on deep sacrifices. The BT2450 and BT2630 test suites measure the tactical capability of a chess engine and have been used by REBEL. There is also a general test suite called Brilliancy which was compiled mostly from How to Reassess Your Chess Workbook. The Strategic Test Suite (STS) tests an engine's strategic strength. Another modern test suite is Nightmare II which contains 30 chess puzzles.
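Scoring an engine against such a suite can be sketched as follows. The position keys and the stand-in engine below are illustrative; real suites are usually distributed in EPD form, where a "bm" field records the best move, and this sketch does not parse EPD.

```python
# Simplified test-suite scoring: each entry pairs a position (an opaque key
# here) with the expected best move. find_best_move stands in for a call
# into a real engine at some fixed search depth or time.

suite = [
    ("pos1", "Nf3"),
    ("pos2", "Qxh7+"),
    ("pos3", "e4"),
]

def score(suite, find_best_move):
    """Return (positions solved, positions attempted)."""
    solved = sum(1 for pos, bm in suite if find_best_move(pos) == bm)
    return solved, len(suite)

# A dummy "engine" that knows two of the three answers:
dummy_engine = {"pos1": "Nf3", "pos2": "Qxh7+", "pos3": "d4"}.get
print(score(suite, dummy_engine))  # (2, 3)
```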
Kasparov versus the World (chess game played with computer assistance)
In 1999, Garry Kasparov played a chess game "Kasparov versus the World" over the Internet, hosted by the MSN Gaming Zone. Both sides used computer (chess engine) assistance. The "World Team" included the participation of over 50,000 people from more than 75 countries, deciding their moves by plurality vote. The game lasted four months, ending after Kasparov's 62nd move when he announced a forced checkmate in 28 moves found with the computer program Deep Junior. The World Team voters resigned on October 22. After the game, Kasparov said: "It is the greatest game in the history of chess. The sheer number of ideas, the complexity, and the contribution it has made to chess make it the most important game ever played."
Engines for chess variants
Some chess engines have been developed to play chess variants, adding the necessary code to simulate non-standard chess pieces, or to analyze play on non-standard boards. ChessV and Fairy-Max, for example, are both capable of playing variants on a chessboard up to 12×8 in size, such as Capablanca Chess (10×8 board).
For larger boards, however, there are few chess engines that can play effectively, and indeed chess games played on an unbounded chessboard (infinite chess) are virtually untouched by chess-playing software, although theoretically a program using a MuZero-derived algorithm could handle an unbounded state space.
Graphical user interfaces
Xboard/Winboard was one of the earliest graphical user interfaces (GUI). Tim Mann created it to provide a GUI for the GNU Chess engine, but after that, other engines such as Crafty appeared which used the Winboard protocol. Eventually, the program Chessmaster included the option to import other Winboard engines in addition to the King engine which was included.
In 1995, Chessbase began offering the Fritz engine as a separate program within the Chessbase database program and within the Fritz GUI. Soon after, they added the Junior and Shredder engines to their product lineup, packaging them within the same GUI as was used for Fritz. In the late 1990s, the Fritz GUI was able to run Winboard engines via an adapter, but after 2000, Chessbase simply added support for UCI engines, and no longer invested much effort in Winboard.
In 2000, Stefan Meyer-Kahlen started selling Shredder in a separate UCI GUI of his own design, allowing UCI or Winboard engines to be imported into it.
Convekta's Chess Assistant and Lokasoft's ChessPartner also added the ability to import Winboard and UCI engines into their products. Shane Hudson developed Shane's Chess Information Database, a free GUI for Linux, Mac and Windows. Martin Blume developed Arena, another free GUI for Linux and Windows. Lucas Monge entered the field with the free Lucas Chess GUI. All three can handle both UCI and Winboard engines.
On Android, Aart Bik came out with Chess for Android, another free GUI, and Gerhard Kalab's Chess PGN Master and Peter Osterlund's Droidfish can also serve as GUIs for engines.
The Computer Chess Wiki lists many chess GUIs.
See also
Chess variants
Computer chess
Correspondence chess
Internet chess server
List of chess software
Notes
References
External links
Chess Engine's Polyglot Opening Book for WinBoard GUI - a general-purpose (learning) Polyglot opening book for the WinBoard GUI.
Chess Programming Wiki
Computer chess |
623831 | https://en.wikipedia.org/wiki/Eval | Eval | In some programming languages, eval , short for the English evaluate, is a function which evaluates a string as though it were an expression in the language, and returns a result; in others, it executes multiple lines of code as though they had been included instead of the line including the eval. The input to eval is not necessarily a string; it may be structured representation of code, such as an abstract syntax tree (like Lisp forms), or of special type such as code (as in Python). The analog for a statement is exec, which executes a string (or code in other format) as if it were a statement; in some languages, such as Python, both are present, while in other languages only one of either eval or exec is.
Eval and apply are instances of meta-circular evaluators, interpreters of a language that can be invoked within the language itself.
Security risks
Using eval with data from an untrusted source may introduce security vulnerabilities. For instance, assuming that the get_data() function gets data from the Internet, this Python code is insecure:
session['authenticated'] = False
data = get_data()
foo = eval(data)
An attacker could supply the program with the string "session.update(authenticated=True)" as data, which would update the session dictionary to set an authenticated key to be True. To remedy this, all data which will be used with eval must be escaped, or it must be run without access to potentially harmful functions.
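In Python specifically, when the untrusted input is expected to be a data literal rather than code, the standard library's ast.literal_eval is a common safer alternative: it evaluates only literal expressions (strings, numbers, tuples, lists, dicts, sets, booleans, None) and rejects anything else.

```python
import ast

# ast.literal_eval parses the string but refuses to evaluate anything that
# is not a plain literal, so attribute access and function calls such as
# the session.update(...) attack above raise ValueError instead of running.

safe = ast.literal_eval("{'authenticated': False, 'retries': 3}")
print(safe["retries"])  # 3

try:
    ast.literal_eval("session.update(authenticated=True)")
except ValueError:
    print("rejected")
```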
Implementation
In interpreted languages, eval is almost always implemented with the same interpreter as normal code. In compiled languages, the same compiler used to compile programs may be embedded in programs using the eval function; separate interpreters are sometimes used, though this results in code duplication.
Programming languages
ECMAScript
JavaScript
In JavaScript, eval is something of a hybrid between an expression evaluator and a statement executor. It returns the result of the last expression evaluated.
Example as an expression evaluator:
foo = 2;
alert(eval('foo + 2'));
Example as a statement executor:
foo = 2;
eval('foo = foo + 2;alert(foo);');
One use of JavaScript's eval is to parse JSON text, perhaps as part of an Ajax framework. However, modern browsers provide JSON.parse as a more secure alternative for this task.
ActionScript
In ActionScript (Flash's programming language), eval cannot be used to evaluate arbitrary expressions. According to the Flash 8 documentation, its usage is limited to expressions which represent "the name of a variable, property, object, or movie clip to retrieve. This parameter can be either a String or a direct reference to the object instance."
ActionScript 3 does not support eval.
The ActionScript 3 Eval Library and the D.eval API are ongoing development projects to create equivalents to eval in ActionScript 3.
Lisp
Lisp was the original language to make use of an eval function, in 1958. In fact, the definition of the eval function led to the first implementation of the language interpreter.
Before the eval function was defined, Lisp functions were manually compiled to assembly language statements. However, once the eval function had been manually compiled it was then used as part of a simple read-eval-print loop which formed the basis of the first Lisp interpreter.
Later versions of the Lisp eval function have also been implemented as compilers.
The eval function in Lisp expects a form to be evaluated and executed as argument. The return value of the given form will be the return value of the call to eval.
This is an example Lisp code:
; A form which calls the + function with 1,2 and 3 as arguments.
; It returns 6.
(+ 1 2 3)
; In lisp any form is meant to be evaluated, therefore
; the call to + was performed.
; We can prevent Lisp from performing evaluation
; of a form by prefixing it with "'", for example:
(setq form1 '(+ 1 2 3))
; Now form1 contains a form that can be used by eval, for
; example:
(eval form1)
; eval evaluated (+ 1 2 3) and returned 6.
Lisp is well known to be very flexible and so is the eval function. For example, to evaluate the content of a string, the string would first have to be converted into a Lisp form using the read-from-string function and then the resulting form would have to be passed to eval:
(eval (read-from-string "(format t \"Hello World!!!~%\")"))
One major point of confusion is the question, in which context the symbols in the form will be evaluated. In the above example, form1 contains the symbol +. Evaluation of this symbol must yield the function for addition to make the example work as intended. Thus some dialects of lisp allow an additional parameter for eval to specify the context of evaluation (similar to the optional arguments to Python's eval function - see below). An example in the Scheme dialect of Lisp (R5RS and later):
;; Define some simple form as in the above example.
(define form2 '(+ 5 2))
;Value: form2
;; Evaluate the form within the initial context.
;; A context for evaluation is called an "environment" in Scheme slang.
(eval form2 user-initial-environment)
;Value: 7
;; Confuse the initial environment, so that + will be
;; a name for the subtraction function.
(environment-define user-initial-environment '+ -)
;Value: +
;; Evaluate the form again.
;; Notice that the returned value has changed.
(eval form2 user-initial-environment)
;Value: 3
Perl
In Perl, the eval function is something of a hybrid between an expression evaluator and a statement executor. It returns the result of the last expression evaluated (all statements are expressions in Perl), and allows the final semicolon to be left off.
Example as an expression evaluator:
$foo = 2;
print eval('$foo + 2'), "\n";
Example as a statement executor:
$foo = 2;
eval('$foo += 2; print "$foo\n";');
Perl also has eval blocks, which serve as its exception handling mechanism (see Exception handling syntax#Perl). This differs from the above use of eval with strings in that code inside eval blocks is interpreted at compile-time instead of run-time, so it is not the meaning of eval used in this article.
PHP
In PHP, eval executes code in a string almost exactly as if it had been put in the file instead of the call to eval(). The only exception is that errors are reported as coming from a call to eval(), and return statements become the result of the function.
Unlike some languages, the argument to eval must be a string of one or more complete statements, not just expressions; however, one can get the "expression" form of eval by putting the expression in a return statement, which causes eval to return the result of that expression.
Unlike some languages, PHP's eval is a "language construct" rather than a function, and so cannot be used in some contexts where functions can be, like higher-order functions.
Example using echo:
<?php
$foo = "Hello, world!\n";
eval('echo "$foo";');
?>
Example returning a value:
<?php
$foo = "Goodbye, world!\n"; //does not work in PHP5
echo eval('return $foo;');
?>
Lua
In Lua 5.1, loadstring compiles Lua code into an anonymous function.
Example as an expression evaluator:
loadstring("print('Hello World!')")()
Example to do the evaluation in two steps:
a = 1
f = loadstring("return a + 1") -- compile the expression to an anonymous function
print(f()) -- execute (and print the result '2')
Lua 5.2 deprecates loadstring in favor of the existing load function, which has been augmented to accept strings. In addition, it allows providing the function's environment directly, as environments are now upvalues.
print(load("print('Hello ' .. a)", "", "t", { a = "World!", print = print })())
PostScript
PostScript's exec operator takes an operand — if it is a simple literal it pushes it back on the stack. If one takes a string containing a PostScript expression however, one can convert the string to an executable which then can be executed by the interpreter, for example:
((Hello World) =) cvx exec
converts the PostScript expression
(Hello World) =
which pops the string "Hello World" off the stack and displays it on the screen, into an executable object, which is then executed.
PostScript's run operator is similar in functionality, but instead the interpreter interprets PostScript expressions stored in a file.
(file.ps) run
Python
In Python, the eval function in its simplest form evaluates a single expression.
eval example (interactive shell):
>>> x = 1
>>> eval('x + 1')
2
>>> eval('x')
1
The eval function takes two optional arguments, globals and locals, which allow the programmer to set up a restricted environment for the evaluation of the expression.
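A sketch of such a restricted environment follows. Replacing "__builtins__" with an empty dict removes the built-in functions from the expression's reach; note this limits which names an expression can see but is well known not to be a complete sandbox, since escapes via object introspection remain possible.

```python
# Restricting eval's environment by passing an explicit globals dict.
# With "__builtins__" set to {}, names such as open or __import__ are
# not resolvable inside the evaluated expression.

allowed = {"__builtins__": {}, "x": 1}
print(eval("x + 1", allowed))  # 2

try:
    eval("open('secrets.txt')", allowed)
except NameError:
    print("open is not available in this environment")
```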
The exec statement (or the exec function in Python 3.x) executes statements:
exec example (interactive shell):
>>> x = 1
>>> y = 1
>>> exec "x += 1; y -= 1"
>>> x
2
>>> y
0
The most general form for evaluating statements/expressions is using code objects. Those can be created by invoking the compile() function and by telling it what kind of input it has to compile: an "exec" statement, an "eval" statement or a "single" statement:
compile example (interactive shell):
>>> x = 1
>>> y = 2
>>> eval (compile ("print 'x + y = ', x + y", "compile-sample.py", "single"))
x + y = 3
D
D is a statically compiled language and therefore does not include an "eval" statement in the traditional sense, but does include the related "mixin" statement. The difference is that, where "eval" interprets a string as code at runtime, with a "mixin" the string is statically compiled like ordinary code and must be known at compile time. For example:
import std.stdio;
void main() {
int num = 0;
mixin("num++;");
writeln(num); // Prints 1.
}
The above example will compile to exactly the same assembly language instructions as if "num++;" had been written directly instead of mixed in. The argument to mixin doesn't need to be a string literal; it can be any expression, including function calls, that results in a string value and can be evaluated at compile time.
ColdFusion
ColdFusion's evaluate function lets you evaluate a string expression at runtime.
<cfset x = "int(1+1)">
<cfset y = Evaluate(x)>
It is particularly useful when you need to programmatically choose the variable you want to read from.
<cfset x = Evaluate("queryname.#columnname#[rownumber]")>
Ruby
The Ruby programming language interpreter offers an eval function similar to Python or Perl, and also allows a scope, or binding, to be specified.
Aside from specifying a function's binding, eval may also be used to evaluate an expression within a specific class definition binding or object instance binding, allowing classes to be extended with new methods specified in strings.
a = 1
eval('a + 1') # (evaluates to 2)
# evaluating within a context
def get_binding(a)
binding
end
eval('a+1',get_binding(3)) # (evaluates to 4, because 'a' in the context of get_binding is 3)
class Test; end
Test.class_eval("def hello; return 'hello';end") # add a method 'hello' to this class
Test.new.hello # evaluates to "hello"
Forth
Most standard implementations of Forth have two variants of eval: EVALUATE and INTERPRET.
Win32FORTH code example:
S" 2 2 + ." EVALUATE \ Outputs "4"
BASIC
REALbasic
In REALbasic, there is a class called RBScript which can execute REALbasic code at runtime. RBScript is heavily sandboxed: only the core language features are available, and you must explicitly grant it access to anything else. You can optionally assign an object to the context property, which allows the code in RBScript to call functions and use properties of the context object. However, it is still limited to understanding only the most basic types, so if you have a function that returns a Dictionary or MySpiffyObject, RBScript will be unable to use it. You can also communicate with your RBScript through the Print and Input events.
VBScript
Microsoft's VBScript, which is an interpreted language, has two constructs. Eval is a function evaluator that can include calls to user-defined functions. (These functions may have side-effects such as changing the values of global variables.) Execute executes one or more colon-separated statements, which can change global state.
Both VBScript and JScript eval are available to developers of compiled Windows applications (written in languages which do not support Eval) through an ActiveX control called the Microsoft Script Control, whose Eval method can be called by application code. To support calling of user-defined functions, one must first initialize the control with the AddCode method, which loads a string (or a string resource) containing a library of user-defined functions defined in the language of one's choice, prior to calling Eval.
Visual Basic for Applications
Visual Basic for Applications (VBA), the programming language of Microsoft Office, is a virtual machine language where the runtime environment compiles and runs p-code. Its flavor of Eval supports only expression evaluation, where the expression may include user-defined functions and objects (but not user-defined variable names). Of note, the evaluator is different from VBS, and invocation of certain user-defined functions may work differently in VBA than the identical code in VBScript.
Smalltalk
As Smalltalk's compiler classes are part of the standard class library and usually present at run time, these can be used to evaluate a code string.
Compiler evaluate:'1 + 2'
Because class and method definitions are also implemented by message-sends (to class objects), even code changes are possible:
Compiler evaluate:'Object subclass:#Foo'
Tcl
The Tcl programming language has a command called eval, which executes the source code provided as an argument. Tcl represents all source code as strings, with curly braces acting as quotation marks, so that the argument to eval can have the same formatting as any other source code.
set foo {
while {[incr i]<10} {
puts "$i squared is [expr $i*$i]"
}
}
eval $foo
bs
bs has an eval function that takes one string argument. The function is both an expression evaluator and a statement executor. In the latter role, it can also be used for error handling. The following examples and text are from the bs man page as appears in the UNIX System V Release 3.2 Programmer's Manual.
Command-line interpreters
Unix shells
The eval command is present in all Unix shells, including the original "sh" (Bourne shell). It concatenates all the arguments with spaces, then re-parses and executes the result as a command.
Windows PowerShell
In Windows PowerShell, the Invoke-Expression Cmdlet serves the same purpose as the eval function in programming languages like JavaScript, PHP and Python.
The Cmdlet runs any Windows PowerShell expression that is provided as a command parameter in the form of a string and outputs the result of the specified expression.
Usually, the output of the Cmdlet is of the same type as the result of executing the expression. However, if the result is an empty array, it outputs $null. In case the result is a single-element array, it outputs that single element. Similar to JavaScript, Windows PowerShell allows the final semicolon to be left off.
Example as an expression evaluator:
PS > $foo = 2
PS > invoke-expression '$foo + 2'
Example as a statement executor:
PS > $foo = 2
PS > invoke-expression '$foo += 2; $foo'
Microcode
In 1966 IBM Conversational Programming System (CPS) introduced a microprogrammed function EVAL to perform "interpretive evaluation of expressions which are written in a modified Polish-string notation" on an IBM System/360 Model 50. Microcoding this function was "substantially more" than five times faster compared to a program that interpreted an assignment statement.
Theory
In theoretical computer science, a careful distinction is commonly made between eval and apply. Eval is understood to be the step of converting a quoted string into a callable function and its arguments, whereas apply is the actual call of the function with a given set of arguments. The distinction is particularly noticeable in functional languages, and languages based on lambda calculus, such as LISP and Scheme. Thus, for example, in Scheme, the distinction is between
(eval '(f x) )
where the form (f x) is to be evaluated, and
(apply f (list x))
where the function f is to be called with argument x.
Eval and apply are the two interdependent components of the eval-apply cycle, which is the essence of evaluating Lisp, described in SICP.
In category theory, the eval morphism is used to define the closed monoidal category. Thus, for example, the category of sets, with functions taken as morphisms, and the cartesian product taken as the product, forms a Cartesian closed category. Here, eval (or, properly speaking, apply) together with its right adjoint, currying, form the simply typed lambda calculus, which can be interpreted to be the morphisms of Cartesian closed categories.
References
External links
ANSI and GNU Common Lisp Document: eval function
Python Library Reference: eval built-in function
Jonathan Johnson on exposing classes to RBScript
Examples of runtime evaluation in several languages on Rosetta Code
Control flow
Unix SUS2008 utilities |
13202347 | https://en.wikipedia.org/wiki/ES7000 | ES7000 | The ES7000 is Unisys's x86/Windows, Linux and Solaris-based server product line. The "ES7000" brand has been used since 1999, although variants and models within the family support various processor and bus architectures. The server is marketed and positioned as a scale-up platform where scale-out becomes inefficient. Typically the ES7000 is utilized as a platform for homogeneous consolidation, large databases (SQL Server and Oracle), Business Intelligence, Decision Support Systems, ERP, virtualization, as well as large Linux application hosting.
The hardware and software elements of the server are monitored by a software suite known as Server Sentinel.
Architecture
Elements of the ES7000 architecture include:
Multiple power domains
N+1 redundancy for most components
Subpod CPU scaling (4-CPU increments)
Centralized memory/cache control
Shared cache
Point to point crossbar connections (fleXbar) among memory, processors, and I/O components
Multiple I/O PCI bridges and buses
Up to 8 direct I/O bridges, each providing 3 independent PCI buses, supporting 96 PCI slots (on the 100 and 200 series)
Multiple memory storage units that can be combined or used separately
History
This server family has undergone several model revisions in its lifetime since 1999. Initially, the servers were standalone—physically the configuration resembled a rack and took up a somewhat larger footprint than a rack (Models 100, 130, 200, 230, 550, 400). Second and third generation ES7000s were rack mountable cells 4U or 3U high that fit in standard 19" racks.
First generation systems
ES7000/100 Series - (1999/2000) Support for 32 Xeon processors, 64 GB RAM, 96 PCI slots under Microsoft Windows NT EE and Windows 2000 DC
Second generation systems
ES7000/200 Series - Support for up to 32 Xeon Processors, 64 GB RAM, 96 PCI slots under Microsoft Windows NT EE and Windows 2000 DC
ES7000/230 Series - Support for up to 32 Xeon Processors, 64 GB RAM, 96 PCI slots under Windows 2000 DC and Windows 2003 DC
ES7000/130 Series - Support for up to 32 Itanium processors
Third generation systems
ES7000/500 Series (510/420/530/540)
ES7000/550 Series - Support for up to 32 Xeon processors, 64 GB RAM, 96 PCI Slots under Windows 2000 DC and Windows 2003 DC
ES7000/400 Series (405/410/420/430/440) - Support for up to 32 Itanium processors and 128GB RAM under Windows 2000 DC and Windows 2003 DC
Fourth generation systems
ES7000/600 Series - Support for up to 32 Dual Core Xeon or Itanium processors, 256 GB RAM, 40 PCI Slots under Windows 2003 DC
ES7000/One Series - Support for up to 32 Dual Core Xeon or Itanium 2 processors, 256 GB RAM, 40 PCI Slots under Windows 2003 DC
Fifth generation systems
From late 2008, the ES7000 7600R ("Kona"), scalable from 1 cell with 24 hex-core Xeon cores to 4 cells with 96 cores and 1 TB of memory
Hex-core processors and a high-I/O-throughput crossbar give Kona twice the performance of the previous top-of-the-line ES7000/One on half the cells, at a fraction of the price and in one-third less rack space
Built for green computing, scale-up database, scale-up virtualization (Hyper-V, VMware) and application-consolidation workloads, gaining performance and cost savings (administration/maintenance, floor space, heating, cooling) relative to many smaller scale-out boxes
Number 1 TPC-E benchmark
Form factors
The ES7000 models are broken down into three form factors.
Cabinet/Frame size (A monolithic, midplane architecture, but deeper than a conventional rack)
4U-size cell with up to 8 processors per cell and 8 PCI slots (up to four cells can be bound together to create a 32-CPU system)
3U-size cell with up to 4 processor sockets per cell and 5 PCI slots (up to eight cells can be strapped together to create a 32-socket system)
Processors
The processors used in the ES7000 are:
Intel Xeon
Intel Multicore Xeon
Intel Itanium and Itanium 2
AMD Opteron
Operating systems supported
ES7000 servers support the Microsoft Windows operating system on both 32-bit Xeon and 64-bit Itanium hardware, 32-bit and 64-bit versions of some Linux operating systems, and the Solaris Operating System.
Windows: 2003 and Windows 2008
Linux: Novell SUSE Linux Enterprise Server and Red Hat Enterprise Linux
VMware: ESX 3.02 and ESX 3.5
Unisys OS2200
Unisys MCP
Solaris
References
External links
"Server Snapshots: Spotlight on Unisys" at ServerWatch Magazine
"ES7000 Enterprise Servers" at Unisys
"Unisys Server Sentinel" at Unisys
Server hardware |
62726843 | https://en.wikipedia.org/wiki/Mars%20Plus | Mars Plus | Mars Plus is a 1994 science fiction novel by American writer Frederik Pohl and Thomas T. Thomas. It is the sequel to Pohl's 1976 novel Man Plus, which is about a cyborg, Roger Torraway, who is designed to operate in the harsh Martian environment, so that humans can start to colonize Mars. Mars Plus is set fifty years after the first novel. Young Demeter Coghlan travels to Mars, now settled by humans and cyborgs, and finds herself amidst a rebellion by the colonists.
Plot
In Man Plus, set in the not-too-distant future, with threat of the Cold War becoming a fighting war, people plan for the colonization of Mars to escape the seemingly-inevitable Armageddon. The American government begins a cyborg program to create a being capable of surviving the harsh Martian environment: a "Man Plus" called Roger Torraway who is converted from man to cyborg. While his cyborg body is adapted to Mars, he feels strange at first. As more nations develop cyborgs, the computer networks of Earth become sentient.
Mars Plus is set fifty years after the first novel, when Mars is settled by humans, cyborgs, and beings that are a mix between the two. The cyborg Torraway is in the novel, but he is not the main character. The protagonist is Demeter Coghlan, a young woman from Earth who travels to Mars. Demeter is seeking information about a canyon that she believes may be significant if the colonists begin to convert Mars to an Earth-like planet. Amidst a backdrop of spies and newly dispatched Earth diplomats, the inexperienced Demeter senses that tensions are rising on the planet. She is further disoriented because she is recovering from an accident. Despite the risks in the region, Demeter has intense sexual encounters with some of the local colonists. When the locals rebel against the surveillance set up by the computer network, Demeter is kidnapped by it.
Reception
The reviewer from SFBook Reviews criticizes the book, saying "nothing really happens" and stating that there is no linkage to Man Plus apart from the presence of the cyborg Torraway; moreover, the reviewer states that the questions posed in the first novel are not answered.
SF Reviews calls Mars Plus "...not as good as Man Plus but...not bad", and it is praised for "...some nice touches: Demeter continuously forgetting to think about geology; her careless dictation to the computer and her irresistible urges for wild sex." SF Reviews criticizes the writing in Mars Plus for being "...a little careless in places" and in need of "...more crafting and pruning."
References
External links
MIT profile of Pohl
1994 American novels
1994 science fiction novels
American science fiction novels
Novels set during the Cold War
Cyborgs in literature
Novels set on Mars
Artificial intelligence in fiction
Novels by Frederik Pohl
Baen Books books |
1971534 | https://en.wikipedia.org/wiki/CU-SeeMe | CU-SeeMe | CU-SeeMe (also written as CUseeMe or CUSeeMe depending on the source) is an Internet videoconferencing client. CU-SeeMe can make point to point video calls without a server or make multi-point calls through server software first called a "reflector" and later called a "conference server" or Multipoint Control Unit (MCU). Later commercial versions of CU-SeeMe could also make point-to-point or multi-point calls to other vendor's standards-based H.323 endpoints and servers.
History
CU-SeeMe was originally written by Tim Dorcey of the Information Technology department at Cornell University. It was first developed for the Macintosh in 1992 and later for the Windows platform in 1994. Originally it was video-only with audio added in 1994 for the Macintosh and 1995 for Windows. CU-SeeMe's audio came from Maven, an audio-only client developed at the University of Illinois at Urbana-Champaign.
CU-SeeMe was introduced to the public on April 26, 1993 as part of an NSF funded education project called the Global Schoolhouse.
"It is Not About the Technology" tells about Global SchoolNet's Global SchoolHouse Project using the first multi-point Internet-based video conferencing to connect schools in the United States and with schools worldwide. By sending video and audio signals over the Internet using CU-SeeMe software, students were able to see and hear each other while they worked on collaborative assignments. As part of the program they interacted with special guests, such as Vice President Al Gore, the anthropologist Jane Goodall, Senator Dianne Feinstein, and surgeon general C. Everett Koop.
In July 1993, the now-defunct London cable channel Channel One Television used CU-SeeMe to simulcast its programme Digital World on the Internet, becoming the first UK television programme to broadcast live on the web. The programme was frame-grabbed every two frames using a Windows macro written by the duo Thibault & Rav.
In 1994 WXYC used CU-SeeMe to simulcast its signal to the net and so became the world's first internet radio station.
On Thanksgiving morning in 1995, World News Now was the first television program to be broadcast live on the Internet, using a CU-SeeMe interface. Victor Dorff, a producer of WNN at the time, arranged to have the show simulcast on the Internet daily for a six-month trial period. CU-SeeMe was also used in a taped interview segment in which anchor Kevin Newman and Global Schoolhouse director and founder Dr. Yvonne Marie Andres discussed the future of computers in communication.
In March 1996, CU-SeeMe was used for the first ever live internet broadcast of a musical theatre performance with the production of Cowboys in Love: The Hank Plowplucker Story. The show was produced by The Ethereal Mutt - Limited, and the stream was a partnership between Emutt and the CIS staff at Arizona State University.
The Internet Phone Connection, written by Cheryl L. Kirk was one of the first consumer books to feature CuSeeMe. The book outlined how to use the program to communicate across the globe.
From freeware to commercial
CU-SeeMe 2.x was released as a commercial product in 1995 through an agreement with Cornell University. The full commercial licensing rights were transferred to White Pine Software in 1998.
Decline
While not directly competing against hardware-assisted video-conferencing companies, it suffered in that the nascent market was expecting hardware quality audio and video when CPUs of that time weren't really ready to support that quality level in software. Early wide acceptance of CU-SeeMe outside of the hobbyist market was limited by its relatively poor audio/video quality and excessive latency. While the commercial and freeware products were useful to hobbyists, CU-SeeMe and its accompanying server product were beginning to build a following in education - with up to 40% of commercial sales from educational establishments. A spinoff application called ClassPoint which was based on CU-SeeMe and the conference server was released commercially in 1998. It was an early attempt to add features to a real-time collaboration product specifically designed for K-12 education users.
The United States military was a large customer of the technology, making use of the CU-SeeMe Conference Server MCU for many applications, including using the T.120 server for Microsoft NetMeeting endpoints.
White Pine locked out users of version 1.0 from using its free, public videoconferencing chatrooms. As users upgraded to the commercially available version, some were frustrated to discover that others were downloading the trial version and using software registration keys readily supplied by some participants on White Pine's public chatrooms.
Changing names and changing hands
White Pine Software was briefly renamed CUseeMe Networks, then merged with First Virtual Communications. The commercial standalone client was decommissioned (an independent company used a version of the embedded commercial CU-SeeMe client renamed "CU" as part of a fee-based video chat service called CUworld). The commercial client and server environment evolved further, was renamed "Click To Meet" and launched along with an enhanced and more scalable version of the software MCU.
On March 15, 2005, Radvision Ltd. acquired all of the substantial assets and intellectual property of First Virtual Communications (FVC), including its 'Click to Meet' (formerly CUSeeMe) and Conference Server. Radvision was acquired by Avaya in June 2012. Spirent Communications acquired Radvision's Technology Business Unit from Avaya in July 2014. The descendants of the CU-SeeMe technology live on in part in the Radvision Scopia product line.
There is still a small but active community of users of the original CU-SeeMe releases. Although there have been no releases of software from the various incarnations of White Pine since roughly 2000, freeware alternatives are available for both the Windows and Macintosh platform.
CU-SeeMe as part of the legacy of the early Internet
The CU-SeeMe name and legacy remains important for a number of reasons:
CU-SeeMe was an early, widely recognized internet computer application that almost predated the World-Wide-Web;
CU-SeeMe foretold the wider acceptance of videotelephony in a number of markets, and was likely the first product to be referenced using the term 'video chat';
CU-SeeMe software on the client and server sides were one of the first platforms that proved that IP networks could be effectively used for real-time communication and collaboration
See also
Trojan Room coffee pot
References
External links
Internet TV With CU-SeeMe (by Mickey Sattler, co-authored by John Lauer)
Scientist on Tap (by Yvonne Marie Andres)
Groupware
Teleconferencing
Computer-related introductions in 1992
Videotelephony |
16453693 | https://en.wikipedia.org/wiki/4707%20Khryses | 4707 Khryses | 4707 Khryses is a large Jupiter trojan from the Trojan camp. It was discovered on 13 August 1988, by American astronomer Carolyn Shoemaker at the Palomar Observatory in California. The assumed C-type asteroid has a rotation period of 6.9 hours and likely an elongated shape. It was named after the Trojan priest Chryses (Khryses) from Greek mythology.
Orbit and classification
Khryses is a Jupiter trojan in a 1:1 orbital resonance with Jupiter. It is located in the trailing Trojan camp at the Gas Giant's Lagrangian point, 60° behind its orbit. It orbits the Sun at a distance of 4.6–5.8 AU once every 11 years and 10 months (4,322 days; semi-major axis of 5.19 AU). Its orbit has an eccentricity of 0.12 and an inclination of 7° with respect to the ecliptic. The body's observation arc begins with a precovery taken at Palomar in August 1953, or 35 years prior to its official discovery observation.
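As a consistency check using only values quoted in this article, the orbital period follows from the semi-major axis via Kepler's third law (with P in years and a in AU, the Sun's mass dominating):

```latex
P = a^{3/2} = (5.19)^{3/2} \approx 11.8\ \text{yr} \approx 11\ \text{yr}\ 10\ \text{mo}
```

in agreement with the 4,322-day period quoted above.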
Naming
This minor planet was named after the Trojan Chryses (Khryses), a priest of Apollo. His daughter Chryseis (Khryseis) was abducted by Agamemnon during the Trojan War. Apollo then sent a plague sweeping through the Greek camp, forcing Agamemnon to give back the priest's daughter. The official naming citation was published by the Minor Planet Center on 28 April 1991 ().
Physical characteristics
Khryses is an assumed, carbonaceous C-type asteroid. Most Jupiter trojans are D-types, with the remainder being mostly C- and P-type asteroids.
Rotation period
Since 2013, several rotational lightcurves of Khryses have been obtained from photometric observations by Daniel Coley and Robert Stephens at the Center for Solar System Studies in Landers, California. The best-rated lightcurve, obtained by Daniel Coley from four nights of observations in 2017, gave a well-defined rotation period of 6.9 hours with a brightness amplitude of 0.41 magnitude, which indicates that the body has a non-spherical shape ().
Diameter and albedo
According to the survey carried out by the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Khryses measures 37.77 kilometers in diameter and its surface has an albedo of 0.086, while the Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 42.23 kilometers based on an absolute magnitude of 10.6.
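The CALL figure can be reproduced from the standard minor-planet conversion between absolute magnitude H, geometric albedo p_V and diameter D (a generic relation, not specific to this survey):

```latex
D = \frac{1329\ \mathrm{km}}{\sqrt{p_V}}\, 10^{-H/5}
  = \frac{1329}{\sqrt{0.057}}\, 10^{-10.6/5}\ \mathrm{km}
  \approx 42.2\ \mathrm{km}
```

matching the 42.23 km quoted above for an albedo of 0.057 and an absolute magnitude of 10.6.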
Notes
References
External links
Asteroid Lightcurve Database (LCDB), query form (info )
Dictionary of Minor Planet Names, Google books
Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center
Asteroid 4707 Khryses at the Small Bodies Data Ferret
004707
Discoveries by Carolyn S. Shoemaker
Minor planets named from Greek mythology
Named minor planets
19880813 |
714231 | https://en.wikipedia.org/wiki/AirPort%20Express | AirPort Express | The AirPort Express is a Wi-Fi base station product from Apple Inc., part of the AirPort product line. While more compact and in some ways simpler than another Apple Wi-Fi base station, the AirPort Extreme, the Express offers audio output capability the Extreme lacks. The AirPort Express was the first AirPlay device to receive streamed audio from a computer running iTunes on the local network. AirPort Express exceeds the stringent requirements of the ENERGY STAR Program Requirements for Small Network Equipment (SNE) Version 1.0.
Apple discontinued developing its wireless routers in 2018, but continues limited support of later models.
Description
When connected to an Ethernet network, the Express can function as a wireless access point. The latest model allows up to 50 networked users. It can be used as an Ethernet-to-wireless bridge under certain wireless configurations. It can be used to extend the range of a network, and can also function as a print server and audio server. The model introduced in June 2012 includes two Ethernet ports: one WAN and one LAN.
The original version (M9470LL/A, model A1084) was introduced by Apple on July 7, 2004, and included an analog–optical audio mini-jack output, a USB port for remote printing or charging the iPod (iPod shuffle only), and one Ethernet port. The main processor of the 802.11g AirPort Express was a Broadcom BCM4712KFB wireless networking chipset, which incorporated a 200 MHz MIPS processor. The audio was handled by a Texas Instruments Burr-Brown PCM2705 16-bit digital-to-analog converter.
An updated version (MB321LL/A, model A1264) supporting the faster 802.11 Draft-N draft specification and operation in either of the 2.4 GHz and 5 GHz bands, with almost all other features identical, was introduced by Apple in March 2008. The revised unit includes an 802.11a/n (5 GHz) mode, which allows adding Draft-N to an existing 802.11b/g network without disrupting existing connections, while preserving the increased throughput that Draft-N can provide. Up to 10 wireless units can connect to this AirPort Express.
The AirPort Express uses an audio connector that combines a 3.5 mm minijack socket and a mini-TOSLINK optical digital transmitter, allowing connection to an external digital-to-analog converter (DAC) or amplifier with internal DAC. Standard audio CDs ripped in iTunes into Apple Lossless format streamed to the AirPort Express will output a bit-for-bit identical bitstream when compared to the original CD (provided any sound enhancement settings in iTunes are disabled). DTS-encoded CDs ripped to Apple Lossless audio files - which decode as digital white noise in iTunes - will play back correctly when the AirPort Express is connected via TOSLINK to a DTS-compatible amplifier–decoder. This is limited to 16-bit and 44.1 kHz when streaming from iTunes; any higher quality content, such as high fidelity audio that uses up to 24-bit and/or 192 kHz will be truncated down to 16-bit and 44.1 kHz.
The audio output feature of the AirPort Express on a system running OS X Lion or earlier can only be used to wirelessly stream audio files from within iTunes to an attached stereo system. It cannot be used to output the soundtrack of iTunes video content to an attached stereo. OS X Mountain Lion introduced AirPlay support, a feature to output Mac system-wide audio directly to AirPort Express. This allows output of the audio of protected video content within iTunes, and also correctly maintains the audio sync with the image displayed on-screen. Video is synced with output audio when playing the video through an AirPort Express if the video is in a format supported by QuickTime Player (such as HTML 5 video in Safari etc.).
For Windows and Mac operating systems (before OS X Mountain Lion) there are a few software options available for streaming system-wide audio to the AirPort Express, such as Airfoil, TuneBlade and Porthole.
On August 28, 2018 Apple added AirPlay 2 support to the 2012 AirPort Express, providing the ability to be added to the Apple Home app as an audio destination.
Discontinuation and support
According to a Bloomberg report on November 21, 2016, "Apple Inc. has disbanded its division that develops wireless routers, another move to try to sharpen the company’s focus on consumer products that generate the bulk of its revenue, according to people familiar with the matter."
In an April 2018 statement to 9to5Mac, Apple announced the discontinuation of its AirPort line, effectively leaving the consumer router market. Apple continued supporting the AirPort Express, although an older version of its "AirPort Utility" is required to support the earliest version of the device.
Models
See also
AirPort Time Capsule
Notes
Apple Inc. peripherals
2004 introductions
Discontinued Apple Inc. products |
40675585 | https://en.wikipedia.org/wiki/PirateBox | PirateBox | A PirateBox is a portable electronic device, often consisting of a Wi-Fi router and a device for storing information, creating a wireless network that allows users who are connected to share files anonymously and locally. By design, this device is disconnected from the Internet.
The PirateBox was originally designed to exchange data freely under the public domain or under a free license.
History
The PirateBox was designed in 2011 by David Darts, a professor at the Steinhardt School of Culture, Education and Human Development at New York University, under the Free Art License. It has since become highly popular in Western Europe, particularly in France, where it was promoted by Jean Debaecker, and its development is largely maintained by Matthias Strubel. Usage of the PirateBox concept has slowly shifted away from simple local file sharing toward education, in public schools or at private events such as CryptoParties; circumvention of censorship is also a crucial point, since the device can be operated behind strong physical barriers.
On 17 November 2019, Matthias Strubel announced the closure of the PirateBox project, citing the growing number of routers with locked firmware and browsers enforcing HTTPS.
Set up
As of version 1.0, there is an improved installation path, with only a few steps followed by an automatic install.
Raspberry Pi Setup
The PirateBox can be set up on a Raspberry Pi; the steps can be followed in the referenced article.
Uses
Users connect to the PirateBox via Wi-Fi (using a laptop, for example) without having to learn the password. They can then access the local web page of the PirateBox to download or upload files, or access an anonymous chat room or forum. All such data exchanges are confined to the PirateBox's local network and are not connected to the Internet.
Several educational projects use the devices to deliver content to students allowing them to share by chat or forum. The PirateBox is also used in places where Internet access is rare or impractical.
Devices which can be converted to a PirateBox
Android (v2.3+) devices: an unofficial port allows running a PirateBox on some rooted Android devices (e.g. smartphones and tablet computers). PirateBox for Android is available from Google Play (since June 2014).
PirateBox Live USB: allows one to turn a computer temporarily into a PirateBox
Raspberry Pi
Chip
Wi-Fi routers
Not an exhaustive list:
TP-Link MR3020 – the first device modified by Darts
TP-Link MR3040
Zsun WiFi Card Reader – can be hacked by installing OpenWrt; there are efforts to produce easy installation instructions for PirateBox on this device.
The PirateBox official wiki has an up-to-date hardware-list of compatible devices.
See also
USB dead drop, a similar concept
FreedomBox, a project similar to the PirateBox (plug computer version)
Shoutr, a similar Android solution
Router (computing)
Sneakernet
References
External links
Official Page, official forum, Wiki of the main developer
Linuxjournal.com
A Pirate Box for Sharing Files
PirateBox Takes File-Sharing Off The Radar and Offline, For Next To Nothing (TorrentFreak, March 2012)
PirateBox: an "artistic provocation" in lunchbox form
File sharing
Computer art |
1703451 | https://en.wikipedia.org/wiki/Cray%20Time%20Sharing%20System | Cray Time Sharing System | The Cray Time Sharing System, also known in the Cray user community as CTSS, was developed as an operating system for the Cray-1 or Cray X-MP line of supercomputers. CTSS was developed by the Los Alamos Scientific Laboratory (LASL now LANL) in conjunction with the Lawrence Livermore Laboratory (LLL now LLNL). CTSS was popular with Cray sites in the United States Department of Energy (DOE), but was used by several other Cray sites, such as the San Diego Supercomputing Center.
Overview
The predecessor of CTSS was the Livermore Time Sharing System (LTSS), which ran on the Control Data CDC 7600 line of supercomputers. The first compiler was known as LRLTRAN, for Lawrence Radiation Laboratory forTRAN, an extension of the FORTRAN 66 programming language with dynamic memory and other features. The Cray version, which added automatic vectorization, was known as CVC, pronounced "Civic" like the Honda car of the period, for Cray Vector Compiler.
Some controversy existed at LASL with the first attempt to develop an operating system for the Cray-1 named DEIMOS, a message-passing, Unix-like operating system, by Forrest Basket. DEIMOS had initial "teething" problems common to the performance of all early operating systems. This left a bad taste for Unix-like systems at the National Laboratories and with the manufacturer, Cray Research, Inc., of the hardware who went on to develop their own batch oriented operating system, COS (Cray Operating System) and their own vectorizing Fortran compiler named "CFT" (Cray ForTran) both written in the Cray Assembly Language (CAL).
CTSS had the misfortune of having certain constants and structures optimized in Cray-1 architecture-dependent ways, and of lacking certain networking facilities (TCP/IP); these could not be changed without extensive rework when larger-memory supercomputers like the Cray-2 and the Cray Y-MP came into use. CTSS drew its final breaths running on Cray instruction-set-compatible hardware developed by Scientific Computer Systems (SCS-40 and SCS-30) and Supertek (S-1), but this did not save the software.
CTSS embodied certain unique ideas, such as market-driven priorities for working/running processes.
An attempt to succeed CTSS was started by LLNL named NLTSS (New Livermore Time Sharing System) to embody advanced concepts for operating systems to better integrate communication using a new network protocol named LINCS while also keeping the best features of CTSS. NLTSS followed the development fate of many operating systems and only briefly ran on period Cray hardware of the late 1980s.
A user-level CTSS Overview from 1982 provides, in Chapter 2, a brief list of CTSS features. Other references are likely to be found in proceedings of the Cray User Group (CUG) and the ACM SOSP (Symposium on Operating Systems Principles). However, because LANL and LLNL were nuclear weapons facilities, security restrictions make it unlikely that much greater detail about many of these pieces of software will ever come to light.
See also
EOS (operating system)
Timeline of operating systems
References
Cray software
Fortran software
Time-sharing operating systems
Proprietary operating systems
Supercomputer operating systems |
29656 | https://en.wikipedia.org/wiki/Steve%20Ballmer | Steve Ballmer | Steven Anthony Ballmer (; March 24, 1956) is an American businessman and investor who served as the chief executive officer of Microsoft from 2000 to 2014. He is the current owner of the Los Angeles Clippers of the National Basketball Association (NBA). As of February 2022, Bloomberg Billionaires Index estimates his personal wealth at around $111 billion, making him the ninth-richest person on Earth.
Ballmer was hired by Bill Gates at Microsoft in 1980, and subsequently left the MBA program at Stanford University. He eventually became president in 1998, and replaced Gates as CEO on January 13, 2000. On February 4, 2014, Ballmer retired as CEO and was replaced by Satya Nadella; Ballmer remained on Microsoft's Board of Directors until August 19, 2014, when he left to prepare for teaching a new class.
His tenure and legacy as Microsoft CEO has received mixed reception, with the company tripling sales and doubling profits, but losing its market dominance and missing out on 21st-century technology trends such as the ascendance of smartphones in the form of iPhone and Android.
Early life and education
Ballmer was born in Detroit, Michigan; he is the son of Beatrice Dworkin and Frederic Henry Ballmer (Fritz Hans Ballmer), a manager at the Ford Motor Company. His father was a Swiss immigrant who predicted that his son, at eight years old, would attend Harvard. His mother was Belarusian Jewish. Through his mother, Ballmer is a second cousin of actress and comedian Gilda Radner. Ballmer grew up in the affluent community of Farmington Hills, Michigan. Ballmer also lived in Brussels from 1964 to 1967, where he attended the International School of Brussels.
In 1973, he attended college prep and engineering classes at Lawrence Technological University. He graduated as valedictorian from Detroit Country Day School, a private college preparatory school in Beverly Hills, Michigan, with a score of 800 on the mathematical section of the SAT and was a National Merit Scholar. He formerly sat on the school's board of directors. In 1977, he graduated magna cum laude from Harvard University with a Bachelor of Arts in applied mathematics and economics.
At college, Ballmer was a manager for the Harvard Crimson football team and a member of the Fox Club, worked on The Harvard Crimson newspaper as well as the Harvard Advocate, and lived down the hall from fellow sophomore Bill Gates. He scored highly in the William Lowell Putnam Mathematical Competition, an exam sponsored by the Mathematical Association of America, scoring higher than Bill Gates. He then worked as an assistant product manager at Procter & Gamble for two years, where he shared an office with Jeff Immelt, who later became CEO of General Electric. After briefly trying to write screenplays in Hollywood, in 1980 Ballmer dropped out of the Stanford Graduate School of Business to join Microsoft.
History with Microsoft
Ballmer joined Microsoft on June 11, 1980, and became Microsoft's 30th employee, the first business manager hired by Gates.
Ballmer was offered a salary of $50,000 as well as 5–10% of the company. When Microsoft was incorporated in 1981, Ballmer owned 8% of the company. In 2003, Ballmer sold 39.3 million Microsoft shares, equating to approximately $955 million, thereby reducing his ownership to 4%. The same year, he replaced Microsoft's employee stock options program with stock grants.
In the 20 years following his hire, Ballmer headed several Microsoft divisions, including operations, operating systems development, and sales and support. From February 1992 onwards, he was Executive Vice President, Sales, and Support. Ballmer led Microsoft's development of the .NET Framework. Ballmer was then promoted to President of Microsoft, a title that he held from July 1998 to February 2001, making him the de facto number two in the company to the chairman and CEO, Bill Gates.
Chief Executive Officer (2000–2014)
On January 13, 2000, Ballmer was officially named the chief executive officer. As CEO, Ballmer handled company finances and daily operations, but Gates remained chairman of the board and still retained control of the "technological vision" as chief software architect. Gates relinquished day-to-day activities when he stepped down as chief software architect in 2006, while staying on as chairman, and that gave Ballmer the autonomy needed to make major management changes at Microsoft.
When Ballmer took over as CEO, the company was fighting an antitrust lawsuit brought on by the U.S. government and 20 states, plus class-action lawsuits and complaints from rival companies. While it was said that Gates would have continued fighting the suit, Ballmer made it his priority to settle these saying: "Being the object of a lawsuit, effectively, or a complaint from your government is a very awkward, uncomfortable position to be in. It just has all downside. People assume if the government brought a complaint that there's really a problem, and your ability to say we're a good, proper, moral place is tough. It's actually tough, even though you feel that way about yourselves."
Upon becoming CEO, Ballmer required detailed business justification before approving new products, rather than allowing hundreds of products that merely sounded potentially interesting or trendy. In 2005, he recruited B. Kevin Turner, then President and CEO of Walmart's Sam's Club division, to become Microsoft's Chief Operating Officer. Turner was hired to lead the company's sales, marketing and services group and to instill more process and discipline in the company's operations and salesforce.
Since Bill Gates' retirement, Ballmer oversaw a "dramatic shift away from the company's PC-first heritage", replacing most major division heads in order to break down the "talent-hoarding fiefdoms", and Businessweek said that the company "arguably now has the best product lineup in its history". Ballmer was instrumental in driving Microsoft's connected computing strategy, with acquisitions such as Skype.
Under Ballmer's tenure as CEO, Microsoft's share price stagnated. The lackluster stock performance occurred despite Microsoft's financial success at that time. The company's annual revenue surged from $25 billion to $70 billion, while its net income increased 215% to $23 billion, and its gross profit of 75 cents on every dollar in sales was double that of Google or IBM. In terms of leading the company's total annual profit growth, Ballmer's tenure at Microsoft (16.4%) surpassed the performances of other well-known CEOs such as General Electric's Jack Welch (11.2%) and IBM's Louis V. Gerstner Jr. (2%). These gains came from the existing Windows and Office franchises, with Ballmer maintaining their profitability and fending off threats from competitors such as Linux and other open-source operating systems and Google Docs. Ballmer also built half a dozen new businesses, such as the data centers division and the Xbox entertainment and devices division ($8.9 billion), which prevented the Sony PlayStation and other gaming consoles from undermining Windows, and oversaw the acquisition of Skype. Ballmer also constructed the company's $20 billion Enterprise Business, consisting of new products and services such as Exchange, Windows Server, SQL Server, SharePoint, System Center, and Dynamics CRM, each of which initially faced an uphill battle for acceptance but emerged as leading or dominant in its category. This diversified product mix helped to offset the company's reliance on PCs and mobile computing devices as the company entered the post-PC era; in reporting quarterly results during April 2013, while Windows Phone 8 and Windows 8 had not managed to increase their market share above single digits, the company increased its profit 19% over the previous quarter in 2012, as the Microsoft Business Division (including Office 365) and the Server and Tools division (cloud services) were each larger than the Windows division.
Ballmer attracted criticism for failing to capitalize on several new consumer technologies, forcing Microsoft to play catch-up in the areas of tablet computing, smartphones and music players with mixed results. Under Ballmer's watch, "In many cases, Microsoft latched onto technologies like smartphones, touchscreens, 'smart' cars and wristwatches that read sports scores aloud long before Apple or Google did. But it repeatedly killed promising projects if they threatened its cash cows [Windows and Office]." Ballmer was even named one of the worst CEOs of 2013 by the BBC. As a result of these many criticisms, in May 2012, hedge fund manager David Einhorn called on Ballmer to step down as CEO of Microsoft. "His continued presence is the biggest overhang on Microsoft's stock," Einhorn said in reference to Ballmer. In a May 2012 column in Forbes magazine, Adam Hartung described Ballmer as "the worst CEO of a large publicly traded American company", saying he had "steered Microsoft out of some of the fastest growing and most lucrative tech markets (mobile music, headsets and tablets)".
In 2009, and for the first time since Bill Gates resigned from day-to-day management at Microsoft, Ballmer delivered the opening keynote at CES.
As part of his plans to expand on hardware, on June 19, 2012, Ballmer revealed Microsoft's first ever computer device, a tablet called Microsoft Surface at an event held in Hollywood, Los Angeles. He followed this by announcing the company's purchase of Nokia's mobile phone division in September 2013, his last major acquisition for Microsoft as CEO.
On August 23, 2013, Microsoft announced that Ballmer would retire within the next 12 months. A special committee that included Bill Gates would decide on the next CEO.
There was a list of potential successors to Ballmer as Microsoft CEO, but all had departed the company: Jim Allchin, Brad Silverberg, Paul Maritz, Nathan Myhrvold, Greg Maffei, Pete Higgins, Jeff Raikes, J. Allard, Robbie Bach, Bill Veghte, Ray Ozzie, Bob Muglia and Steven Sinofsky. B. Kevin Turner, Microsoft's Chief Operating Officer (COO), was considered by some to be a de facto number two to Ballmer, with Turner having a strong grasp of business and operations but lacking technological vision. On February 4, 2014, Satya Nadella succeeded Ballmer as CEO.
Public image
Although as a child he was so shy that he would hyperventilate before Hebrew school, Ballmer is known for his energetic and exuberant personality, meant to motivate employees and partners; he has shouted so much at company events that he needed surgery on his vocal cords.
Ballmer's flamboyant stage appearances at Microsoft events are widely circulated on the Internet as viral videos. One of his earliest known viral videos was a parody video, produced for Microsoft employees in 1986, promoting Windows 1.0 in the style of a Crazy Eddie commercial. Ballmer and Brian Valentine repeated this in a spoof promotion of Windows XP later on.
A widely circulated video was his entrance on stage at Microsoft's 25th anniversary event in September 2000, where Ballmer jumped across the stage and shouted "I love this company!" Another well-known viral video was one captured at a Windows 2000 developers' conference, featuring a perspiring Ballmer chanting the word "developers".
Relationship with Bill Gates
Ballmer was Gates' best man at his wedding to Melinda French, and the two men described their relationship as a marriage. They were so close for years that another Microsoft executive described it as a mind meld. Combative debates, a part of Microsoft's corporate culture, occurred within the relationship, and many observers believed them to be personal arguments; while Gates was glad in 2000 that Ballmer was willing to become CEO so he could focus on technology, the Wall Street Journal reported that there was tension surrounding the transition of authority. Things became so bitter that, on one occasion, Gates stormed out of a meeting after a shouting match in which Ballmer jumped to the defense of several colleagues, according to an individual present at the time. After the exchange, Ballmer seemed "remorseful", the person said. Once Gates leaves, "I'm not going to need him for anything. That's the principle", Ballmer said. "Use him, yes, need him, no".
In October 2014, a few months after Ballmer left his post at Microsoft, a Vanity Fair profile stated that Ballmer and Gates no longer talk to each other due to animosity over Ballmer's resignation. In a November 2016 interview, Ballmer said he and Gates have "drifted apart" ever since, saying that they always had a "brotherly relationship" beforehand. He said that his push into the hardware business, specifically smartphones, which Gates did not support, contributed to their relationship breakdown.
Retirement
After saying in 2008 that he intended to remain CEO for another decade, Ballmer announced his retirement in 2013, after losing billions of dollars in acquisitions and on the Surface tablet. Microsoft's stock price rebounded on the news.
Ballmer said that he regretted the lack of focus on Windows Mobile in the early 2000s, which left Microsoft a distant third in the smartphone market, and he attributed the success of the expensively priced iPhone to carrier subsidies. He called the acquisition of Nokia's mobile phone division his "toughest decision" during his tenure, as it came while he was overseeing the changing profile of Microsoft as the company expanded into hardware.
Ballmer hosted his last company meeting in September 2013 and stepped down from the company's board of directors in August 2014.
On December 24, 2014, the Seattle Times reported that the IRS sued Ballmer, Craig Mundie, Jeff Raikes, Jim Allchin, Orlando Ayala and David Guenther in an effort to compel them to testify in Microsoft's corporate tax audit. The IRS has been looking into how Microsoft and other companies deal with transfer pricing.
Other positions
Ballmer served as director of Accenture Ltd. and a general partner of Accenture SCA from 2001 to 2006.
On competing companies and software
Apple
In 2007, Ballmer said "There's no chance that the [Apple] iPhone is going to get any significant market share. No chance."
Speaking at a conference in NYC in 2009, Ballmer criticized Apple's pricing, saying, "Now I think the tide has turned back the other direction (against Apple). The economy is helpful. Paying an extra $500 for a computer in this environment—same piece of hardware—paying $500 more to get a logo on it? I think that's a more challenging proposition for the average person than it used to be."
In 2015, Ballmer called Microsoft's 1997 decision to invest in Apple, saving it from bankruptcy, the "craziest thing we ever did." By 2015, Apple was the world's most valuable company.
In 2016, Ballmer gave an interview to Bloomberg in which he added context to his iPhone statement: "People like to point to this quote...but the reason I said that was the price of $600-$700 was too high." He said he had not anticipated the business-model innovation that Apple would deploy: using the carriers to subsidize the phones by building the cost into the customer's monthly bill.
Free and open source software
In July 2000, Ballmer called the free software Linux kernel "communism" and further claimed that it infringed on Microsoft's intellectual property. In June 2001 he called Linux a "cancer that attaches itself in an intellectual property sense to everything it touches". Ballmer used the notion of "viral" licensing terms to express his concern over the fact that the GNU General Public License (GPL) employed by such software requires that all derivative software be under the GPL or a compatible license. In April 2003 he even interrupted his skiing holiday in Switzerland to personally plead with the mayor of Munich not to switch to Linux. He did not succeed, and Munich switched to LiMux despite his offer of a 35% discount during the lobbying visit.
In March 2016, Ballmer changed his stance on Linux, saying that he supports his successor Satya Nadella's open source commitments. He maintained that his comments in 2001 were right at the time but that times have changed.
Google
In 2005, Microsoft sued Google for hiring one of its previous vice presidents, Kai-Fu Lee, claiming it violated the one-year non-compete clause in his contract. Mark Lucovsky, who left for Google in 2004, alleged in a sworn statement to a Washington state court that Ballmer became enraged upon hearing that Lucovsky was about to leave Microsoft for Google, picked up his chair and threw it across his office, and that, referring to then Google Executive Chairman Eric Schmidt (who had previously worked for competitors Sun and Novell), Ballmer vowed to "kill Google".
Ballmer then resumed attempting to persuade Lucovsky to stay at Microsoft. Ballmer has described Lucovsky's account of the incident as a "gross exaggeration of what actually took place".
During the 2011 Web 2.0 Summit in San Francisco, he said: "You don't need to be a computer scientist to use a Windows Phone and you do to use an Android phone ... It is hard for me to be excited about the Android phones."
In 2013, Ballmer said that Google was a "monopoly" that should be pressured by market competition authorities.
Sports
On March 6, 2008, Seattle mayor Greg Nickels announced that a local ownership group involving Ballmer made a "game-changing" commitment to invest $150 million in cash toward a proposed $300 million renovation of KeyArena and were ready to purchase the Seattle SuperSonics from the Professional Basketball Club LLC in order to keep the team in Seattle. However, this initiative failed, and the SuperSonics relocated to Oklahoma City, Oklahoma, where they now play as the Oklahoma City Thunder.
In June 2012, Ballmer was an investor in Chris R. Hansen's proposal to build a new arena in the SoDo neighborhood of Seattle and bring the SuperSonics back to Seattle. On January 9, 2013, Ballmer and Hansen led a group of investors in an attempt to purchase the Sacramento Kings from the Maloof family and relocate them to Seattle for an estimated $650 million. However, this attempt also fell through.
Following the Donald Sterling scandal in May 2014, Ballmer was the highest bidder in an attempt to purchase the Los Angeles Clippers for a reported price of $2 billion, which is the second highest bid for a sports franchise in North American sports history (after the $2.15 billion sale of the Los Angeles Dodgers in 2012). After a California court confirmed the authority of Shelly Sterling to sell the team, it was officially announced on August 12, 2014, that Ballmer would become the Los Angeles Clippers owner.
On September 25, 2014, Ballmer said he would bar the team from using Apple products such as iPads, and replace them with Microsoft products. It has been reported that he had previously also barred his family from using iPhones.
In March 2020, Ballmer agreed to buy The Forum in Inglewood, California. The purchase would allow him to build the Intuit Dome in the nearby area since plans for a new Clippers' arena were opposed by the former owners of The Forum.
In a survey conducted by The Athletic in December 2020, Ballmer was voted the best owner in basketball.
Wealth
Ballmer was the second person after Roberto Goizueta to become a billionaire in U.S. dollars based on stock options received as an employee of a corporation in which he was neither a founder nor a relative of a founder. As of November 2021, Bloomberg Billionaires Index estimates his personal wealth at $117 billion, ranking him as the 8th richest person in the world.
Philanthropy
On November 12, 2014, it was announced that Ballmer and his wife Connie donated $50 million to the University of Oregon. Connie Ballmer is a University of Oregon alumna and previously served on the institution's board of trustees. The funds will go towards the university's $2 billion fundraising effort, and will focus on scholarships, public health research and advocacy, and external branding/communications.
On November 13, 2014, it was announced that Ballmer would provide a gift, estimated at $60 million, to Harvard University's computer science department. The gift would allow the department to hire new faculty, and hopefully increase the national stature of the program. Ballmer previously donated $10 million to the same department in 1994, in a joint-gift with Bill Gates.
Ballmer serves on the World Chairman's Council of the Jewish National Fund, which means he has donated US$1 million or more to the JNF.
USAFacts
Ballmer launched USAFacts.org in 2017, a non-profit organization whose goal is to allow people to understand US government revenue, spending and societal impact. He is reported to have contributed $10 million to fund teams of researchers who populated the website's database with official data.
Personal life
In 1990, Ballmer married Connie Snyder; they have three sons.
The Ballmers live in Hunts Point, Washington.
External links
Corporate biography
CS50 Lecture by Steve Ballmer at Harvard University, November 2014
South China Morning Post audio interview
Steve Ballmer Playlist Appearance on WMBR's Dinnertime Sampler radio show February 23, 2005
Forbes Profile
Han unification

Han unification is an effort by the authors of Unicode and the Universal Character Set to map multiple character sets of the Han characters of the so-called CJK languages into a single set of unified characters. Han characters are a feature shared in common by written Chinese (hanzi), Japanese (kanji), Korean (hanja) and Vietnamese (chữ Hán).
Modern Chinese, Japanese and Korean typefaces typically use regional or historical variants of a given Han character. In the formulation of Unicode, an attempt was made to unify these variants by considering them different glyphs representing the same "grapheme", or orthographic unit; hence, "Han unification", with the resulting character repertoire sometimes contracted to Unihan. Nevertheless, many characters have regional variants assigned to different code points, such as Traditional 個 (U+500B) versus Simplified 个 (U+4E2A).
Unihan can also refer to the Unihan Database maintained by the Unicode Consortium, which provides information about all of the unified Han characters encoded in the Unicode Standard, including mappings to various national and industry standards, indices into standard dictionaries, encoded variants, pronunciations in various languages, and an English definition. The database is available to the public as text files and via an interactive website. The latter also includes representative glyphs and definitions for compound words drawn from the free Japanese EDICT and Chinese CEDICT dictionary projects (which are provided for convenience and are not a formal part of the Unicode Standard).
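The text-file release of the Unihan Database uses a simple tab-separated layout: each data line is a `U+XXXX` code point, a field name such as `kDefinition`, and a value, with `#` marking comment lines. A minimal parsing sketch (the sample lines are illustrative, not quoted from the actual release):

```python
# Minimal parser for the Unihan database's tab-separated text format.
# Data lines look like "U+4E00<tab>kDefinition<tab>..."; "#" starts a comment.
# The sample below is illustrative, not an excerpt of the real file.
sample = (
    "# Unihan_Readings.txt (illustrative excerpt)\n"
    "U+4E00\tkDefinition\tone; a, an; alone\n"
    "U+4E00\tkMandarin\tyi\n"
    "U+4E00\tkJapaneseOn\tICHI ITSU\n"
)

def parse_unihan(text):
    """Return {code_point: {field_name: value}} from Unihan-format text."""
    entries = {}
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        cp_str, field, value = line.split("\t", 2)
        entries.setdefault(int(cp_str[2:], 16), {})[field] = value  # strip "U+"
    return entries

db = parse_unihan(sample)
print(chr(0x4E00), db[0x4E00]["kDefinition"])
```

The real files (e.g. `Unihan_Readings.txt`) follow the same line format, so a parser like this extends naturally to the full database.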
Rationale and controversy
The Unicode Standard details the principles of Han unification.
The Ideographic Research Group (IRG), made up of experts from the Chinese-speaking countries, North and South Korea, Japan, Vietnam, and other countries, is responsible for the process.
One possible rationale is the desire to limit the size of the full Unicode character set, where CJK characters as represented by discrete ideograms may approach or exceed 100,000 characters. Version 1 of Unicode was designed to fit into 16 bits and only 20,940 characters (32%) out of the possible 65,536 were reserved for these CJK Unified Ideographs. Unicode was later extended to 21 bits allowing many more CJK characters (92,865 are assigned, with room for more).
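The move past 16 bits is concrete in the encoding: ideographs in CJK Unified Ideographs Extension B and later blocks sit above U+FFFF, so UTF-16 must represent each one as a surrogate pair. A small sketch of the arithmetic:

```python
# U+20000 is the first CJK Extension B ideograph, outside the original
# 16-bit Basic Multilingual Plane; UTF-16 encodes it as a surrogate pair.
cp = 0x20000
v = cp - 0x10000
high = 0xD800 + (v >> 10)      # lead (high) surrogate
low = 0xDC00 + (v & 0x3FF)     # trail (low) surrogate
print(hex(high), hex(low))     # 0xd840 0xdc00
# Python's UTF-16 codec performs the same computation:
assert chr(cp).encode("utf-16-be").hex() == "d840dc00"
```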
The article "The secret life of Unicode", published on IBM DeveloperWorks, attempts to illustrate part of the motivation for Han unification.
In fact, the three ideographs for "one" (一, 壹, and 壱) are encoded separately in Unicode, as they are not considered national variants. The first is the common form in all three countries, while the second and third are used on financial instruments to prevent tampering (they may be considered variants).
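This separate encoding is easy to confirm: each of the three forms (assuming the commonly cited example characters 一, 壹 and 壱) carries its own code point and character name.

```python
import unicodedata

# Common form, Chinese financial form, and Japanese financial form of "one":
# three separate code points, not unified variants of one character.
# (Assumes the commonly cited example characters.)
forms = {"一": 0x4E00, "壹": 0x58F9, "壱": 0x58F1}
for ch, expected in forms.items():
    print(f"U+{ord(ch):04X}", unicodedata.name(ch))
    assert ord(ch) == expected
```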
However, Han unification has also caused considerable controversy, particularly among the Japanese public, who, with the nation's literati, have a history of protesting the culling of historically and culturally significant variants. (Today, the list of characters officially recognized for use in proper names continues to expand at a modest pace.)
In 1993, the Japan Electronic Industries Development Association (JEIDA) published a pamphlet (translated as "We are feeling anxious about the future character encoding system") summarizing the major criticisms of the Han unification approach adopted by Unicode.
Aditya Mukerjee criticized the effort as an attempt to create an artificial, limited set of characters rather than to fully recognize the diversity of Asian languages, and compared Han unification to a hypothetical unification of European alphabets on the grounds that they, including English and Russian, share the same Greek root. He also pointed to the fast-growing emoji subset of Unicode, which leads to the absurd situation that he can type the 💩 U+1F4A9 PILE OF POO character, which has its own code point, but cannot properly type his first name in Bengali without resorting to substitute characters.
Graphemes versus glyphs
A grapheme is the smallest abstract unit of meaning in a writing system. Any grapheme has many possible glyph expressions, but all are recognized as the same grapheme by those with reading and writing knowledge of a particular writing system. Although Unicode typically assigns characters to code points to express the graphemes within a system of writing, the Unicode Standard (section 3.4 D7) does caution:
An abstract character does not necessarily correspond to what a user thinks of as a "character" and should not be confused with a grapheme.
However, this quote refers to the fact that some graphemes are composed of several characters. So, for example, the character
combined with (i.e. the combination "å") might be understood by a user as a single grapheme while being composed of multiple Unicode abstract characters. In addition, Unicode also assigns some code points to a small number (other than for compatibility reasons) of formatting characters, whitespace characters, and other abstract characters that are not graphemes, but instead used to control the breaks between lines, words, graphemes and grapheme clusters. With the unified Han ideographs, the Unicode Standard makes a departure from prior practices in assigning abstract characters not as graphemes, but according to the underlying meaning of the grapheme: what linguists sometimes call sememes. This departure therefore is not simply explained by the oft quoted distinction between an abstract character and a glyph, but is more rooted in the difference between an abstract character assigned as a grapheme and an abstract character assigned as a sememe. In contrast, consider ASCII's unification of punctuation and diacritics, where graphemes with widely different meanings (for example, an apostrophe and a single quotation mark) are unified because the glyphs are the same. For Unihan the characters are not unified by their appearance, but by their definition or meaning.
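The "å" example can be checked directly with Python's `unicodedata` module: the precomposed character and the two-character combining sequence are different code point sequences that normalize to the same grapheme.

```python
import unicodedata

precomposed = "\u00E5"       # "å" as a single code point
decomposed = "a\u030A"       # "a" followed by COMBINING RING ABOVE
print(len(precomposed), len(decomposed))   # 1 2
# Distinct code point sequences, but canonically equivalent:
assert precomposed != decomposed
assert unicodedata.normalize("NFC", decomposed) == precomposed
assert unicodedata.normalize("NFD", precomposed) == decomposed
```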
For a grapheme to be represented by various glyphs means that the grapheme has glyph variations that are usually determined by selecting one font or another or using glyph substitution features where multiple glyphs are included in a single font. Such glyph variations are considered by Unicode a feature of rich text protocols and not properly handled by the plain text goals of Unicode. However, when the change from one glyph to another constitutes a change from one grapheme to another—where a glyph cannot possibly still, for example, mean the same grapheme understood as the small letter "a"—Unicode separates those into separate code points. For Unihan the same thing is done whenever the abstract meaning changes, however rather than speaking of the abstract meaning of a grapheme (the letter "a"), the unification of Han ideographs assigns a new code point for each different meaning—even if that meaning is expressed by distinct graphemes in different languages. Although a grapheme such as "ö" might mean something different in English (as used in the word "coördinated") than it does in German, it is still the same grapheme and can be easily unified so that English and German can share a common abstract Latin writing system (along with Latin itself). This example also points to another reason that "abstract character" and grapheme as an abstract unit in a written language do not necessarily map one-to-one. In English the combining diaeresis, "¨", and the "o" it modifies may be seen as two separate graphemes, whereas in languages such as Swedish, the letter "ö" may be seen as a single grapheme. Similarly in English the dot on an "i" is understood as a part of the "i" grapheme whereas in other languages, such as Turkish, the dot may be seen as a separate grapheme added to the dotless "ı".
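The Turkish case is visible in Unicode's default case mappings, which treat the dot as part of "i" and therefore require separate dotless and dotted-capital characters:

```python
# Dotless ı (U+0131) and dotted capital İ (U+0130) exist precisely because
# the default case mappings bake the dot into "i".
print("ı".upper())          # "I": dotless lowercase pairs with the plain capital
print("İ".lower())          # "i" + U+0307 COMBINING DOT ABOVE (SpecialCasing)
assert "ı".upper() == "I"
assert "İ".lower() == "i\u0307" and len("İ".lower()) == 2
```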
To deal with the use of different graphemes for the same Unihan sememe, Unicode has relied on several mechanisms, especially as they relate to rendering text. One has been to treat it as simply a font issue, so that different fonts might be used to render Chinese, Japanese or Korean. Also, font formats such as OpenType allow for the mapping of alternate glyphs according to language, so that a text rendering system can look to the user's environment settings to determine which glyph to use. The problem with these approaches is that they fail to meet the goals of Unicode to define a consistent way of encoding multilingual text.
So rather than treat the issue as a rich text problem of glyph alternates, Unicode added the concept of variation selectors, first introduced in version 3.2 and supplemented in version 4.0. While variation selectors are treated as combining characters, they have no associated diacritic or mark. Instead, by combining with a base character, they signal that the two-character sequence selects a variation (typically in terms of grapheme, but also in terms of underlying meaning, as in the case of a location name or other proper noun) of the base character. This then is not a selection of an alternate glyph, but the selection of a grapheme variation or a variation of the base abstract character. Such a two-character sequence, however, can be easily mapped to a separate single glyph in modern fonts. Since Unicode has assigned 256 separate variation selectors, it is capable of assigning 256 variations for any Han ideograph. Such variations can be specific to one language or another and enable the encoding of plain text that includes such grapheme variations.
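In plain text, a variation sequence is simply the base character followed by a selector. A sketch using one of the supplementary ideographic selectors (the particular base/selector pairing here is illustrative; a distinct glyph only appears with a supporting font):

```python
import unicodedata

base = "\u845B"              # 葛, a base ideograph
vs17 = "\U000E0100"          # VARIATION SELECTOR-17, first of U+E0100..U+E01EF
seq = base + vs17            # two code points, intended to render as one glyph
print(len(seq), unicodedata.name(vs17))
assert len(seq) == 2
assert unicodedata.name(vs17) == "VARIATION SELECTOR-17"
```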
Unihan "abstract characters"
Since the Unihan standard encodes "abstract characters", not "glyphs", the graphical artifacts produced by Unicode have been considered temporary technical hurdles, and at most, cosmetic. However, again, particularly in Japan, due in part to the way in which Chinese characters were incorporated into Japanese writing systems historically, the inability to specify a particular variant was considered a significant obstacle to the use of Unicode in scholarly work. For example, the unification of "grass" (explained above), means that a historical text cannot be encoded so as to preserve its peculiar orthography. Instead, for example, the scholar would be required to locate the desired glyph in a specific typeface in order to convey the text as written, defeating the purpose of a unified character set. Unicode has responded to these needs by assigning variation selectors so that authors can select grapheme variations of particular ideographs (or even other characters).
Small differences in graphical representation are also problematic when they affect legibility or belong to the wrong cultural tradition. Besides making some Unicode fonts unusable for texts involving multiple "Unihan languages", names or other orthographically sensitive terminology might be displayed incorrectly. (Proper names tend to be especially orthographically conservative—compare this to changing the spelling of one's name to suit a language reform in the US or UK.) While this may be considered primarily a graphical representation or rendering problem to be overcome by more artful fonts, the widespread use of Unicode would make it difficult to preserve such distinctions. The problem of one character representing semantically different concepts is also present in the Latin part of Unicode. The Unicode character for an apostrophe is the same as the character for a right single quote (’). On the other hand, the capital Latin letter A is not unified with the Greek letter Α or the Cyrillic letter А. This is, of course, desirable for reasons of compatibility, and deals with a much smaller alphabetic character set.
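The Latin/Greek/Cyrillic contrast is directly observable: three visually near-identical capital letters keep three code points, while the apostrophe and the right single quotation mark share U+2019.

```python
# Latin A, Greek capital Alpha, and Cyrillic capital A look alike but are
# deliberately not unified; the apostrophe and right single quote are.
latin, greek, cyrillic = "A", "\u0391", "\u0410"
for script, ch in [("Latin", latin), ("Greek", greek), ("Cyrillic", cyrillic)]:
    print(script, f"U+{ord(ch):04X}")
print(hex(ord("\u2019")))   # RIGHT SINGLE QUOTATION MARK, also used as apostrophe
```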
While the unification aspect of Unicode is controversial in some quarters for the reasons given above, Unicode itself does now encode a vast number of seldom-used characters of a more-or-less antiquarian nature.
Some of the controversy stems from the fact that the very decision of performing Han unification was made by the initial Unicode Consortium, which at the time was a consortium of North American companies and organizations (most of them in California), but included no East Asian government representatives. The initial design goal was to create a 16-bit standard, and Han unification was therefore a critical step for avoiding tens of thousands of character duplications. This 16-bit requirement was later abandoned, making the size of the character set less of an issue today.
The controversy later extended to the internationally representative ISO: the initial CJK Joint Research Group (CJK-JRG) favored a proposal (DIS 10646) for a non-unified character set, "which was thrown out in favor of unification with the Unicode Consortium's unified character set by the votes of American and European ISO members" (even though the Japanese position was unclear). Endorsing the Unicode Han unification was a necessary step for the heated ISO 10646/Unicode merger.
Much of the controversy surrounding Han unification is based on the distinction between glyphs, as defined in Unicode, and the related but distinct idea of graphemes. Unicode assigns abstract characters (graphemes), as opposed to glyphs, which are particular visual representations of a character in a specific typeface. One character may be represented by many distinct glyphs, for example a "g" or an "a", both of which may have one loop (, ) or two (, ). Yet for a reader of languages written in the Latin script, the two variations of the "a" character are both recognized as the same grapheme. Graphemes present in national character code standards have been added to Unicode, as required by Unicode's Source Separation rule, even where they can be composed of characters already available. The national character code standards existing in CJK languages are considerably more involved, given the technological limitations under which they evolved, and so the official CJK participants in Han unification may well have been amenable to reform.
Unlike fonts for European scripts, CJK Unicode fonts, due to Han unification, have large but irregular patterns of overlap, requiring language-specific fonts. Unfortunately, language-specific fonts also make it difficult to access a variant which, as with the "grass" example, happens to appear more typically in another language style. (That is to say, it would be difficult to access "grass" with the four-stroke radical more typical of Traditional Chinese in a Japanese environment, whose fonts would typically depict the three-stroke radical.) Unihan proponents tend to favor markup languages for defining language strings, but this would not ensure the use of a specific variant in the case given, only the language-specific font more likely to depict a character as that variant. (At this point, merely stylistic differences do enter in, as a selection of Japanese and Chinese fonts are not likely to be visually compatible.)
Chinese users seem to have fewer objections to Han unification, largely because Unicode did not attempt to unify Simplified Chinese characters with Traditional Chinese characters. (Simplified Chinese characters are used among Chinese speakers in the People's Republic of China, Singapore, and Malaysia. Traditional Chinese characters are used in Hong Kong and Taiwan (Big5), and they are, with some differences, more familiar to Korean and Japanese users.) Unicode is seen as neutral with regards to this politically charged issue, and has encoded Simplified and Traditional Chinese glyphs separately (e.g. the ideograph for "discard" is 丟 U+4E1F for Traditional Chinese, Big5 #A5E1, and 丢 U+4E22 for Simplified Chinese, GB #2210). It is also noted that Traditional and Simplified characters should be encoded separately according to Unicode Han unification rules, because they are distinguished in pre-existing PRC character sets. Furthermore, as with other variants, the relationship between Traditional and Simplified characters is not one-to-one.
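The separate encoding of this pair can be verified with Python's standard codecs; the byte values below follow directly from the Big5 and GB code positions cited above (a GB2312 row/cell maps to bytes by adding 0xA0 to each number):

```python
# Traditional 丟 (U+4E1F) lives in Big5; Simplified 丢 (U+4E22) lives in GB2312.
print("\u4e1f".encode("big5"))    # b'\xa5\xe1'  (Big5 #A5E1)
print("\u4e22".encode("gb2312"))  # b'\xb6\xaa'  (GB row 22, cell 10)

# The traditional form has no GB2312 code point (it only appears in the
# later GBK/GB18030 supersets), so a strict GB2312 encode fails:
try:
    "\u4e1f".encode("gb2312")
except UnicodeEncodeError:
    print("U+4E1F is not encodable in GB2312")
```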
Alternatives
There are several alternative character sets that do not encode according to the principle of Han unification, and are thus free from its restrictions:
CNS character set
CCCII character set
TRON
Mojikyo
The following character sets are also seen as unaffected by Han unification because of their region-specific nature:
ISO/IEC 2022 (based on sequence codes to switch between Chinese, Japanese, Korean character sets – hence without unification)
Big5 extensions
GCCS and its successor HKSCS
However, none of these alternative standards has been as widely adopted as Unicode, which is now the base character set for many new standards and protocols, internationally adopted, and is built into the architecture of operating systems (Microsoft Windows, Apple macOS, and many Unix-like systems), programming languages (Perl, Python, C#, Java, Common Lisp, APL, C, C++), and libraries (IBM International Components for Unicode (ICU) along with the Pango, Graphite, Scribe, Uniscribe, and ATSUI rendering engines), font formats (TrueType and OpenType) and so on.
In March 1989, a (B)TRON-based system was adopted by the Japanese government organization "Center for Educational Computing" as the system of choice for school education, including compulsory education. However, in April, a report titled "1989 National Trade Estimate Report on Foreign Trade Barriers" from the Office of the United States Trade Representative specifically listed the system as a trade barrier in Japan. The report claimed that the adoption of the TRON-based system by the Japanese government was advantageous to Japanese manufacturers and thus excluded US operating systems from the huge new market; specifically, the report listed MS-DOS, OS/2 and UNIX as examples. The Office of the USTR was allegedly under Microsoft's influence, as its former officer Tom Robertson was then offered a lucrative position by Microsoft. While the TRON system itself was subsequently removed from the list of sanctions under Section 301 of the Trade Act of 1974 after protests by the organization in May 1989, the trade dispute caused the Ministry of International Trade and Industry to accept a request from Masayoshi Son to cancel the Center for Educational Computing's selection of the TRON-based system for educational computers. The incident is regarded as a symbolic event for the loss of momentum and eventual demise of the BTRON system, which led to the widespread adoption of MS-DOS in Japan and the eventual adoption of Unicode with its successor Windows.
Merger of all equivalent characters
There has not been any push for full semantic unification of all semantically-linked characters, though the idea would treat the respective users of East Asian languages the same, whether they write in Korean, Simplified Chinese, Traditional Chinese, Kyūjitai Japanese, Shinjitai Japanese or Vietnamese. Instead of some variants getting distinct code points while other groups of variants have to share single code points, all variants could be reliably expressed only with metadata tags (e.g., CSS formatting in webpages). The burden would be on all those who use differing versions of , , , , whether that difference be due to simplification, international variance or intra-national variance. However, for some platforms (e.g., smartphones), a device may come with only one font pre-installed. The system font must make a decision for the default glyph for each code point and these glyphs can differ greatly, indicating different underlying graphemes.
Consequently, relying on language markup across the board as an approach is beset with two major issues. First, there are contexts where language markup is not available (code commits, plain text). Second, any solution would require every operating system to come pre-installed with many glyphs for semantically identical characters that have many variants. In addition to the standard character sets in Simplified Chinese, Traditional Chinese, Korean, Vietnamese, Kyūjitai Japanese and Shinjitai Japanese, there also exist "ancient" forms of characters that are of interest to historians, linguists and philologists.
Unicode's Unihan database has already drawn connections between many characters: it catalogs the relationships between variant characters that have distinct code points. However, for characters with a shared code point, the reference glyph image is usually biased toward the Traditional Chinese version. Also, the decision of whether to classify pairs as semantic variants or z-variants is not always consistent or clear, despite rationalizations in the handbook.
So-called semantic variants of 丟 (U+4E1F) and 丢 (U+4E22) are examples that Unicode gives as differing in a significant way in their abstract shapes, while Unicode lists and as z-variants, differing only in font styling. Paradoxically, Unicode considers and to be near-identical z-variants while at the same time classifying them as significantly different semantic variants. There are also cases of some pairs of characters being simultaneously semantic variants, specialized semantic variants and simplified variants: 個 (U+500B) and 个 (U+4E2A). There are cases of non-mutual equivalence. For example, the Unihan database entry for 亀 (U+4E80) considers 龜 (U+9F9C) to be its z-variant, but the entry for 龜 does not list 亀 as a z-variant, even though 亀 was obviously already in the database at the time that the entry for 龜 was written.
Some clerical errors led to the doubling of completely identical characters, such as (U+FA23) and (U+27EAF). If a font has glyphs encoded to both points, so that one font is used for both, they should appear identical. These cases are listed as z-variants despite having no variance at all. Intentionally duplicated characters were added to facilitate bit-for-bit round-trip conversion. Because round-trip conversion was an early selling point of Unicode, this meant that if a national standard in use unnecessarily duplicated a character, Unicode had to do the same. Unicode calls these intentional duplications "compatibility variants", as with 漢 (U+FA9A), which calls 漢 (U+6F22) its compatibility variant. As long as an application uses the same font for both, they should appear identical. Sometimes, as in the case of 車 with U+8ECA and U+F902, the added compatibility character lists the already present version of 車 as both its compatibility variant and its z-variant. The compatibility variant field overrides the z-variant field, forcing normalization under all forms, including canonical equivalence. Despite the name, compatibility variants are actually canonically equivalent and are united in any Unicode normalization scheme, not only under compatibility normalization. This is similar to how a letter followed by a combining ring is canonically equivalent to the pre-composed letter Å. Much software (such as the MediaWiki software that hosts Wikipedia) will replace all canonically equivalent characters that are discouraged (e.g. the angstrom symbol) with the recommended equivalent. Despite the name, CJK "compatibility variants" are canonically equivalent characters and not compatibility characters.
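This canonical folding is easy to observe with Python's `unicodedata` module: every normalization form, not just the compatibility ones, replaces the CJK compatibility ideograph with its unified counterpart, exactly as the angstrom sign and combining sequences are folded:

```python
import unicodedata

# CJK "compatibility variants" are canonically equivalent singletons,
# so they are folded by every normalization form:
for form in ("NFC", "NFD", "NFKC", "NFKD"):
    assert unicodedata.normalize(form, "\uFA9A") == "\u6F22"  # U+FA9A -> U+6F22

# The angstrom sign behaves the same way:
print(unicodedata.normalize("NFC", "\u212B") == "\u00C5")  # True

# A letter plus combining ring above composes to the pre-composed letter:
print(unicodedata.normalize("NFC", "A\u030A") == "\u00C5")  # True
```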
漢 (U+FA9A) was added to the database later than 漢 (U+6F22) was, and its entry informs the user of the compatibility information. On the other hand, 漢 (U+6F22) does not have this equivalence listed in its entry. Unicode demands that all entries, once admitted, cannot change compatibility or equivalence, so that normalization rules for already existing characters do not change.
Some pairs of Traditional and Simplified characters are also considered to be semantic variants. According to Unicode's definitions, it makes sense that all simplifications (that do not result in wholly different characters being merged for their homophony) will be a form of semantic variant. Unicode classifies and as each other's respective traditional and simplified variants and also as each other's semantic variants. However, while Unicode classifies 億 (U+5104) and 亿 (U+4EBF) as each other's respective traditional and simplified variants, it does not consider them to be semantic variants of each other.
Unicode claims that "Ideally, there would be no pairs of z-variants in the Unicode Standard." This would make it seem that the goal is to at least unify all minor variants, compatibility redundancies and accidental redundancies, leaving the differentiation to fonts and to language tags. This conflicts with the stated goal of Unicode to take away that overhead, and to allow any number of any of the world's scripts to be on the same document with one encoding system. Chapter One of the handbook states that "With Unicode, the information technology industry has replaced proliferating character sets with data stability, global interoperability and data interchange, simplified software, and reduced development costs. While taking the ASCII character set as its starting point, the Unicode Standard goes far beyond ASCII's limited ability to encode only the upper- and lowercase letters A through Z. It provides the capacity to encode all characters used for the written languages of the world – more than 1 million characters can be encoded. No escape sequence or control code is required to specify any character in any language. The Unicode character encoding treats alphabetic characters, ideographic characters, and symbols equivalently, which means they can be used in any mixture and with equal facility."
That leaves us with settling on one unified reference grapheme for all z-variants, which is contentious, since few outside of Japan would recognize and as equivalent. Even within Japan, the variants are on different sides of a major simplification called Shinjitai. Unicode would effectively make the PRC's simplification of 侶 (U+4FB6) to 侣 (U+4FA3) a monumental difference by comparison. Such a plan would also eliminate the very visually distinct variations for characters like 直 (U+76F4) and 雇 (U+96C7).
One would expect that all simplified characters would simultaneously also be z-variants or semantic variants with their traditional counterparts, but many are neither. It is easier to explain the strange case that semantic variants can be simultaneously both semantic variants and specialized variants when Unicode's definition is that specialized semantic variants have the same meaning only in certain contexts. Languages use them differently. A pair whose characters are 100% drop-in replacements for each other in Japanese may not be so flexible in Chinese. Thus, any comprehensive merger of recommended code points would have to maintain some variants that differ only slightly in appearance even if the meaning is 100% the same for all contexts in one language, because in another language the two characters may not be 100% drop-in replacements.
Examples of language-dependent glyphs
In each row of the following table, the same character is repeated in all six columns. However, each column is marked (by the lang attribute) as being in a different language: Chinese (simplified and two types of traditional), Japanese, Korean, or Vietnamese. The browser should select, for each character, a glyph (from a font) suitable to the specified language. (Besides actual character variation—look for differences in stroke order, number, or direction—the typefaces may also reflect different typographical styles, as with serif and non-serif alphabets.) This only works for fallback glyph selection if you have CJK fonts installed on your system and the font selected to display this article does not include glyphs for these characters.
No character variant that is exclusive to Korean or Vietnamese has received its own code point, whereas almost all Shinjitai Japanese variants or Simplified Chinese variants each have distinct code points and unambiguous reference glyphs in the Unicode standard.
In the twentieth century, East Asian countries made their own respective encoding standards. Within each standard, there coexisted variants with distinct code points, hence the distinct code points in Unicode for certain sets of variants. Taking Simplified Chinese as an example, the two character variants 內 (U+5167) and 内 (U+5185) differ in exactly the same way as do the Korean and non-Korean variants of 全 (U+5168). Each respective variant of the first character contains either 入 (U+5165) or 人 (U+4EBA), and so does each respective variant of the second character. Both variants of the first character got their own distinct code points. However, the two variants of the second character had to share the same code point.
The justification Unicode gives is that the national standards body in the PRC made distinct code points for the two variations of the first character 內/内, whereas Korea never made separate code points for the different variants of 全. There is a reason for this that has nothing to do with how the domestic bodies view the characters themselves. China went through a process in the twentieth century that changed (if not simplified) several characters. During this transition, there was a need to be able to encode both variants within the same document. Korean has always used the variant of 全 with the 入 (U+5165) radical on top, and therefore had no reason to encode both variants; Korean-language documents made in the twentieth century had little reason to represent both versions in the same document.
Almost all of the variants that the PRC developed or standardized got distinct code points, owing simply to the fortune of the Simplified Chinese transition carrying through into the computing age. This privilege, however, seems to have applied inconsistently, even though most simplifications performed in Japan and mainland China with code points in national standards, including characters simplified differently in each country, did make it into Unicode as distinct code points.
Sixty-two Shinjitai "simplified" characters with distinct code points in Japan got merged with their Kyūjitai traditional equivalents, like 海. This can cause problems for the language tagging strategy. There is no universal tag for the traditional and "simplified" versions of Japanese as there is for Chinese. Thus, any Japanese writer wanting to display the Kyūjitai form of 海 may have to tag the character as "Traditional Chinese" or trust that the recipient's Japanese font uses only the Kyūjitai glyphs, but tags of Traditional Chinese and Simplified Chinese may be necessary to show the two forms side by side in a Japanese textbook. This would, however, preclude one from using the same font for an entire document. There are two distinct code points for 海 in Unicode, but only for "compatibility reasons". Any Unicode-conformant font must display the Kyūjitai and Shinjitai versions' equivalent code points in Unicode as the same. Unofficially, a font may display 海 differently, with 海 (U+6D77) as the Shinjitai version and 海 (U+FA45) as the Kyūjitai version (which is identical to the traditional version in written Chinese and Korean).
The radical 糸 (U+7CF8) is used in characters like 紅/红, with two variants, the second form being simply the cursive form. The radical components of 紅 (U+7D05) and 红 (U+7EA2) are semantically identical, and the glyphs differ only in the latter using a cursive version of the component. However, in mainland China, the standards bodies wanted to standardize the cursive form when used in characters like 紅. Because this change happened relatively recently, there was a transition period. Both 紅 (U+7D05) and 红 (U+7EA2) got separate code points in the PRC's text encoding standards, so Chinese-language documents could use both versions. The two variants received distinct code points in Unicode as well.
The case of the radical 艸 (U+8278) shows how arbitrary the state of affairs is. When used to compose characters like 草 (U+8349), the radical was placed at the top, but had two different forms. Traditional Chinese and Korean use a four-stroke version; at the top of 草 should be something that looks like two plus signs (). Simplified Chinese, Kyūjitai Japanese and Shinjitai Japanese use a three-stroke version, like two plus signs sharing their horizontal strokes (, i.e. ). The PRC's text encoding bodies did not encode the two variants differently. The fact that almost every other change brought about by the PRC, no matter how minor, did warrant its own code point suggests that this exception may have been unintentional. Unicode copied the existing standards as is, preserving such irregularities.
The Unicode Consortium has recognized errors in other instances. The myriad Unicode blocks for CJK Han Ideographs have redundancies in original standards, redundancies brought about by flawed importation of the original standards, as well as accidental mergers that are later corrected, providing precedent for dis-unifying characters.
For native speakers, variants can be unintelligible or be unacceptable in educated contexts. English speakers may understand a handwritten note saying "4P5 kg" as "495 kg", but writing the nine backwards (so it looks like a "P") can be jarring and would be considered incorrect in any school. Likewise, to users of one CJK language reading a document with "foreign" glyphs: variants of can appear as mirror images, can be missing a stroke/have an extraneous stroke, and may be unreadable or be confused with depending on which variant of (e.g. ) is used.
Examples of some non-unified Han ideographs
In some cases, often where the changes are the most striking, Unicode has encoded variant characters, making it unnecessary to switch between fonts or lang attributes. However, some variants with arguably minimal differences get distinct code points, while not every variant with arguably substantial changes gets a unique code point. As an example, take a character such as 入 (U+5165), for which the only way to display the variants is to change font (or lang attribute) as described in the previous table. On the other hand, for 內 (U+5167), the variant 内 (U+5185) gets a unique code point. For some characters, like 兌/兑 (U+514C/U+5151), either method can be used to display the different glyphs. In the following table, each row compares variants that have been assigned different code points. For brevity, note that shinjitai variants with different components will usually (and unsurprisingly) take unique code points (e.g., 氣/気). They will not appear here, nor will the simplified Chinese characters that take consistently simplified radical components (e.g., 紅/红, 語/语). This list is not exhaustive.
Ideographic Variation Database (IVD)
To address issues raised by Han unification, a Unicode Technical Standard known as the Ideographic Variation Database (IVD) was created to allow a specific glyph to be specified in a plain-text environment. By registering glyph collections in the IVD, Ideographic Variation Selectors can be combined with base characters to form Ideographic Variation Sequences (IVS) that specify or restrict the appropriate glyph in text processing in a Unicode environment.
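A minimal illustration in Python: an IVS is simply a base code point followed by one of the 240 ideographic variation selectors (U+E0100 to U+E01EF), and the selector survives normalization, so the variant request is preserved in plain text. The base character 葛 (U+845B) is used here on the assumption that it has registered sequences in an IVD collection; the mechanics shown do not depend on that.

```python
import unicodedata

base = "\u845B"      # 葛, used here as the base ideograph
vs17 = "\U000E0100"  # VARIATION SELECTOR-17, the first ideographic selector
ivs = base + vs17    # an Ideographic Variation Sequence: two code points

print(len(ivs))                 # 2 (base character plus selector)
print(unicodedata.name(vs17))   # VARIATION SELECTOR-17
# The selector is preserved by normalization, so the variant request
# survives ordinary plain-text processing:
print(unicodedata.normalize("NFC", ivs) == ivs)  # True
```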
Unicode ranges
Ideographic characters assigned by Unicode appear in the following blocks:
CJK Unified Ideographs (4E00–9FFF) (Otherwise known as URO, abbreviation of Unified Repertoire and Ordering)
CJK Unified Ideographs Extension A (3400–4DBF)
CJK Unified Ideographs Extension B (20000–2A6DF)
CJK Unified Ideographs Extension C (2A700–2B73F)
CJK Unified Ideographs Extension D (2B740–2B81F)
CJK Unified Ideographs Extension E (2B820–2CEAF)
CJK Unified Ideographs Extension F (2CEB0–2EBEF)
CJK Unified Ideographs Extension G (30000–3134F)
CJK Compatibility Ideographs (F900–FAFF) (the twelve characters at FA0E, FA0F, FA11, FA13, FA14, FA1F, FA21, FA23, FA24, FA27, FA28 and FA29 are actually "unified ideographs" not "compatibility ideographs")
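The ranges above can be turned into a small lookup table; a sketch in Python (the short block labels are informal abbreviations, not official Unicode block names):

```python
# Code point ranges for the unified-ideograph blocks listed above.
CJK_BLOCKS = [
    (0x4E00, 0x9FFF, "CJK Unified Ideographs (URO)"),
    (0x3400, 0x4DBF, "Extension A"),
    (0x20000, 0x2A6DF, "Extension B"),
    (0x2A700, 0x2B73F, "Extension C"),
    (0x2B740, 0x2B81F, "Extension D"),
    (0x2B820, 0x2CEAF, "Extension E"),
    (0x2CEB0, 0x2EBEF, "Extension F"),
    (0x30000, 0x3134F, "Extension G"),
]

def cjk_block(char):
    """Return the unified-ideograph block containing char, or None."""
    cp = ord(char)
    for lo, hi, name in CJK_BLOCKS:
        if lo <= cp <= hi:
            return name
    return None

print(cjk_block("\u6F22"))      # CJK Unified Ideographs (URO)
print(cjk_block("\U00020000"))  # Extension B
print(cjk_block("A"))           # None
```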
Unicode includes support of CJKV radicals, strokes, punctuation, marks and symbols in the following blocks:
CJK Radicals Supplement (2E80–2EFF)
CJK Strokes (31C0–31EF)
CJK Symbols and Punctuation (3000–303F)
Ideographic Description Characters (2FF0–2FFF)
Additional compatibility (discouraged use) characters appear in these blocks:
CJK Compatibility (3300–33FF)
CJK Compatibility Forms (FE30–FE4F)
CJK Compatibility Ideographs (F900–FAFF)
CJK Compatibility Ideographs Supplement (2F800–2FA1F)
Enclosed CJK Letters and Months (3200–32FF)
Enclosed Ideographic Supplement (1F200–1F2FF)
Kangxi Radicals (2F00–2FDF)
These compatibility characters (excluding the twelve unified ideographs in the CJK Compatibility Ideographs block) are included for compatibility with legacy text handling systems and other legacy character sets. They include forms of characters for vertical text layout and rich text characters that Unicode recommends handling through other means.
International Ideographs Core
The International Ideographs Core (IICore) is a subset of 9810 ideographs derived from the CJK Unified Ideographs tables, designed to be implemented in devices with limited memory and input/output capability, and in applications where use of the complete ISO 10646 ideograph repertoire is not feasible.
Unihan database files
The Unihan project has always made an effort to make its database files available.
The libUnihan project provides a normalized SQLite Unihan database and corresponding C library. All tables in this database are in fifth normal form. libUnihan is released under the LGPL, while its database, UnihanDb, is released under the MIT License.
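The Unihan data files themselves are plain tab-separated text, one property per line, in the form `U+code<TAB>field<TAB>value`; a minimal parser sketch (the sample lines reproduce real Unihan variant data for the "discard" pair discussed earlier):

```python
# Each non-comment line of a Unihan data file has the form:
#   U+4E1F<TAB>kSimplifiedVariant<TAB>U+4E22
SAMPLE = """\
# comment lines start with '#'
U+4E1F\tkSimplifiedVariant\tU+4E22
U+4E22\tkTraditionalVariant\tU+4E1F
"""

def parse_unihan(text):
    """Yield (code_point, field, value) triples from Unihan-format text."""
    for line in text.splitlines():
        if not line or line.startswith("#"):
            continue
        cp, field, value = line.split("\t", 2)
        yield int(cp[2:], 16), field, value

for cp, field, value in parse_unihan(SAMPLE):
    print(f"U+{cp:04X} {field} {value}")
# U+4E1F kSimplifiedVariant U+4E22
# U+4E22 kTraditionalVariant U+4E1F
```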
See also
Chinese character encoding
GB 18030
Sinicization
Z-variant
List of CJK fonts
Allography
Variant Chinese character
Notes
References
Chinese-language computing
Encodings of Japanese
Korean-language computing
Unicode
Natural language and computing
Character encoding |
5614209 | https://en.wikipedia.org/wiki/Nodal%20%28software%29 | Nodal (software) | Nodal is a generative software application for composing music. The software was produced at the Centre for Electronic Media Art (CEMA), Monash University, Australia. It uses a novel method for the notation and playing of MIDI based music. This method is based around the concept of a user-defined graph. The graph consists of nodes (musical events) and edges (connections between events). The composer interactively defines the graph, which is then traversed by any number of virtual players that play the musical events as they encounter them on the graph. The time taken by a player to travel from one node to another is based on the length of the edges that connect the nodes.
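The graph-traversal idea described above can be sketched in a few lines of Python. Everything here (the node/edge structure, the note numbers, timing in abstract beats) is a simplified assumption for illustration, not Nodal's actual data model or file format:

```python
# A toy model of graph-based sequencing: a player walks a directed graph,
# emits each node's MIDI note on arrival, and takes time proportional to
# the edge length to reach the next node.
GRAPH = {
    # node: (midi_note, [(next_node, edge_length_in_beats), ...])
    "A": (60, [("B", 1.0)]),   # C4
    "B": (64, [("C", 0.5)]),   # E4
    "C": (67, [("A", 2.0)]),   # G4
}

def play(start, steps):
    """Traverse the graph, returning (time_in_beats, note) events."""
    events, node, now = [], start, 0.0
    for _ in range(steps):
        note, edges = GRAPH[node]
        events.append((now, note))
        next_node, length = edges[0]  # a real player may choose among edges
        now += length                 # travel time equals the edge length
        node = next_node
    return events

print(play("A", 4))
# [(0.0, 60), (1.0, 64), (1.5, 67), (3.5, 60)]
```

Because the graph is cyclic, the player loops indefinitely, which is how a small network can generate an ongoing stream of musical events.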
Supported Platforms and Versions
Early versions of Nodal were designed to run only on Mac OS X. As of version 1.1 beta (released in 2005), Nodal ran on Mac OS X 10.4 and Microsoft Windows (Vista or XP). As of version 1.5, released in November 2009, the software became shareware in order to support its continued development. The current version is 1.9, released in October 2013, which runs on MacOS 10.6 and higher or Windows Vista, 7 and 8. This version adds the ability to specify combinations of chords, sequences and randomised patterns within a single node, and incorporates the use of scale modes. Nodal can be downloaded from the Nodal web site, and is also available from Apple's Mac App Store.
Working with Nodal
Nodal generates MIDI data as virtual players traverse a user-defined network in real-time. It can be used as a standalone composition tool, in conjunction with Digital audio workstation (DAW) software, or played interactively in a real-time performance.
Nodal contains a built-in MIDI synthesiser and is also compatible with any hardware or software MIDI synthesiser, including all major Digital audio workstation software. Microsoft Windows versions can use the built-in Windows MIDI synthesiser. Nodal is also compatible with Apple's GarageBand software.
Prizes and awards
In 2012, Nodal was awarded the Eureka Prize for Innovation in Computer Science.
See also
List of music software
References
Official site
Computer music software
Music software |
680285 | https://en.wikipedia.org/wiki/Futex | Futex | In computing, a futex (short for "fast userspace mutex") is a kernel system call that programmers can use to implement basic locking, or as a building block for higher-level locking abstractions such as semaphores and POSIX mutexes or condition variables.
A futex consists of a kernelspace wait queue that is attached to an atomic integer in userspace. Multiple processes or threads operate on the integer entirely in userspace (using atomic operations to avoid interfering with one another), and only resort to relatively expensive system calls to request operations on the wait queue (for example to wake up waiting processes, or to put the current process on the wait queue). A properly programmed futex-based lock will not use system calls except when the lock is contended; since most operations do not require arbitration between processes, this will not happen in most cases.
History
On Linux, Hubertus Franke (IBM Thomas J. Watson Research Center), Matthew Kirkwood, Ingo Molnár (Red Hat) and Rusty Russell (IBM Linux Technology Center) originated the futex mechanism. Futexes appeared for the first time in version 2.5.7 of the Linux kernel development series; the semantics stabilized as of version 2.5.40, and futexes have been part of the Linux kernel mainline since the December 2003 release of 2.6.x stable kernel series.
In 2002 discussions took place on a proposal to make futexes accessible via the file system by creating a special node in /dev or /proc. However, Linus Torvalds strongly opposed this idea and rejected any related patches.
Futexes have been implemented in Microsoft Windows since Windows 8 or Windows Server 2012 under the name WaitOnAddress.
In 2013 Microsoft patented futexes and the patent was granted in 2014.
In May 2014 the CVE system announced a vulnerability discovered in the Linux kernel's futex subsystem that allowed denial-of-service attacks or local privilege escalation.
In May 2015 the Linux kernel introduced a deadlock bug via commit b0c29f79ecea that caused a hang in user applications. The bug affected 3.x and 4.x kernels and many enterprise Linux distributions, including Red Hat Enterprise Linux versions 5, 6 and 7, SUSE Linux 12, and Amazon Linux.
Futexes have been implemented in OpenBSD since 2016.
The futex mechanism is one of the core concepts of the Zircon kernel in Google's Fuchsia operating system since at least April 2018.
Operations
Futexes have two basic operations, WAIT and WAKE.
WAIT(addr, val)
If the value stored at the address addr is val, puts the current thread to sleep.
WAKE(addr, num)
Wakes up num number of threads waiting on the address addr.
For more advanced uses there are a number of other operations, the most used being REQUEUE and WAKE_OP, which both function as more generic WAKE operations.
CMP_REQUEUE(old_addr, new_addr, num_wake, num_move, val)
If the value stored at the address old_addr is val, wakes num_wake threads waiting on the address old_addr, and enqueues num_move threads waiting on the address old_addr to now wait on the address new_addr. This can be used to avoid the thundering herd problem on wake.
WAKE_OP(addr1, addr2, num1, num2, op, op_arg, cmp, cmp_arg)
Reads addr2, performs op with op_arg on it, and stores the result back to addr2. Then it wakes num1 threads waiting on addr1 and, if the value previously read from addr2 matches cmp_arg under comparison cmp, wakes num2 threads waiting on addr2. This very flexible and generic wake mechanism is useful for implementing many synchronization primitives.
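The WAIT/WAKE semantics above can be modelled in user space. The sketch below is a teaching model only: it simulates the kernel's per-address wait queue with a condition variable, and, unlike the real syscall, it remembers wakeups issued before any sleeper arrives. The essential detail it does capture is that WAIT re-checks the value before sleeping, which closes the race between the value changing and a waiter enqueueing itself.

```python
import threading

class FutexModel:
    """A toy model of futex WAIT/WAKE over a dict of simulated integers."""

    def __init__(self):
        self.mem = {}                      # simulated user-space integers
        self.cond = threading.Condition()  # stands in for the kernel queue
        self.pending = {}                  # addr -> pending wake tokens

    def wait(self, addr, val):
        """WAIT: sleep only if the value at addr still equals val."""
        with self.cond:
            if self.mem.get(addr, 0) != val:
                return "EAGAIN"            # value changed, caller must retry
            while self.pending.get(addr, 0) == 0:
                self.cond.wait()
            self.pending[addr] -= 1
            return 0

    def wake(self, addr, num):
        """WAKE: allow up to num sleepers on addr to proceed."""
        with self.cond:
            self.pending[addr] = self.pending.get(addr, 0) + num
            self.cond.notify_all()
            return 0

f = FutexModel()
f.mem[0x10] = 7
print(f.wait(0x10, 99))  # EAGAIN: the re-check prevents a stale sleep
```

In a real futex-based lock, the "EAGAIN" path is what lets a thread notice that the lock was released between its failed atomic acquire and its WAIT call, so it retries instead of sleeping forever.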
See also
Synchronization
Fetch-and-add
Compare and swap
References
External links
- futex() system call
- futex semantics and usage
Hubertus Franke, Rusty Russell, Matthew Kirkwood. Fuss, futexes and furwocks: Fast Userlevel Locking in Linux, Ottawa Linux Symposium 2002.
Bert Hubert (2004). Unofficial Futex manpages
Ingo Molnar. "Robust Futexes", Linux Kernel Documentation
"Priority Inheritance Futexes", Linux Kernel Documentation
Concurrency control
Linux kernel features |
56486532 | https://en.wikipedia.org/wiki/William%20Henry%20Carmichael-Smyth | William Henry Carmichael-Smyth | Major William Henry Carmichael-Smyth (30 July 1780 – 9 September 1861) was a British military officer in the service of the East India Company.
Biography
He was born in England in 1780. His father was James Carmichael Smyth, a Scottish physician and he was educated at Charterhouse School.
In 1797, at the age of seventeen, he was commissioned into the Bengal Artillery. On arriving in Bengal that same year he was deployed on a military expedition to the Philippines. When the expedition was abandoned he returned from Penang and served in Allahabad. In 1803 the Second Anglo-Maratha War broke out, and he was present at the battles of Aligarh, Delhi and Laswari. In May 1804 he accompanied a force against Rampoora, and later served at the Battle of Deeg and Siege of Deeg where he was mentioned in dispatches. In 1805 he was present at the Siege of Bharatpur and after it was abandoned he was made garrison engineer at Agra. The following year in 1806 he directed the attack on Gohud. He returned to England in 1807 due to ill health.
Carmichael-Smyth returned to India in 1810 as a captain, and served in the Invasion of Java in 1811. Thereafter he returned to Bengal and went to Callinger as a field engineer where he was mentioned in dispatches for exemplary valour in 1812. He subsequently was deployed on surveys before he was selected to assist in a campaign against Alwar. Following the campaign he returned to his position as garrison engineer at Agra where he would remain until 1819. During the Anglo-Nepalese War between 1814 and 1816 he served in all the operations under Sir David Ochterlony. In February 1817 he assisted in the reduction of Hathras. That same year he joined the army of Lord Hastings in the Third Anglo-Maratha War. On 13 March 1817 at Cawnpore he married Anne Thackeray, the widow of Richmond Thackeray, and became step-father to a young William Makepeace Thackeray. He returned to England in 1820 and was elevated to Major in 1821.
In 1822 he was appointed Resident Superintendent at East India Company Military Seminary in Addiscombe. He remained in the post until he was succeeded by Robert Houston on 6 April 1824. He died in Ayr, Scotland on 9 September 1861.
References
British East India Company Army officers
1780 births
1861 deaths
People educated at Charterhouse School
British military personnel of the Second Anglo-Maratha War
British military personnel of the Third Anglo-Maratha War
People of British India
Bengal Artillery officers |
355011 | https://en.wikipedia.org/wiki/Icon%20%28computing%29 | Icon (computing) | In computing, an icon is a pictogram or ideogram displayed on a computer screen in order to help the user navigate a computer system. The icon itself is a quickly comprehensible symbol of a software tool, function, or a data file, accessible on the system and is more like a traffic sign than a detailed illustration of the actual entity it represents. It can serve as an electronic hyperlink or file shortcut to access the program or data. The user can activate an icon using a mouse, pointer, finger, or recently voice commands. Their placement on the screen, also in relation to other icons, may provide further information to the user about their usage. In activating an icon, the user can move directly into and out of the identified function without knowing anything further about the location or requirements of the file or code.
Icons as parts of the graphical user interface of the computer system, in conjunction with windows, menus and a pointing device (mouse), belong to the much larger topic of the history of the graphical user interface that has largely supplanted the text-based interface for casual use.
Overview
The computing definition of "icon" can include three distinct semiotical elements:
Icon, which resembles its referent (such as a road sign for falling rocks).
This category includes stylized drawings of objects from the office environment or from other professional areas such as printers, scissors, file cabinets and folders.
Index, which is associated with its referent (smoke is a sign of fire).
This category includes stylized drawings used to refer to actions: a "printer" for "print", "scissors" for "cut", or a "magnifying glass" for "search".
Symbol, which is related to its referent only by convention (letters, musical notation, mathematical operators etc.).
This category includes standardized symbols found across many electronic devices, such as the power on/off symbol and the USB icon.
The majority of icons are encoded and decoded using metonymy, synecdoche, and metaphor.
Metaphorical representation characterizes all the major desktop-based computer systems: the desktop metaphor uses iconic representations of objects from the 1980s office environment to transpose attributes from a familiar context or object to an unfamiliar one.
Metonymy is in itself a subset of metaphors that use one entity to point to another related to it such as using a fluorescent bulb instead of a filament one to represent power saving settings.
Synecdoche is considered a special case of metonymy, in the usual sense of the part standing for the whole, such as a single component standing for the entire system (for example, a speaker driver representing the audio system settings).
Additionally, a group of icons can be categorised as brand icons, used to identify commercial software programs and are related to the brand identity of a company or software. These commercial icons serve as functional links on the system to the program or data files created by a specific software provider. Although icons are usually depicted in graphical user interfaces, icons are sometimes rendered in a TUI using special characters such as MouseText or PETSCII.
The design of all computer icons is constricted by the limitations of the device display. They are limited in size, with the standard size about a thumbnail for both desktop computer systems and mobile devices. They are frequently scalable, as they are displayed in different positions in the software, a single icon file such as the Apple Icon Image format can include multiple versions of the same icon optimized to work at a different size, in colour or grayscale as well as on dark and bright backgrounds.
The colors used, of both the image and the icon background, should stand out on different system backgrounds and among each other. The detailing of the icon image needs to be simple, remaining recognizable in varying graphical resolutions and screen sizes. Computer icons are by definition language-independent but often not culturally independent; they do not rely on letters or words to convey their meaning. These visual parameters place rigid limits on the design of icons, frequently requiring the skills of a graphic artist in their development.
Because of their condensed size and versatility, computer icons have become a mainstay of user interaction with electronic media. Icons also provide rapid entry into the system functionality. On most systems, users can create and delete, replicate, select, click or double-click standard computer icons and drag them to new positions on the screen to create a customized user environment.
Types
Standardized electrical device symbols
A series of recurring computer icons are taken from the broader field of standardized symbols used across a wide range of electrical equipment. Examples of these are the power symbol and the USB icon, which are found on a wide variety of electronic devices. The standardization of electronic icons is an important safety feature on all types of electronics, enabling a user to more easily navigate an unfamiliar system. As a subset of electronic devices, computer systems and mobile devices use many of the same icons; they are incorporated into the design of both the computer hardware and the software. On the hardware, these icons identify the functionality of specific buttons and plugs. In the software, they provide a link into the customizable settings.
System warning icons also belong to the broader area of ISO standard warning signs. These warning icons, first designed to regulate automobile traffic in the early 1900s, have become standardized and widely understood by users without the necessity of further verbal explanations. In designing software operating systems, different companies have incorporated and defined these standard symbols as part of their graphical user interface. For example, the Microsoft MSDN defines the standard icon use of error, warning, information and question mark icons as part of their software development guidelines.
Different organizations are actively involved in standardizing these icons, as well as providing guidelines for their creation and use. The International Electrotechnical Commission (IEC) has defined "Graphical symbols for use on equipment", published as IEC 417, a document which displays IEC standardized icons. Another organization invested in the promotion of effective icon usage is the ICT (information and communications technologies), which has published guidelines for the creation and use of icons. Many of these icons are available on the Internet, either to purchase or as freeware to incorporate into new software.
Metaphorical icons
An icon is a Signifier pointing to a Signified.
Easily comprehensible icons make use of familiar visual metaphors directly connected to the Signified: the actions the icon initiates or the content that would be revealed. Metaphor, metonymy and synecdoche are used to encode the meaning in an icon system.
The Signified can have multiple natures: virtual objects such as Files and Applications, actions within a system or an application (e.g. snap a picture, delete, rewind, connect/disconnect etc.), action in the physical world (e.g. print, eject DVD, change volume or brightness etc.) as well as physical objects (e.g. monitor, compact disk, mouse, printer etc.).
The Desktop metaphor
A subgroup of the more visually rich icons is based on objects lifted from a 1970s physical office space and desktop environment. It includes the basic icons used for a file, file folder, trashcan, inbox, together with the spatial real estate of the screen, i.e. the electronic desktop. This model originally enabled users, familiar with common office practices and functions, to intuitively navigate the computer desktop and system. (Desktop Metaphor, pg 2). The icons stand for objects or functions accessible on the system and enable the user to do tasks common to an office space. These desktop computer icons developed over several decades: data files in the 1950s, the hierarchical storage system (i.e. the file folder and filing cabinet) in the 1960s, and finally the desktop metaphor itself (including the trashcan) in the 1970s.
Dr. David Canfield Smith associated the term "icon" with computing in his landmark 1975 PhD thesis "Pygmalion: A Creative Programming Environment". In his work, Dr. Smith envisioned a scenario in which "visual entities", called icons, could execute lines of programming code, and save the operation for later re-execution. Dr. Smith later served as one of the principal designers of the Xerox Star, which became the first commercially available personal computing system based on the desktop metaphor when it was released in 1981. "The icons on [the desktop] are visible concrete embodiments of the corresponding physical objects." The desktop and icons displayed in this first desktop model are easily recognizable by users several decades later, and display the main components of the desktop metaphor GUI.
This model of the desktop metaphor has been adopted by most personal computing systems in the last decades of the 20th century; it remains popular as a "simple intuitive navigation by single user on single system." It is only at the beginning of the 21st century that personal computing is evolving a new metaphor based on Internet connectivity and teams of users, cloud computing. In this new model, data and tools are no longer stored on the single system, instead they are stored someplace else, "in the cloud". The cloud metaphor is replacing the desktop model; it remains to be seen how many of the common desktop icons (file, file folder, trashcan, inbox, filing cabinet) find a place in this new metaphor.
Brand icons for commercial software
A further type of computer icon is more related to the brand identity of the software programs available on the computer system. These brand icons are bundled with their product and installed on a system with the software. They function in the same way as the hyperlink icons described above, representing functionality accessible on the system and providing links to either a software program or data file. Over and beyond this, they act as a company identifier and advertiser for the software or company.
Because these company and program logos represent the company and product itself, much attention is given to their design, done frequently by commercial artists. To regulate the use of these brand icons, they are trademark registered and are considered part of the company intellectual property.
In closed systems such as iOS and Android, the use of icons is to a degree regulated or guided to create a sense of consistency in the UI.
Overlay icons
On some GUI systems (e.g. Windows), on an icon which represents an object (e.g. a file) a certain additional subsystem can add a smaller secondary icon, laid over the primary icon and usually positioned in one of its corners, to indicate the status of the object which is represented with the primary icon. For instance, the subsystem for locking files can add a "padlock" overlay icon on an icon which represents a file in order to indicate that the file is locked.
Placement and spacing
To display the growing number of icons offered on a device, different systems have come up with different solutions for screen space management. The computer monitor continues to display primary icons on the main page or desktop, allowing easy and quick access to the most commonly used functions for a user. This screen space also invites almost immediate user customization, as the user adds favourite icons to the screen and groups related icons together on the screen. Secondary icons of system programs are also displayed on the task bar or the system dock. These secondary icons do not provide a link like the primary icons; instead, they are used to show availability of a tool or file on the system.
Spatial management techniques play a bigger role in mobile devices with their much smaller screen real estate. In response, mobile devices have introduced, among other visual devices, scrolling screen displays and selectable tabs displaying groups of related icons. Even with these evolving display systems, the icons themselves remain relatively constant in both appearance and function.
Above all, the icon itself must remain clearly identifiable on the display screen regardless of its position and size. Programs might display their icon not only as a desktop hyperlink, but also in the program title bar, on the Start menu, in the Microsoft tray or the Apple dock. In each of these locations, the primary purpose is to identify and advertise the program and functionality available. This need for recognition in turn sets specific design restrictions on effective computer icons.
Design
In order to maintain consistency in the look of a device, OS manufacturers offer detailed guidelines for the development and use of icons on their systems. This is true for both standard system icons and third party application icons to be included in the system. The system icons currently in use have typically gone through widespread international acceptance and understandability testing. Icon design factors have also been the topic for extensive usability studies. The design itself involves a high level of skill in combining an attractive graphic design with the required usability features.
Shape
The icon needs to be clear and easily recognizable, able to display on monitors of widely varying size and resolutions. Its shape should be simple with clean lines, without too much detailing in the design. Together with the other design details, the shape also needs to make it unique on the display and clearly distinguishable from other icons.
Color
The icon needs to be colorful enough to easily pick out on the display screen, and contrast well with any background. With the increasing ability to customize the desktop, it is important for the icon itself to display in a standard color which cannot be modified, retaining its characteristic appearance for immediate recognition by the user. Through color it should also provide some visual indicator as to the icon state; activated, available or currently not accessible ("greyed out").
Size and scalability
The standard icon is generally the size of an adult thumb, enabling both easy visual recognition and use in a touchscreen device. For individual devices the display size correlates directly to the size of the screen real estate and the resolution of the display. Because they are used in multiple locations on the screen, the design must remain recognizable at the smallest size, for use in a directory tree or title bar, while retaining an attractive shape in the larger sizes. In addition to scaling, it may be necessary to remove visual details or simplify the subject between discrete sizes. Larger icons serve also as part of the accessibility features for the visually impaired on many computer systems. The width and height of the icon are the same (1:1 aspect ratio) in almost all areas of traditional use.
Motion
Icons can also be augmented with iconographic motion - geometric manipulations applied to a graphical element over time, for example, a scale, rotation, or other deformation. One example is when application icons "wobble" in iOS to convey to the user they are able to be repositioned by being dragged. This is different from an icon with animated graphics, such as a Throbber. In contrast to static icons and icons with animated graphics, kinetic behaviors do not alter the visual content of an element (whereas fades, blurs, tints, and addition of new graphics, such as badges, exclusively alter an icon's pixels). Stated differently, pixels in an icon can be moved, rotated, stretched, and so on - but not altered or added to. Research has shown iconographic motion can act as a powerful and reliable visual cue, a critical property for icons to embody.
Localization
In its primary function as a symbolic image, the icon design should ideally be divorced from any single language. For products which are targeting the international marketplace, the primary design consideration is that the icon is non-verbal; localizing text in icons is costly and time-consuming.
Cultural context
Beyond text, there are other design elements which can be dependent upon the cultural context for interpretation. These include color, numbers, symbols, body parts and hand gestures. Each of these elements needs to be evaluated for their meaning and relevance across all markets targeted by the product.
Related visual tools
Other graphical devices used in the computer user interface fulfill GUI functions on the system similar to the computer icons described above. However each of these related graphical devices differs in one way or another from the standard computer icon.
Windows
The graphical windows on the computer screen share some of the visual and functional characteristics of the computer icon. Windows can be minimized to an icon format to serve as a hyperlink to the window itself. Multiple windows can be open and even overlapping on the screen. However where the icon provides a single button to initiate some function, the principal function of the window is a workspace, which can be minimized to an icon hyperlink when not in use.
Control widgets
Over time, certain GUI widgets have gradually appeared which are useful in many contexts. These are graphical controls which are used across computer systems and can be intuitively manipulated by the user even in a new context because the user recognises them from having seen them in a more familiar context. Examples of these control widgets are scroll bars, sliders, listboxes and buttons used in many programs. Using these widgets, a user is able to define and manipulate the data and the display for the software program they are working with. The first set of computer widgets was originally developed for the Xerox Alto. Now they are commonly bundled in widget toolkits and distributed as part of a development package. These control widgets are standardized pictograms used in the graphical interface, they offer an expanded set of user functionalities beyond the hyperlink function of computer icons.
Emoticons
Another GUI icon is exemplified by the smiley face, a pictogram embedded in a text message. The smiley, and by extension other emoticons, are used in computer text to convey information in a non-verbal binary shorthand, frequently involving the emotional context of the message. These icons were first developed for computers in the 1980s as a response to the limited storage and transmission bandwidth used in electronic messaging. Since then they have become both abundant and more sophisticated in their keyboard representations of varying emotions. They have developed from keyboard character combinations into real icons. They are widely used in all forms of electronic communications, always with the goal of adding context to the verbal content of the message. In adding an emotional overlay to the text, they have also enabled electronic messages to substitute for and frequently supplant voice-to-voice messaging.
These emoticons are very different from the icon hyperlinks described above. They do not serve as links, are not part of any system function or computer software. Instead they are part of the communication language of users across systems. For these computer icons, customization and modifications are not only possible but in fact expected of the user.
Hyperlinks
A text hyperlink performs much the same function as the functional computer icon: it provides a direct link to some function or data available on the system. Although they can be customized, these text hyperlinks generally share a standardized recognizable format, blue text with underlining. Hyperlinks differ from the functional computer icons in that they are normally embedded in text, whereas icons are displayed as stand-alone on the screen real estate. They are also displayed in text, either as the link itself or a friendly name, whereas icons are defined as being primarily non-textual.
Icon creation
Because of the design requirements, icon creation can be a time-consuming and costly process. There are a plethora of icon creation tools to be found on the Internet, ranging from professional level tools through utilities bundled with software development programs to stand-alone freeware. Given this wide availability of icon tools and icon sets, a problem can arise with custom icons which are mismatched in style to the other icons included on the system.
Tools
Icons underwent a change in appearance from the early 8-bit pixel art used pre-2000 to a more photorealistic appearance featuring effects such as softening, sharpening, edge enhancement, a glossy or glass-like appearance, or drop shadows which are rendered with an alpha channel.
Icon editors used on these early platforms usually contain a rudimentary raster image editor capable of modifying images of an icon pixel by pixel, by using simple drawing tools, or by applying simple image filters. Professional icon designers seldom modify icons inside an icon editor and use a more advanced drawing or 3D modeling application instead.
The main function performed by an icon editor is generation of icons from images. An icon editor resamples a source image to the resolution and color depth required for an icon. Other functions performed by icon editors are icon extraction from executable files (exe, dll), creation of icon libraries, or saving individual images of an icon.
All icon editors can make icons for system files (folders, text files, etc.), and for web pages.
These have a file extension of .ICO for Windows and web pages or .ICNS for the Macintosh. If the editor can also make a cursor, the image can be saved with a file extension of .CUR or .ANI for both Windows and the Macintosh. Using a new icon is simply a matter of moving the image into the correct file folder and using the system tools to select the icon. In Windows XP you could go to My Computer, open Tools on the explorer window, choose Folder Options, then File Types, select a file type, click on Advanced and select an icon to be associated with that file type.
Developers also use icon editors to make icons for specific program files. Assignment of an icon to a newly created program is usually done within the Integrated Development Environment used to develop that program. However, if one is creating an application in the Windows API he or she can simply add a line to the program's resource script before compilation. Many icon editors can copy a unique icon from a program file for editing. Only a few can assign an icon to a program file, a much more difficult task.
Simple icon editors and image-to-icon converters are also available online as web applications.
List of tools
This is a list of notable computer icon software.
Axialis IconWorkshop – Supports both Windows and Mac icons. (Commercial, Windows)
IcoFX – Icon editor supporting Windows Vista and Macintosh icons with PNG compression (Commercial, Windows)
IconBuilder – Plug-in for Photoshop; focused on Mac. (Commercial, Windows/Mac)
Microangelo Toolset – a set of tools (Studio, Explorer, Librarian, Animator, On Display) for editing Windows icons and cursors. (Commercial, Windows)
Greenfish icon editor – An icon editor, raster image editor and cursor editor that also supports icon libraries. (Free Software, Windows)
The following is a list of raster graphic applications capable of creating and editing icons:
GIMP – Image Editor Supports reading and writing Windows ICO files and PNG files that can be converted to Mac .icns files. (Open Source, Free Software, Multi-Platform)
ImageMagick and GraphicsMagick – Command-line image conversion and generation tools that can be used to create Windows ICO files and PNG files that can be converted to Mac .ICNS files. (Open Source, Free Software, Multi-Platform)
IrfanView – Support converting graphic file formats into Windows ICO files. (Proprietary, free for non-commercial use, Windows)
ResEdit – Supports creating classic Mac OS icon resources. (Proprietary, Discontinued, Classic Mac OS)
See also
Apple Icon Image format
Distinguishable interfaces
Earcon
Favicon
Font Awesome
ICO (file format)
Icon design
Iconfinder
Resource (Windows)
Semasiography
The Noun Project
Unicode symbols
WIMP (computing)
XPM
References
Further reading
Wolf, Alecia. 2000. "Emotional Expression Online: Gender Differences in Emoticon Use."
Katz, James E., editor (2008). Handbook of Mobile Communication Studies. MIT Press, Cambridge, Massachusetts.
Levine, Philip and Scollon, Ron, editors (2004). Discourse & Technology: Multimodal Discourse Analysis. Georgetown University Press, Washington, D.C.
Abdullah, Rayan and Huebner, Roger (2006). Pictograms, Icons and Signs: A Guide to Information Graphics. Thames & Hudson, London.
Handa, Carolyn (2004). Visual Rhetoric in a Digital World: A Critical Sourcebook. Bedford / St. Martins, Boston.
Zenon W. Pylyshyn and Liam J. Bannon (1989). Perspectives on the Computer Revolution. Ablex, New York.
External links
Graphical user interface elements
Pictograms |
58400960 | https://en.wikipedia.org/wiki/Element%20%28software%29 | Element (software) | Element (formerly Riot and Vector) is a free and open-source software instant messaging client implementing the Matrix protocol.
Element supports end-to-end encryption, groups and sharing of files between users. It is available as a web application, as desktop apps for all major operating systems and as a mobile app for Android and iOS.
History
Element was originally known as Vector when it was released from beta in 2016. The app was renamed to Riot in September of the same year.
In 2016, Matrix end-to-end encryption was first implemented and rolled out as a beta to users. In May 2020, the developers announced enabling end-to-end encryption by default in Riot for new non-public conversations.
In April 2019, a new application was released on the Google Play Store in response to cryptographic keys used to sign the Riot Android app being compromised.
In July 2020, Riot was renamed to Element.
In January 2021, Element was briefly suspended from Google Play Store in response to a report of user-submitted abusive content on Element's default server, matrix.org. Element staff rectified the issue and the app was brought back to the Play Store.
Technology
Element is built with the Matrix React SDK, which is a React-based software development kit to ease the development of Matrix clients. Element is reliant on web technologies and uses Electron for bundling the app for Windows, macOS and Linux. The Android and iOS clients are developed and distributed with their respective platform tools.
On Android the app is available both in the Google Play Store and the free-software only F-Droid Archives, with minor modifications. For instance, the F-Droid version does not contain the proprietary Google Cloud Messaging plug-in.
Features
Element is able to bridge other communications into the app via Matrix, including IRC, Slack, Telegram, Jitsi Meet and others. Also, it integrates voice and video peer-to-peer and group chats via WebRTC.
Element supports end-to-end encryption (E2EE) of both one-to-one and group chats.
Reception
Media compared Element to Slack, WhatsApp and other instant messaging clients.
In 2017, German computer magazine Golem.de called Element (then Riot) and Matrix server "mature" and "feature-rich", but criticized its key authentication at the time to be not user-friendly for communicatees owning multiple devices. A co-founder of the project, Matthew Hodgson, assured the key verification process was a "placeholder" solution to work on. In 2020, Element added key cross-signing to make the verification process simpler, and enabled end-to-end encryption by default.
See also
Matrix
IRC
Rich Communication Services (RCS)
Session Initiation Protocol (SIP)
XMPP
References
External links
Communication software
Cross-platform software
Free and open-source Android software
Free instant messaging clients
Mobile instant messaging clients
IOS software
Linux software
macOS software
Windows software |
21937273 | https://en.wikipedia.org/wiki/Vnmr | Vnmr | vnmr is software for controlling nuclear magnetic resonance spectrometers. It is produced by Varian, Inc.
The software runs on SPARC machines with Solaris. It is composed of a command line interpreter window, an output window, and a status window.
The successor of vnmr is called vnmrJ.
Buttons
There are a number of buttons to perform specific tasks.
Acqi gives access to other menus:
insertion of the sample in the NMR machine
setting up of the lock
setting up of the shims
Command line interpreter
Main commands:
jexp1: join experiment 1
go: start an acquisition
wft: Weighted Fourier Transform
svf: save file; saved files get a ".fid" extension
lf: list files
movetof: move time offset
pwd: show current directory
res: give resolution values of a peak
abc: automatic baseline correction
Beginning with "d" are a number of commands for displaying things:
dscale: display scale bar
dg: display parameters
dpf: display peak frequency
dpir: display peak integrals
...
Integration:
ins=: set the integration surface (ins=100 for percentages, ins=9 for an integration over 9 protons)
dpirn: display partial integrals, normalized
See also
Comparison of NMR software
External links
Vnmr on NMR wiki
Command-line software
Nuclear magnetic resonance software
Solaris software |
8223796 | https://en.wikipedia.org/wiki/Internet%20safety | Internet safety | Internet safety, also known as online safety, cyber safety or e-safety, is the act of maximizing a user's awareness of personal safety and security risks to private information and property associated with using the internet, and the self-protection from computer crime.
As the number of internet users continues to grow worldwide, governments and organizations have expressed concerns about the safety of children and teenagers using the Internet. Over 45% of users report having endured some form of cyber-harassment. Safer Internet Day is celebrated worldwide in February to raise awareness about internet safety. In the UK the Get Safe Online campaign has received sponsorship from the government agency Serious Organised Crime Agency (SOCA) and major Internet companies such as Microsoft and eBay.
Information security
Sensitive information, such as personal details, identity and passwords, is closely tied to personal property and privacy and may present security concerns if leaked. Unauthorized access to and use of private information may result in consequences such as identity theft and theft of property. Common causes of information security breaches include:
Phishing
Phishing is a type of scam in which scammers disguise themselves as a trustworthy source in an attempt to obtain private information, such as passwords and credit card details, through the internet. Phishing websites are often designed to look identical to their legitimate counterparts to avoid arousing the user's suspicion.
Malware
Malware, particularly spyware, is malicious software designed to collect and transmit private information, such as passwords, without the user's consent or knowledge. It is often distributed through e-mail and through software and files from unofficial sources. Malware is one of the most prevalent security concerns, as it is often impossible to determine whether a file is infected, regardless of its source.
Personal safety
The growth of the internet gave rise to many important services accessible to anyone with a connection, one of which is digital communication. While this service allows communication with others over the internet, it also allows communication with malicious users. Malicious users often use the internet for personal gain, but their aims are not limited to financial or material gain. This is of particular concern to parents and children, as children are frequent targets of such users. Common threats to personal safety include phishing, internet scams, malware, cyberstalking, cyberbullying, online predation and sextortion.
Cyberstalking
Cyberstalking is the use of the Internet or other electronic means, such as social media, email, instant messaging (IM), or messages posted to a discussion group or forum, to stalk or harass an individual, group, or organization. It may include false accusations, defamation, slander and libel, as well as monitoring, identity theft, threats, vandalism, solicitation for sex, or the gathering of information that may be used to threaten, embarrass or harass. The terms cyberstalking and cyberbullying are often used interchangeably.
Cyberbullying
Cyberbullying is the use of electronic means such as instant messaging, social media, e-mail and other forms of online communication with the intent to abuse, intimidate, or overpower an individual or group. In a 2012 study of 11,925 students in the United States, 23% of adolescents reported being a victim of cyberbullying, 30% of whom reported experiencing suicidal behavior. The Australian eSafety Commissioner's website reports that 20% of young Australians report being socially excluded, threatened or abused online.
Sometimes, this takes the form of posting unverifiable and illegal libelous statements on harassment websites. These websites then run advertisements encouraging the victims to pay thousands of dollars to related businesses to get the posts removed – temporarily, as opposed to the free and permanent removal process available through major web search engines.
Online predation
Online predation is the act of engaging an underage minor in inappropriate sexual relationships through the internet. Online predators may attempt to initiate and seduce minors into relationships through the use of chat rooms or internet forums.
Obscene/offensive content
Various websites on the internet contain material that some deem offensive, distasteful or explicit, which may often not be to the user's liking. Such websites may include shock sites, hate speech or otherwise inflammatory content. Such content may manifest in many ways, such as pop-up ads and deceptive links.
Sextortion
Sextortion, especially via the use of webcams, is a concern, particularly for those who use webcams for flirting and cybersex. Often this involves a cybercriminal posing as someone else, such as an attractive person, and initiating communication of a sexual nature with the victim. The victim is then persuaded to undress in front of a webcam, and may also be persuaded to engage in sexual behaviour, such as masturbation. The video is recorded by the cybercriminal, who then reveals their true intent and demands money or other services (such as more explicit images of the victim, in cases of online predation), threatening to publicly release the video and send it to the victim's family members and friends if they do not comply. As part of wider efforts to educate the public on the risks of sextortion, the National Crime Agency in the UK has released a video highlighting its dangers, especially given that blackmail of a sexual nature may cause humiliation severe enough to drive a victim to take their own life.
See also
Control software:
Accountability software
Content control software
Identity fraud
Internet crime
Internet fraud
Internet security
Procurement through online dating services
Website reputation rating tools
Groups and individuals working on the topic
AHTCC – Australian High Tech Crime Centre
Childnet
Insafe
Sonia Livingstone
ThinkUKnow
Tween summit
Youth Internet Safety Survey
References
External links
Crime prevention
Internet culture |
8542291 | https://en.wikipedia.org/wiki/Winston%20Churchill%20High%20School%20%28Livonia%2C%20Michigan%29 | Winston Churchill High School (Livonia, Michigan) | Churchill High School, named after Winston Churchill, is one of the four main public high schools (and the most recently built) in the city of Livonia, Michigan, a western suburb of Detroit. The school was created in 1968 as an add-on to the other high schools in Livonia in response to the population boom that the city saw at the time. During the first school year (1968–69), a sophomore class attended classes at nearby Franklin High School. Beginning in the 1969–70 school year, classes were held in the new building with a junior and sophomore class. The first graduating class graduated in June 1971. The school is home to the MSC (Math, Science, and Computers) program as well as the Creative and Performing Arts program (CAPA). It also has a wide variety of athletics. The Girls' Cross-Country team finished second in the state of Michigan in 2006, and the Girls' Varsity Volleyball team won the 2007 state championship. The Livonia Career Technical Center is across the street, providing all Livonia Public School students the opportunity to engage in many hands-on activities.
Math, Science, and Computer program
The Livonia Public School District provides magnet programs for academically talented students in grades 9 through 12. The magnet programs offer students the opportunity to experience an appropriately accelerated, integrated curriculum in an enriched environment with their intellectual peers. Teachers and support staff are sensitive to the developmental issues of the particular age as well as the social and emotional issues of highly talented youngsters.
Teachers in the magnet programs at each level meet as a team on a regularly scheduled basis to integrate curricular content. Child study meetings dealing with social/emotional needs are conducted regularly in the programs.
The magnet programs include programming at the elementary level (grades 1–6, ACAT), the middle school level (grades 7–8, MACAT), and the high school level (grades 9–12) in the Math, Science and Computer Program (MSC), which is housed at Churchill High School.
The Math/Science/Computer Program, MSC, located at Churchill High School, is a four-year program that begins with a thirty-student ninth grade class. Admissions are based on previous standardized testing and a special test usually administered in a student's second year of junior high school, during the month of December. A number of students who are not initially accepted into the program (known as "alternates") may participate in a limited number of MSC classes, or may be allowed full entry if students drop out of the program at the end of a school year.
The curriculum is specifically designed for students who have an intense interest in math and science. The accelerated courses are taught at a faster pace and in greater depth. The Advanced Placement (AP) Program gives students the opportunity to pursue college-level studies and receive advanced placement and/or credit upon entering college. Many of the students also participate in Churchill High School's Accelerated Language Arts Program.
Notable alumni
Charlie LeDuff, journalist and winner of Pulitzer Prize
Zach Gowen, former WWE and TNA Wrestling star
Judy Greer, actress and former member of CAPA
Ryan Kesler, former professional hockey player of the Anaheim Ducks and Olympic silver medalist on Team USA
Jonathan B. Wright, actor and former member of CAPA, starred in the original Broadway production of Spring Awakening.
Derek Grant, current drummer of Alkaline Trio and former drummer of The Suicide Machines, among other notable bands
Chris Conner, professional hockey player
Shawn Tinnes, folk musician known by her stage name Sista Otis
Torey Krug, professional hockey player for the St. Louis Blues
Adam Bedell, member of the MLS Columbus Crew
References
External links
Churchill High School website
Livonia Public Schools
Public high schools in Michigan
Educational institutions established in 1969
Livonia, Michigan
Schools in Wayne County, Michigan
Magnet schools in Michigan
1969 establishments in Michigan |
12590 | https://en.wikipedia.org/wiki/Grace%20Hopper | Grace Hopper | Grace Brewster Murray Hopper (; December 9, 1906 – January 1, 1992) was an American computer scientist and United States Navy rear admiral. One of the first programmers of the Harvard Mark I computer, she was a pioneer of computer programming who invented one of the first linkers. Hopper was the first to devise the theory of machine-independent programming languages, and the FLOW-MATIC programming language she created using this theory was later extended to create COBOL, an early high-level programming language still in use today.
Prior to joining the Navy, Hopper earned a Ph.D. in mathematics from Yale University and was a professor of mathematics at Vassar College. Hopper attempted to enlist in the Navy during World War II but was rejected because she was 34 years old. She instead joined the Navy Reserve. Hopper began her computing career in 1944 when she worked on the Harvard Mark I team led by Howard H. Aiken. In 1949, she joined the Eckert–Mauchly Computer Corporation and was part of the team that developed the UNIVAC I computer. At Eckert–Mauchly she managed the development of one of the first COBOL compilers. She believed that a programming language based on English was possible. Her compiler converted English terms into machine code understood by computers. By 1952, Hopper had finished her program linker (originally called a compiler), which was written for the A-0 System. During her wartime service, she co-authored three papers based on her work on the Harvard Mark I.
In 1954, Eckert–Mauchly chose Hopper to lead their department for automatic programming, and she led the release of some of the first compiled languages like FLOW-MATIC. In 1959, she participated in the CODASYL consortium, which consulted Hopper to guide them in creating a machine-independent programming language. This led to the COBOL language, which was inspired by her idea of a language being based on English words. In 1966, she retired from the Naval Reserve, but in 1967 the Navy recalled her to active duty. She retired from the Navy in 1986 and found work as a consultant for the Digital Equipment Corporation, sharing her computing experiences.
The U.S. Navy guided-missile destroyer USS Hopper was named for her, as was the Cray XE6 "Hopper" supercomputer at NERSC. During her lifetime, Hopper was awarded 40 honorary degrees from universities across the world. A college at Yale University was renamed in her honor. In 1991, she received the National Medal of Technology. On November 22, 2016, she was posthumously awarded the Presidential Medal of Freedom by President Barack Obama.
Early life and education
Grace Brewster Murray was born in New York City. She was the eldest of three children. Her parents, Walter Fletcher Murray and Mary Campbell Van Horne, were of Scottish and Dutch descent, and attended West End Collegiate Church. Her great-grandfather, Alexander Wilson Russell, an admiral in the US Navy, fought in the Battle of Mobile Bay during the Civil War.
Grace was very curious as a child; this was a lifelong trait. At the age of seven, she decided to determine how an alarm clock worked and dismantled seven alarm clocks before her mother realized what she was doing (she was then limited to one clock). For her preparatory school education, she attended the Hartridge School in Plainfield, New Jersey. Grace was initially rejected for early admission to Vassar College at age 16 (because her test scores in Latin were too low), but she was admitted the following year. She graduated Phi Beta Kappa from Vassar in 1928 with a bachelor's degree in mathematics and physics and earned her master's degree at Yale University in 1930.
In 1930 Grace Murray married New York University professor Vincent Foster Hopper (1906–1976); they divorced in 1945. Although she did not marry again, she retained his surname.
In 1934, Hopper earned a Ph.D. in mathematics from Yale under the direction of Øystein Ore. Her dissertation, "New Types of Irreducibility Criteria", was published that same year. She began teaching mathematics at Vassar in 1931, and was promoted to associate professor in 1941.
Career
World War II
Hopper tried to enlist in the Navy early in World War II. She was rejected for a few reasons. At age 34, she was too old to enlist, and her weight to height ratio was too low. She was also denied on the basis that her job as a mathematician and mathematics professor at Vassar College was valuable to the war effort. During the war in 1943, Hopper obtained a leave of absence from Vassar and was sworn into the United States Navy Reserve; she was one of many women who volunteered to serve in the WAVES. She had to get an exemption to enlist, as she was below the Navy minimum weight. She reported in December and trained at the Naval Reserve Midshipmen's School at Smith College in Northampton, Massachusetts. Hopper graduated first in her class in 1944, and was assigned to the Bureau of Ships Computation Project at Harvard University as a lieutenant, junior grade. She served on the Mark I computer programming staff headed by Howard H. Aiken. Hopper and Aiken co-authored three papers on the Mark I, also known as the Automatic Sequence Controlled Calculator. Hopper's request to transfer to the regular Navy at the end of the war was declined due to her advanced age of 38. She continued to serve in the Navy Reserve. Hopper remained at the Harvard Computation Lab until 1949, turning down a full professorship at Vassar in favor of working as a research fellow under a Navy contract at Harvard.
UNIVAC
In 1949, Hopper became an employee of the Eckert–Mauchly Computer Corporation as a senior mathematician and joined the team developing the UNIVAC I. Hopper also served as UNIVAC director of Automatic Programming Development for Remington Rand. The UNIVAC was the first known large-scale electronic computer to be on the market in 1950, and was more competitive at processing information than the Mark I.
When Hopper recommended the development of a new programming language that would use entirely English words, she "was told very quickly that [she] couldn't do this because computers didn't understand English." Still, she persisted. "It's much easier for most people to write an English statement than it is to use symbols," she explained. "So I decided data processors ought to be able to write their programs in English, and the computers would translate them into machine code."
Her idea was not accepted for three years. In the meantime, she published her first paper on the subject, compilers, in 1952. In the early 1950s, the company was taken over by the Remington Rand corporation, and it was while she was working for them that her original compiler work was done. The program was known as the A compiler and its first version was A-0.
In 1952, she had an operational link-loader, which at the time was referred to as a compiler. She later said that "Nobody believed that," and that she "had a running compiler and nobody would touch it. They told me computers could only do arithmetic." She goes on to say that her compiler "translated mathematical notation into machine code. Manipulating symbols was fine for mathematicians but it was no good for data processors who were not symbol manipulators. Very few people are really symbol manipulators. If they are they become professional mathematicians, not data processors. It's much easier for most people to write an English statement than it is to use symbols. So I decided data processors ought to be able to write their programs in English, and the computers would translate them into machine code. That was the beginning of COBOL, a computer language for data processors. I could say 'Subtract income tax from pay' instead of trying to write that in octal code or using all kinds of symbols. COBOL is the major language used today in data processing."
In 1954 Hopper was named the company's first director of automatic programming, and her department released some of the first compiler-based programming languages, including MATH-MATIC and FLOW-MATIC.
COBOL
In the spring of 1959, computer experts from industry and government were brought together in a two-day conference known as the Conference on Data Systems Languages (CODASYL). Hopper served as a technical consultant to the committee, and many of her former employees served on the short-term committee that defined the new language COBOL (an acronym for COmmon Business-Oriented Language). The new language extended Hopper's FLOW-MATIC language with some ideas from the IBM equivalent, COMTRAN. Hopper's belief that programs should be written in a language that was close to English (rather than in machine code or in languages close to machine code, such as assembly languages) was captured in the new business language, and COBOL went on to be the most ubiquitous business language to date. Among the members of the committee that worked on COBOL was Mount Holyoke College alumna Jean E. Sammet.
From 1967 to 1977, Hopper served as the director of the Navy Programming Languages Group in the Navy's Office of Information Systems Planning and was promoted to the rank of captain in 1973. She developed validation software for COBOL and its compiler as part of a COBOL standardization program for the entire Navy.
Standards
In the 1970s, Hopper advocated for the Defense Department to replace large, centralized systems with networks of small, distributed computers. Any user on any computer node could access common databases located on the network. She developed the implementation of standards for testing computer systems and components, most significantly for early programming languages such as FORTRAN and COBOL. The Navy tests for conformance to these standards led to significant convergence among the programming language dialects of the major computer vendors. In the 1980s, these tests (and their official administration) were assumed by the National Bureau of Standards (NBS), known today as the National Institute of Standards and Technology (NIST).
Retirement
In accordance with Navy attrition regulations, Hopper retired from the Naval Reserve with the rank of commander at age 60 at the end of 1966. She was recalled to active duty in August 1967 for a six-month period that turned into an indefinite assignment. She again retired in 1971 but was again asked to return to active duty in 1972. She was promoted to captain in 1973 by Admiral Elmo R. Zumwalt, Jr.
After Republican Representative Philip Crane saw her on a March 1983 segment of 60 Minutes, he championed , a joint resolution originating in the House of Representatives, which led to her promotion on 15 December 1983 to commodore by special Presidential appointment by President Ronald Reagan. She remained on active duty for several years beyond mandatory retirement by special approval of Congress. Effective November 8, 1985, the rank of commodore was renamed rear admiral (lower half) and Hopper became one of the Navy's few female admirals.
Following a career that spanned more than 42 years, Admiral Hopper took retirement from the Navy on August 14, 1986. At a celebration held in Boston on the USS Constitution to commemorate her retirement, Hopper was awarded the Defense Distinguished Service Medal, the highest non-combat decoration awarded by the Department of Defense.
At the time of her retirement, she was the oldest active-duty commissioned officer in the United States Navy (79 years, eight months and five days), and had her retirement ceremony aboard the oldest commissioned ship in the United States Navy (188 years, nine months and 23 days). Admirals William D. Leahy, Chester W. Nimitz, Hyman G. Rickover and Charles Stewart were the only other officers in the Navy's history to serve on active duty at a higher age. Leahy and Nimitz served on active duty for life due to their promotions to the rank of fleet admiral.
Post-retirement
Following her retirement from the Navy, she was hired as a senior consultant to Digital Equipment Corporation (DEC). Hopper was initially offered a position by Rita Yavinsky, but she insisted on going through the typical formal interview process. She then proposed in jest that she would be willing to accept a position which made her available on alternating Thursdays, exhibited at their museum of computing as a pioneer, in exchange for a generous salary and unlimited expense account. Instead, she was hired as a full-time Principal Corporate Consulting Engineer, a tech-track SVP-equivalent. In this position, Hopper represented the company at industry forums, serving on various industry committees, along with other obligations. She retained that position until her death at age 85 in 1992.
At DEC Hopper served primarily as a goodwill ambassador. She lectured widely about the early days of computing, her career, and on efforts that computer vendors could take to make life easier for their users. She visited most of Digital's engineering facilities, where she generally received a standing ovation at the conclusion of her remarks. Although no longer a serving officer, she always wore her Navy full dress uniform to these lectures contrary to U.S. Department of Defense policy.
"The most important thing I've accomplished, other than building the compiler," she said, "is training young people. They come to me, you know, and say, 'Do you think we can do this?' I say, 'Try it.' And I back 'em up. They need that. I keep track of them as they get older and I stir 'em up at intervals so they don't forget to take chances."
Anecdotes
Throughout much of her later career, Hopper was much in demand as a speaker at various computer-related events. She was well known for her lively and irreverent speaking style, as well as a rich treasury of early war stories. She also received the nickname "Grandma COBOL".
While she was working on the Mark II computer at Harvard University in 1947, her associates discovered a moth that was stuck in a relay and impeding the operation of the computer. Upon extraction, the insect was affixed to the log sheet for that day with the notation, "First actual case of bug being found". While neither she nor her crew members used the exact word "debugging" in their log entries, the case is held as a historical instance of "debugging" a computer, and Hopper is credited with popularizing the term in computing. For many decades, the term "bug" for a malfunction had been in use in several fields before being applied to computers. The remains of the moth can be found taped into the group's log book at the Smithsonian Institution's National Museum of American History in Washington, D.C.
Grace Hopper is famous for her nanoseconds visual aid. People (such as generals and admirals) used to ask her why satellite communication took so long. She started handing out pieces of wire that were just under one foot long, the distance that light travels in one nanosecond. She gave these pieces of wire the metonym "nanoseconds." She was careful to tell her audience that the length of her nanoseconds was actually the maximum distance a signal could travel in that time, the distance covered in a vacuum, and that signals would travel more slowly through the actual wires that were her teaching aids. Later she used the same pieces of wire to illustrate why computers had to be small to be fast. At many of her talks and visits, she handed out "nanoseconds" to everyone in the audience, contrasting them with a coil of wire nearly 1,000 feet long, representing a microsecond. Later, while giving these lectures while working for DEC, she passed out packets of pepper, calling the individual grains of ground pepper picoseconds.
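The arithmetic behind the visual aid is easy to verify. The following short calculation (a sanity check added here, not part of Hopper's material) uses the defined speed of light in a vacuum to reproduce both lengths:

```python
C = 299_792_458  # speed of light in a vacuum, metres per second (exact by definition)

# Hopper's "nanosecond": the distance light covers in 1 ns, just under one foot.
ns = C * 1e-9
print(f"1 nanosecond  -> {ns * 100:.1f} cm = {ns / 0.0254:.1f} inches")
# -> 1 nanosecond  -> 30.0 cm = 11.8 inches

# The contrasting "microsecond" coil is a thousand times longer.
us = C * 1e-6
print(f"1 microsecond -> {us:.0f} m = {us / 0.3048:.0f} feet")
# -> 1 microsecond -> 300 m = 984 feet
```

Signals in real copper wire propagate at a sizable fraction of this speed, which is why Hopper stressed that her wire lengths were an upper bound.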
Jay Elliot described Grace Hopper as appearing to be " 'all Navy', but when you reach inside, you find a 'Pirate' dying to be released."
Death
On New Year's Day 1992, Hopper died in her sleep of natural causes at her home in Arlington, Virginia; she was 85 years of age. She was interred with full military honors in Arlington National Cemetery.
Dates of rank
Awards and honors
Military awards
Other awards
1964: Hopper was awarded the Society of Women Engineers Achievement Award, the Society's highest honor, "In recognition of her significant contributions to the burgeoning computer industry as an engineering manager and originator of automatic programming systems." In May 1955, Hopper was one of the founding members of the Society of Women Engineers.
1969: Hopper was awarded the inaugural Data Processing Management Association Man of the Year award (now called the Distinguished Information Sciences Award).
1971: The annual Grace Murray Hopper Award for Outstanding Young Computer Professionals was established in 1971 by the Association for Computing Machinery.
1973: Elected to the U.S. National Academy of Engineering.
1973: First American and the first woman of any nationality to be made a Distinguished Fellow of the British Computer Society.
1981: Received an Honorary PhD from Clarkson University.
1982: American Association of University Women Achievement Award and an Honorary Doctor of Science from Marquette University.
1983: Golden Plate Award of the American Academy of Achievement.
1985: Honorary Doctor of Letters from Western New England College (now Western New England University).
1986: Received the Defense Distinguished Service Medal at her retirement.
1986: Received an Honorary Doctor of Science from Syracuse University.
1987: She became the first Computer History Museum Fellow Award Recipient "for contributions to the development of programming languages, for standardization efforts, and for lifelong naval service."
1988: Received the Golden Gavel Award, Toastmasters International.
1991: National Medal of Technology.
1991: Elected a Fellow of the American Academy of Arts and Sciences.
1992: The Society of Women Engineers established three annual, renewable, "Admiral Grace Murray Hopper Scholarships"
1994: Inducted into the National Women's Hall of Fame.
1996: USS Hopper was launched. Nicknamed Amazing Grace, it is on a very short list of U.S. military vessels named after women.
2001: Eavan Boland wrote a poem dedicated to Grace Hopper titled "Code" in her 2001 release Against Love Poetry.
2001: The Gracies, the Government Technology Leadership Award were named in her honor.
2009: The Department of Energy's National Energy Research Scientific Computing Center named its flagship system "Hopper".
2009: Office of Naval Intelligence creates the Grace Hopper Information Services Center.
2013: Google made the Google Doodle for Hopper's 107th birthday an animation of her sitting at a computer, using COBOL to print out her age. At the end of the animation, a moth flies out of the computer.
2016: On November 22, 2016, Hopper was posthumously awarded a Presidential Medal of Freedom for her accomplishments in the field of computer science.
2017: Hopper College at Yale University was named in her honor.
2021: The Admiral Grace Hopper Award was established by the chancellor of the College of Information and Cyberspace (CIC) of the National Defense University to recognize leaders in the fields of information and cybersecurity throughout the National Security community.
Legacy
Grace Hopper was awarded 40 honorary degrees from universities worldwide during her lifetime.
Born with Curiosity: The Grace Hopper Story is an upcoming documentary film.
Nvidia is naming an upcoming GPU generation Hopper after Grace Hopper.
The Navy's Hopper Information Services Center is named for her.
The Navy named a guided-missile destroyer Hopper after her.
Places
Grace Hopper Avenue in Monterey, California, is the location of the Navy's Fleet Numerical Meteorology and Oceanography Center as well as the National Weather Service's San Francisco Bay Area forecast office.
Grace M. Hopper Navy Regional Data Automation Center at Naval Air Station, North Island, California.
Grace Murray Hopper Park, located on South Joyce Street in Arlington, Virginia, is a small memorial park in front of her former residence (River House Apartments) and is now owned by Arlington County, Virginia.
Brewster Academy, a school located in Wolfeboro, New Hampshire, United States, dedicated their computer lab to her in 1985, calling it the Grace Murray Hopper Center for Computer Learning. The academy bestows a Grace Murray Hopper Prize to a graduate who excelled in the field of computer systems. Hopper had spent her childhood summers at a family home in Wolfeboro.
Grace Hopper College, one of the residential colleges of Yale University.
An administration building on Naval Support Activity Annapolis (previously known as Naval Station Annapolis) in Annapolis, Maryland is named the Grace Hopper Building in her honor.
Vice Admiral Walter E. "Ted" Carter announced on September 8, 2016 at the Athena Conference that the Naval Academy's newest Cyber Operations building would be named Hopper Hall after Admiral Grace Hopper. This is the first building at any service academy named after a woman. In his words, Grace Hopper was "the admiral of the cyber seas."
The US Naval Academy also owns a Cray XC-30 supercomputer named "Grace," hosted at the University of Maryland-College Park.
Building 1482 aboard Naval Air Station North Island, housing the Naval Computer and Telecommunication Station San Diego, is named the Grace Hopper Building, and also contains the History of Naval Communications Museum.
Building 6007, C2/CNT West in Aberdeen Proving Ground, Maryland, is named after her.
The street outside of the Nathan Deal Georgia Cyber Innovation and Training Center in Augusta, Georgia, is named Grace Hopper Lane.
Grace Hopper Academy is a for-profit immersive programming school in New York City named in Grace Hopper's honor. It opened in January 2016 with the goal of increasing the proportion of women in software engineering careers.
A bridge over Goose Creek, to join the north and south sides of the Naval Support Activity Charleston side of Joint Base Charleston, South Carolina, is named the Grace Hopper Memorial Bridge in her honor.
Minor planet 5773 Hopper, discovered by Eleanor Helin, is named in her honor. The official naming citation was published by the Minor Planet Center on 8 November 2019.
Grace Hopper Hall, a community meeting hall in Orlando, Florida (located on the site of the former Orlando Naval Training Center) is named for her.
Programs
Women at Microsoft Corporation formed an employee group called Hoppers and established a scholarship in her honor.
Beginning in 2015, one of the nine competition fields at the FIRST Robotics Competition world championship is named for Hopper.
A named professorship in the Department of Computer Sciences was established at Yale University in her honor. Joan Feigenbaum was named to this chair in 2008.
In 2020, Google named its new undersea network cable 'Grace Hopper'. The cable will connect the US, UK and Spain and is estimated to be completed by 2022.
In popular culture
In his comic book series, Secret Coders by Gene Luen Yang, the main character is named Hopper Gracie-Hu.
Since 2013, Hopper's official portrait has been included in the matplotlib python library as sample data to replace the controversial Lenna image.
Grace Hopper Celebration of Women in Computing
Her legacy was an inspiring factor in the creation of the Grace Hopper Celebration of Women in Computing. Held yearly, this conference is designed to bring the research and career interests of women in computing to the forefront.
See also
Code: Debugging the Gender Gap
Grace Hopper Celebration of Women in Computing
List of pioneers in computer science
Systems engineering
Women in computing
Women in the United States Navy
List of female United States military generals and flag officers
Timeline of women in science
Notes
Obituary notices
Betts, Mitch (Computerworld 26: 14, 1992)
Bromberg, Howard (IEEE Software 9: 103–104, 1992)
Danca, Richard A. (Federal Computer Week 6: 26–27, 1992)
Hancock, Bill (Digital Review 9: 40, 1992)
Power, Kevin (Government Computer News 11: 70, 1992)
Sammet, J. E. (Communications of the ACM 35 (4): 128–131, 1992)
Weiss, Eric A. (IEEE Annals of the History of Computing 14: 56–58, 1992)
References
Further reading
Williams' book focuses on the lives and contributions of four notable women scientists: Mary Sears (1905–1997); Florence van Straten (1913–1992); Grace Murray Hopper (1906–1992); Mina Spiegel Rees (1902–1997).
External links
Oral History of Captain Grace Hopper – Interviewed by: Angeline Pantages 1980, Naval Data Automation Command, Maryland.
from Chips, the United States Navy information technology magazine.
Grace Hopper: Navy to the Core, a Pirate at Heart (2014), an article about Hopper's story and Navy legacy, navy.mil.
The Queen of Code (2015), a documentary film about Grace Hopper produced by FiveThirtyEight.
Norwood, Arlisha. "Grace Hopper". National Women's History Museum. 2017.
1906 births
1992 deaths
American computer programmers
American computer scientists
COBOL
Programming language designers
American women computer scientists
Women inventors
American women mathematicians
United States Navy rear admirals (lower half)
Female admirals of the United States Navy
Fellows of the American Academy of Arts and Sciences
Fellows of the British Computer Society
National Medal of Technology recipients
Recipients of the Defense Distinguished Service Medal
Recipients of the Legion of Merit
Recipients of the Meritorious Service Medal (United States)
Harvard University people
Vassar College faculty
Military personnel from New York City
Vassar College alumni
Yale University alumni
American people of Dutch descent
American people of Scottish descent
Burials at Arlington National Cemetery
20th-century American engineers
20th-century American mathematicians
20th-century American scientists
20th-century American women scientists
Presidential Medal of Freedom recipients
Computer science educators
American software engineers
20th-century women mathematicians
Mathematicians from New York (state)
Wardlaw-Hartridge School alumni
WAVES personnel
103127
https://en.wikipedia.org/wiki/Brute-force%20search
Brute-force search
In computer science, brute-force search or exhaustive search, also known as generate and test, is a very general problem-solving technique and algorithmic paradigm that consists of systematically enumerating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement.
A brute-force algorithm that finds the divisors of a natural number n would enumerate all integers from 1 to n, and check whether each of them divides n without remainder. A brute-force approach for the eight queens puzzle would examine all possible arrangements of 8 pieces on the 64-square chessboard and for each arrangement, check whether each (queen) piece can attack any other.
While a brute-force search is simple to implement and will always find a solution if it exists, implementation costs are proportional to the number of candidate solutions, which in many practical problems tends to grow very quickly as the size of the problem increases (§Combinatorial explosion). Therefore, brute-force search is typically used when the problem size is limited, or when there are problem-specific heuristics that can be used to reduce the set of candidate solutions to a manageable size. The method is also used when the simplicity of implementation is more important than speed.
This is the case, for example, in critical applications where any errors in the algorithm would have very serious consequences or when using a computer to prove a mathematical theorem. Brute-force search is also useful as a baseline method when benchmarking other algorithms or metaheuristics. Indeed, brute-force search can be viewed as the simplest metaheuristic. Brute force search should not be confused with backtracking, where large sets of solutions can be discarded without being explicitly enumerated (as in the textbook computer solution to the eight queens problem above). The brute-force method for finding an item in a table, namely, checking all entries of the latter sequentially, is called linear search.
Implementing the brute-force search
Basic algorithm
In order to apply brute-force search to a specific class of problems, one must implement four procedures, first, next, valid, and output. These procedures should take as a parameter the data P for the particular instance of the problem that is to be solved, and should do the following:
first (P): generate a first candidate solution for P.
next (P, c): generate the next candidate for P after the current one c.
valid (P, c): check whether candidate c is a solution for P.
output (P, c): use the solution c of P as appropriate to the application.
The next procedure must also tell when there are no more candidates for the instance P, after the current one c. A convenient way to do that is to return a "null candidate", some conventional data value Λ that is distinct from any real candidate. Likewise the first procedure should return Λ if there are no candidates at all for the instance P. The brute-force method is then expressed by the algorithm
c ← first(P)
while c ≠ Λ do
if valid(P,c) then
output(P, c)
c ← next(P, c)
end while
For example, when looking for the divisors of an integer n, the instance data P is the number n. The call first(n) should return the integer 1 if n ≥ 1, or Λ otherwise; the call next(n,c) should return c + 1 if c < n, and Λ otherwise; and valid(n,c) should return true if and only if c is a divisor of n. (In fact, if we choose Λ to be n + 1, the tests n ≥ 1 and c < n are unnecessary.)
The brute-force search algorithm above will call output for every candidate that is a solution to the given instance P. The algorithm is easily modified to stop after finding the first solution, or a specified number of solutions; or after testing a specified number of candidates, or after spending a given amount of CPU time.
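The divisor example can be coded directly against the first/next/valid template above. This is a minimal Python sketch; the names (LAMBDA, next_, divisors) are illustrative choices, not part of any standard API.

```python
LAMBDA = None  # the "null candidate" Λ

def first(n):
    # first candidate: 1, or Λ if there are no candidates at all
    return 1 if n >= 1 else LAMBDA

def next_(n, c):
    # next candidate after c, or Λ when candidates are exhausted
    return c + 1 if c < n else LAMBDA

def valid(n, c):
    return n % c == 0

def divisors(n):
    found = []
    c = first(n)
    while c is not LAMBDA:
        if valid(n, c):
            found.append(c)  # the "output" step: collect the solution
        c = next_(n, c)
    return found

print(divisors(12))  # → [1, 2, 3, 4, 6, 12]
```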
Combinatorial explosion
The main disadvantage of the brute-force method is that, for many real-world problems, the number of natural candidates is prohibitively large. For instance, if we look for the divisors of a number as described above, the number of candidates tested will be the given number n. So if n has sixteen decimal digits, say, the search will require executing at least 10^15 computer instructions, which will take several days on a typical PC. If n is a random 64-bit natural number, which has about 19 decimal digits on the average, the search will take about 10 years. This steep growth in the number of candidates, as the size of the data increases, occurs in all sorts of problems. For instance, if we are seeking a particular rearrangement of 10 letters, then we have 10! = 3,628,800 candidates to consider, which a typical PC can generate and test in less than one second. However, adding one more letter, which is only a 10% increase in the data size, will multiply the number of candidates by 11, a 1000% increase. For 20 letters, the number of candidates is 20!, which is about 2.4×10^18 or 2.4 quintillion; and the search will take about 10 years. This unwelcome phenomenon is commonly called the combinatorial explosion, or the curse of dimensionality.
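The factorial growth described above is easy to verify numerically, as in this short sketch:

```python
import math

# Number of rearrangements of n letters is n!
for letters in (10, 11, 20):
    print(letters, math.factorial(letters))

# Adding the 11th letter (a 10% increase in data size) multiplies the
# candidate count by 11, i.e. a 1000% increase.
assert math.factorial(11) == 11 * math.factorial(10)
```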
One example of a case where combinatorial complexity leads to solvability limit is in solving chess. Chess is not a solved game. In 2005, all chess game endings with six pieces or less were solved, showing the result of each position if played perfectly. It took ten more years to complete the tablebase with one more chess piece added, thus completing a 7-piece tablebase. Adding one more piece to a chess ending (thus making an 8-piece tablebase) is considered intractable due to the added combinatorial complexity.
Speeding up brute-force searches
One way to speed up a brute-force algorithm is to reduce the search space, that is, the set of candidate solutions, by using heuristics specific to the problem class. For example, in the eight queens problem the challenge is to place eight queens on a standard chessboard so that no queen attacks any other. Since each queen can be placed in any of the 64 squares, in principle there are 64^8 = 281,474,976,710,656 possibilities to consider. However, because the queens are all alike, and no two queens can be placed on the same square, the candidates are all possible ways of choosing a set of 8 squares from the set of all 64 squares; which means 64 choose 8 = 64!/(56!*8!) = 4,426,165,368 candidate solutions, about 1/60,000 of the previous estimate. Further, no arrangement with two queens on the same row or the same column can be a solution. Therefore, we can further restrict the set of candidates to those arrangements.
As this example shows, a little bit of analysis will often lead to dramatic reductions in the number of candidate solutions, and may turn an intractable problem into a trivial one.
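The successive reductions for the eight queens problem can be quantified in a few lines (numbers only; no board search is performed here):

```python
import math

naive = 64 ** 8                       # any queen on any square
unordered = math.comb(64, 8)          # queens are indistinguishable
one_per_row_col = math.factorial(8)   # one queen per row and per column

print(naive, unordered, one_per_row_col)
# 281,474,976,710,656 → 4,426,165,368 → 40,320 candidates
```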
In some cases, the analysis may reduce the candidates to the set of all valid solutions; that is, it may yield an algorithm that directly enumerates all the desired solutions (or finds one solution, as appropriate), without wasting time with tests and the generation of invalid candidates. For example, for the problem "find all integers between 1 and 1,000,000 that are evenly divisible by 417" a naive brute-force solution would generate all integers in the range, testing each of them for divisibility. However, that problem can be solved much more efficiently by starting with 417 and repeatedly adding 417 until the number exceeds 1,000,000which takes only 2398 (= 1,000,000 ÷ 417) steps, and no tests.
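A sketch of the direct enumeration just described: rather than testing every integer in 1..1,000,000 for divisibility, generate the multiples of 417 directly.

```python
def multiples(divisor, limit):
    # enumerate divisor, 2*divisor, ... up to limit: no divisibility tests
    value = divisor
    result = []
    while value <= limit:
        result.append(value)
        value += divisor
    return result

m = multiples(417, 1_000_000)
print(len(m))  # → 2398 steps
```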
Reordering the search space
In applications that require only one solution, rather than all solutions, the expected running time of a brute force search will often depend on the order in which the candidates are tested. As a general rule, one should test the most promising candidates first. For example, when searching for a proper divisor of a random number n, it is better to enumerate the candidate divisors in increasing order, from 2 to , than the other way around, because the probability that n is divisible by c is 1/c. Moreover, the probability of a candidate being valid is often affected by the previous failed trials. For example, consider the problem of finding a 1 bit in a given 1000-bit string P. In this case, the candidate solutions are the indices 1 to 1000, and a candidate c is valid if P[c] = 1. Now, suppose that the first bit of P is equally likely to be 0 or 1, but each bit thereafter is equal to the previous one with 90% probability. If the candidates are enumerated in increasing order, 1 to 1000, the number t of candidates examined before success will be about 6, on the average. On the other hand, if the candidates are enumerated in the order 1,11,21,31...991,2,12,22,32 etc., the expected value of t will be only a little more than 2.
More generally, the search space should be enumerated in such a way that the next candidate is most likely to be valid, given that the previous trials were not. So if the valid solutions are likely to be "clustered" in some sense, then each new candidate should be as far as possible from the previous ones, in that same sense. The converse holds, of course, if the solutions are likely to be spread out more uniformly than expected by chance.
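The ordering effect in the 1000-bit example can be checked by simulation. The sketch below (with illustrative names and a fixed seed) generates correlated bit strings and compares the average number of trials under the sequential and strided enumeration orders; the exact averages vary with the random runs, but the strided order consistently wins.

```python
import random

def make_string(n=1000, p_same=0.9, rng=random):
    # first bit uniform; each later bit equals its predecessor with prob p_same
    bits = [rng.randint(0, 1)]
    for _ in range(n - 1):
        bits.append(bits[-1] if rng.random() < p_same else 1 - bits[-1])
    return bits

def trials(bits, order):
    # number of candidates examined before a 1 bit is found
    for t, i in enumerate(order, start=1):
        if bits[i] == 1:
            return t
    return len(order)  # no 1 bit in the string

n = 1000
sequential = list(range(n))
# 0, 10, 20, ..., 990, then 1, 11, 21, ... (the strided order from the text)
strided = [s + 10 * k for s in range(10) for k in range(n // 10)]

rng = random.Random(1)
runs = [make_string(rng=rng) for _ in range(2000)]
avg_seq = sum(trials(b, sequential) for b in runs) / len(runs)
avg_str = sum(trials(b, strided) for b in runs) / len(runs)
print(round(avg_seq, 1), round(avg_str, 1))  # strided is smaller on average
```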
Alternatives to brute-force search
There are many other search methods, or metaheuristics, which are designed to take advantage of various kinds of partial knowledge one may have about the solution. Heuristics can also be used to make an early cutoff of parts of the search. One example of this is the minimax principle for searching game trees, that eliminates many subtrees at an early stage in the search. In certain fields, such as language parsing, techniques such as chart parsing can exploit constraints in the problem to reduce an exponential complexity problem into a polynomial complexity problem. In many cases, such as in Constraint Satisfaction Problems, one can dramatically reduce the search space by means of Constraint propagation, that is efficiently implemented in Constraint programming languages. The search space for problems can also be reduced by replacing the full problem with a simplified version. For example, in computer chess, rather than computing the full minimax tree of all possible moves for the remainder of the game, a more limited tree of minimax possibilities is computed, with the tree being pruned at a certain number of moves, and the remainder of the tree being approximated by a static evaluation function.
In cryptography
In cryptography, a brute-force attack involves systematically checking all possible keys until the correct key is found. This strategy can in theory be used against any encrypted data (except a one-time pad) by an attacker who is unable to take advantage of any weakness in an encryption system that would otherwise make his or her task easier.
The key length used in the encryption determines the practical feasibility of performing a brute force attack, with longer keys exponentially more difficult to crack than shorter ones. Brute force attacks can be made less effective by obfuscating the data to be encoded, something that makes it more difficult for an attacker to recognise when he has cracked the code. One of the measures of the strength of an encryption system is how long it would theoretically take an attacker to mount a successful brute force attack against it.
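As a toy illustration of a brute-force key search, the sketch below uses a deliberately tiny 16-bit key space with a simple XOR "cipher" so the loop finishes instantly; real key lengths make this approach infeasible, which is the point of the discussion above. The cipher and function names are invented for illustration.

```python
def encrypt(plaintext: bytes, key: int) -> bytes:
    # toy XOR "cipher" with a 16-bit key; XOR is its own inverse
    k = key.to_bytes(2, "big")
    return bytes(b ^ k[i % 2] for i, b in enumerate(plaintext))

def crack(ciphertext: bytes, known_plaintext: bytes) -> int:
    for key in range(2 ** 16):  # systematically try every possible key
        if encrypt(ciphertext, key) == known_plaintext:
            return key
    raise ValueError("key not found")

secret = encrypt(b"attack at dawn", 0xBEEF)
print(hex(crack(secret, b"attack at dawn")))  # → 0xbeef
```

Doubling the key length squares the size of the loop, which is why key length dominates the practical feasibility of such an attack.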
References
See also
A brute-force algorithm to solve Sudoku puzzles.
Brute-force attack
Big O notation
Search algorithms
18620293
https://en.wikipedia.org/wiki/Cherrypal
Cherrypal
Cherrypal is a California-based marketer of Chinese-manufactured consumer-oriented computers. It markets a range of models with a diversity of CPU types, structures, features, and operating systems. Commentators have observed that Cherrypal arguably beat the heralded and much-better financed One Laptop per Child (OLPC) project to its goal of a $100 "laptop" (such units are physically small: a Cherrypal unit for general purchase at $99 plus shipping has a 7" screen; an OLPC provided to a child in the developing world at $199 has a 7.5").
The company's business practices have generated controversy and antipathy from some vocally dissatisfied customers, while others are marginally satisfied. Its practices pertaining to merchandise returns and communication have been repeatedly faulted. The U.S. Better Business Bureau rating for Cherrypal is an "F", indicating that the BBB strongly questions the company’s reliability.
Cherrypal claims a commitment to environmental concerns and the needs of impoverished countries and in particular key sponsorship of a learning center in Ghana. It supports a "One Laptop per Teacher" pilot program in Nigeria.
In order, the company has marketed the "C114" PPC-processor-based nettop, the "Cherrypal Bing" x86-based netbook, the "Cherrypal Africa" XBurst-CPU-based netbook, the "Cherrypal Asia" ARM-processor-based netbooks, and the "CherryPad America" ARM-processor-based tablet computer.
C114
The Cherrypal C114 is a small, light nettop computer using a PowerPC processor, the Freescale 5121e system-on-a-chip (SoC) integrated main-board, and Xubuntu as its operating system. The device launches the Firefox Minefield web browser, the AbiWord word processor, and other apps via icon double-click. An article in The Register noted that Cherrypal's producers asserted that the computer will consume only 2 watts of power. Independent, informal testing has shown a still-low consumption of 6.9 watts while booting.
The CherryPal C114 was a rebadged version of the LimePC D1 mini-desktop computer developed as part of a broader Freescale PowerPC chip-based product line by THTF's Shenzhen R&D center and shown to the public at the 2008 CES in Las Vegas in January 2008.
Bing
The "Cherrypal Bing" is a slim x86-based netbook that ships with Windows XP.
Africa
Cherrypal's $99 netbook, the Africa, is aimed primarily at the developing world but also available for sale to consumers. According to a blog post by Max Seybold, the device's specs in Cherrypal's web store are kept intentionally vague, because the Africa is not built to a set design. Instead, Cherrypal either purchases pre-made netbook systems or buys odd lots of whatever inexpensive components are available and builds netbooks out of these. It then rebrands these netbooks as Africas. The $99 computer was named Africa in honor of PAAJAF, a humanitarian services group based in Ghana, West-Africa.
Seybold states that the resulting device will at a minimum meet the specs listed on the website, but could also exceed them. It could also end up having an ARM, MIPS, or x86-based CPU architecture depending on what chips are available.
In an interview, Seybold stated that the Africa is not meant to be sold as a "computer" in the traditional sense, but as an "appliance" to provide Internet access to people who could not afford to buy a traditional computer. He said that with the number of government services (such as unemployment or disability) that are encouraging access by Internet, lack of such access is becoming more and more of a disability. The only thing Cherrypal promises for $99 is the ability to access the Internet.
Asia
The "Cherrypal Asia" is a low cost ARM-based netbook that uses Android OS version 1.7.
America
The "Cherrypal America" also known as Cherrypad is an Android tablet based upon Telechips TCC89xx ARM11 processors. Cherrypal initially sold the tablet with the promise of an upgrade to Android 2.2 by November and support for Android Market. The Market support has been officially removed because the tablet does not conform to the market requirements by Google. The Android 2.2 upgrade has also been canceled; instead, Cherrypal now promises an update to Android 2.3.
The hardware of the Cherrypal America as listed by Cherrypal comprises an 800 MHz ARM11 CPU by Telechips, 256 MB DDR2 RAM, 2 GB Flash Memory and an 800x480 resistive touchscreen. However, there is a user report arguing some less powerful specifications according to the boot-log dumped via dmesg.
According to various Android news magazines Cherrypal has announced a successor for their current Cherrypad.
History
Cherrypal was founded by Max Seybold based in Palo Alto. The C114 (and following C120) desktop computers were originally developed by Tsinghua Tongfang (THTF) in its Shenzhen R&D center by an engineering team led by American electronics industry veterans Jack Campbell and Ryan Quinn. An extended line of handheld, desktop, and TV-based PCs using the Freescale MPC5121e PowerPC microprocessor was shown at CES 2008 by THTF, with the desktop product picked up thereafter as an OEM purchase by CherryPal.
Cloud computing plans
Cherrypal's marketers planned to use Firefox not only as its web-browser but also as its user interface to launch other applications such as OpenOffice.org. They planned that the Cherrypal would make use of cloud computing in which applications and storage would be wholly or in part Internet-based. These plans have not yet been implemented. The company's president asserted the cloud (Green Maraschino) would be launched in February 2010, however it is not known to have occurred.
Timeline
Jan. 2008: C114 model originally shown at CES by its manufacturer THTF.
Jul. 2008: Cherrypal was originally scheduled to ship in late July, 2008.
4 Nov. 2008: Rescheduled the ship date for 4 November 2008.
3 Dec. 2008: The first end-user report of actually receiving a boxed Cherrypal was posted. Cherrypal stated they had earlier shipped some multiple-unit orders to organizational customers. More users began receiving their Cherrypals, and real-life test reports were released, with mixed responses.
20 Jun. 2009: A competitor's blog claimed Cherrypal went out of business in the UK in June 2009 and that "We are not quite sure what has happened with Cherrypal Inc. in the USA."
Dec. 2009: An upgraded version of the Cherrypal is offered for sale on the company website. Also released were an update for its "Bing" notebook, and a $99 mini-notebook called "Africa."
6 Jan. 2010: A Teleread.org editor claimed caution was needed before dealing with Cherrypal. In response to that, and comments to the article from persons accusing Cherrypal of engaging in a scam, Max Seybold sent a "Cease and Desist" email to the website. The site owners then decided to delete all the Cherrypal articles and comments.
11 Jan. 2010: A German blogger stated online-tracking showed a shipment of his unit on its way, but that this has never arrived because of a wrong tracking-number sent by Cherrypal. He later stated that he received a unit with specifications less than advertised, and OS other than advertised.
18 Jan. 2010: A Mobileread.com editor authored an article accusing Cherrypal of "lies, ignored emails, and technical incompetence," which Mobileread.com ran. The website formerly ran another attack article, "Cherrypal is a SCAM."
20 Mar. 2010: An end-user reported that he received a $99 CherryPal Africa Linux version, providing pictures. "As some of the negative internet posters seemed more strident than the situation demanded, I decided to take a chance," his blog reads. A week later another end-user reported Cherrypal Africa receipt, criticizing the shipping delay and software features, though saying "it works."
11 May 2010: News forums reported the launch of "Cherrypal Asia," a netbook with ARM processor, Android OS.
30 Jun, 2010: A user comment at an Android forum indicates receipt of an Asia.
References
External links
Cherrypal corporate website
"Cherrypal Unveils Low-Cost 'Cloud Computer'," Information Week, July 21, 2008
"Cherrypal Mini-desktop Consumes 2 Watts of Power," IDG News Service/PC World Magazine, July 21, 2008
"Will Cherrypal be the first mass-market cloud computer?", Venture Beat, July 21, 2008
“Cloud Enabled,” Linux Magazine, July 23, 2008
“Linux mini-PC takes two Watts to tango” Desktop Linux/eWeek, July 22, 2008
“250 Freescale-Based ‘Green’ ‘Cloud’ Computer,” SlashDot, July 22, 2008
“Cherrypal out sweetens Apple with 2W, ultra-cheap PC,” The Register, June 17, 2008
“Waiting on a CerryPal? Don't Hold Your Breath” TG Daily, December 4, 2008
Cloud clients
Linux-based devices
Nettop
15062
https://en.wikipedia.org/wiki/Intel%208080
Intel 8080
The Intel 8080 ("eighty-eighty") is the second 8-bit microprocessor designed and manufactured by Intel. It first appeared in April 1974 and is an extended and enhanced variant of the earlier 8008 design, although without binary compatibility. The initial specified clock rate or frequency limit was 2 MHz, with common instructions using 4, 5, 7, 10, or 11 cycles. As a result, the processor is able to execute several hundred thousand instructions per second. Two faster variants, the 8080A-1 (sometimes referred to as the 8080B) and 8080A-2, became available later with clock frequency limits of 3.125 MHz and 2.63 MHz respectively.
The 8080 needs two support chips to function in most applications: the i8224 clock generator/driver and the i8228 bus controller. It is implemented in N-type metal-oxide-semiconductor logic (NMOS) using non-saturated enhancement mode transistors as loads thus demanding a +12 V and a −5 V voltage in addition to the main transistor–transistor logic (TTL) compatible +5 V.
Although earlier microprocessors were commonly used in mass-produced devices such as calculators, cash registers, computer terminals, industrial robots, and other applications, the 8080 saw greater success in a wider set of applications, and is largely credited with starting the microcomputer industry. Several factors contributed to its popularity: its 40-pin package made it easier to interface than the 18-pin 8008, and also made its data bus more efficient; its NMOS implementation gave it faster transistors than those of the P-type metal-oxide-semiconductor logic (PMOS) 8008, while also simplifying interfacing by making it TTL-compatible; a wider variety of support chips were available; its instruction set was enhanced over the 8008; and its full 16-bit address bus (versus the 14-bit one of the 8008) enabled it to access 64 KB of memory, four times more than the 8008's range of 16 KB. It was used in the Altair 8800 and subsequent S-100 bus personal computers until it was replaced by the Z80 in this role, and was the original target CPU for CP/M operating systems developed by Gary Kildall.
The 8080 directly influenced the later x86 architecture. Intel designed the 8086 to have its assembly language be similar enough to the 8080, with most instructions mapping directly onto each other, that transpiled 8080 assembly code could be executed on the 8086.
History
Microprocessor customers were reluctant to adopt the 8008 because of limitations such as the single addressing mode, low clock speed, low pin count, and small on-chip stack, which restricted the scale and complexity of software. There were several proposed designs for the 8080, ranging from simply adding stack instructions to the 8008 to a complete departure from all previous Intel architectures. The final design was a compromise between the proposals.
Federico Faggin, the originator of the 8080 architecture in early 1972, proposed the chip to Intel's management and pushed for its implementation. He finally got the permission to develop it six months later. Faggin hired Masatoshi Shima, who helped design the 4004 with him, from Japan in November 1972. Shima did the detailed design under Faggin's direction, using the design methodology for random logic with silicon gate that Faggin had created for the 4000 family.
The 8080 was explicitly designed to be a general-purpose microprocessor for a larger number of customers. Much of the development effort was spent trying to integrate the functionalities of the 8008's supplemental chips into one package. It was decided early in development that the 8080 was not to be binary-compatible with the 8008, instead opting for source compatibility once run through a transpiler, to allow new software to not be subject to the same restrictions as the 8008. For the same reason, as well as to expand the capabilities of stack-based routines and interrupts, the stack was moved to external memory.
Noting the specialized use of general-purpose registers by programmers in mainframe systems, Stanley Mazor, the chip architect, decided the 8080's registers would be specialized, with register pairs having a different set of uses. This also allowed the engineers to more effectively use transistors for other purposes.
Shima finished the layout in August 1973. After the regulation of NMOS fabrication, a prototype of the 8080 was completed in January 1974. It had a flaw, in that driving with standard TTL devices increased the ground voltage because high current flowed into the narrow line. Intel had already produced 40,000 units of the 8080 at the direction of the sales section before Shima characterized the prototype. It was released as requiring Low-power Schottky TTL (LS TTL) devices. The 8080A fixed this flaw.
Intel offered an instruction set simulator for the 8080 named INTERP/80 to run compiled PL/M programs. It was written by Gary Kildall while he worked as a consultant for Intel.
Description
Programming model
The Intel 8080 is the successor to the 8008. It uses the same basic instruction set and register model as the 8008, although it is neither source code compatible nor binary code compatible with its predecessor. Every instruction in the 8008 has an equivalent instruction in the 8080. The 8080 also adds 16-bit operations in its instruction set. Whereas the 8008 required the use of the HL register pair to indirectly access its 14-bit memory space, the 8080 added addressing modes to allow direct access to its full 16-bit memory space. The internal 7-level push-down call stack of the 8008 was replaced by a dedicated 16-bit stack-pointer (SP) register. The 8080's 40-pin DIP packaging permits it to provide a 16-bit address bus and an 8-bit data bus, enabling access to 64 KiB (2^16 bytes) of memory.
Registers
The processor has seven 8-bit registers (A, B, C, D, E, H, and L), where A is the primary 8-bit accumulator. The other six registers can be used as either individual 8-bit registers or in three 16-bit register pairs (BC, DE, and HL, referred to as B, D and H in Intel documents) depending on the particular instruction. Some instructions also enable the HL register pair to be used as a (limited) 16-bit accumulator. A pseudo-register M, which refers to the dereferenced memory location pointed to by HL, can be used almost anywhere other registers can be used. The 8080 has a 16-bit stack pointer to memory, replacing the 8008's internal stack, and a 16-bit program counter.
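The register/pair arrangement described above can be sketched in a few lines. This is an illustrative Python model (the class and method names are invented), showing how the 8-bit registers B, C, D, E, H, L combine into the 16-bit pairs BC, DE, HL with the first-named register as the high byte:

```python
class Regs8080:
    """Illustrative model of the 8080's seven 8-bit registers."""

    def __init__(self):
        self.r = dict.fromkeys("ABCDEHL", 0)

    def pair(self, name):
        # name is "BC", "DE" or "HL"; first register holds the high byte
        hi, lo = name
        return (self.r[hi] << 8) | self.r[lo]

    def set_pair(self, name, value):
        hi, lo = name
        self.r[hi] = (value >> 8) & 0xFF
        self.r[lo] = value & 0xFF

regs = Regs8080()
regs.set_pair("HL", 0x1234)
print(hex(regs.r["H"]), hex(regs.r["L"]), hex(regs.pair("HL")))
# → 0x12 0x34 0x1234
```

The pseudo-register M would then be modeled as the memory byte at address `regs.pair("HL")`.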
Flags
The processor maintains internal flag bits (a status register), which indicate the results of arithmetic and logical instructions. Only certain instructions affect the flags. The flags are:
Sign (S), set if the result is negative.
Zero (Z), set if the result is zero.
Parity (P), set if the number of 1 bits in the result is even.
Carry (C), set if the last addition operation resulted in a carry or if the last subtraction operation required a borrow
Auxiliary carry (AC or H), used for binary-coded decimal arithmetic (BCD).
The carry bit can be set or complemented by specific instructions. Conditional-branch instructions test the various flag status bits. The flags can be copied as a group to the accumulator. The A accumulator and the flags together are called the PSW register, or program status word.
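A sketch of how the flags listed above could be computed after an 8-bit addition. The flag semantics follow the text (S, Z, P, CY, AC); the function itself is an illustration, not 8080 microcode.

```python
def add_flags(a: int, b: int):
    result = (a + b) & 0xFF
    flags = {
        "S": result >= 0x80,                   # sign: bit 7 of the result
        "Z": result == 0,                      # zero
        "P": bin(result).count("1") % 2 == 0,  # parity: even number of 1 bits
        "CY": a + b > 0xFF,                    # carry out of bit 7
        "AC": (a & 0x0F) + (b & 0x0F) > 0x0F,  # auxiliary carry out of bit 3
    }
    return result, flags

# 0x9C + 0x64 = 0x100: result wraps to 0x00, setting Z, P and CY
print(add_flags(0x9C, 0x64))
```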
Commands, instructions
As with many other 8-bit processors, all instructions are encoded in one byte (including register numbers, but excluding immediate data), for simplicity. Some can be followed by one or two bytes of data, which can be an immediate operand, a memory address, or a port number. Like more advanced processors, it has automatic CALL and RET instructions for multi-level procedure calls and returns (which can even be conditionally executed, like jumps) and instructions to save and restore any 16-bit register pair on the machine stack. Eight one-byte call instructions () for subroutines exist at the fixed addresses 00h, 08h, 10h, ..., 38h. These are intended to be supplied by external hardware in order to invoke a corresponding interrupt service routine, but are also often employed as fast system calls. The instruction that executes slowest is , which is used for exchanging the register pair HL with the value stored at the address indicated by the stack pointer.
8-bit instructions
All 8-bit operations with two operands can only be performed on the 8-bit accumulator (the A register). The other operand can be either an immediate value, another 8-bit register, or a memory byte addressed by the 16-bit register pair HL. Increments and decrements can be performed on any 8 bit register or an HL-addressed memory byte. Direct copying is supported between any two 8-bit registers and between any 8-bit register and an HL-addressed memory byte. Due to the regular encoding of the instruction (using a quarter of available opcode space), there are redundant codes to copy a register into itself (, for instance), which are of little use, except for delays. However, the systematic opcode for is instead used to encode the halt () instruction, halting execution until an external reset or interrupt occurs.
16-bit operations
Although the 8080 is generally an 8-bit processor, it has limited abilities to perform 16-bit operations. Any of the three 16-bit register pairs (BC, DE, or HL, referred to as B, D, H in Intel documents) or SP can be loaded with an immediate 16-bit value (using LXI), incremented or decremented (using INX and DCX), or added to HL (using DAD). By adding HL to itself, it is possible to achieve the same result as a 16-bit arithmetical left shift with one instruction. The only 16-bit instruction that affects any flag is DAD, which sets the CY (carry) flag in order to allow for programmed 24-bit or 32-bit arithmetic (or larger), needed to implement floating-point arithmetic. A stack frame can be allocated using DAD SP and SPHL. A branch to a computed pointer can be executed with PCHL. LHLD loads HL from directly addressed memory and SHLD stores HL likewise. The XCHG instruction exchanges the values of the HL and DE register pairs. XTHL exchanges the last item pushed on the stack with HL.
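The add-HL-to-itself shift and the direct 16-bit loads and stores can be sketched as follows (the addresses are illustrative):

```asm
        lhld 2000h     ; HL = 16-bit value stored at 2000h/2001h
        dad  h         ; HL = HL + HL: a one-instruction 16-bit left shift
                       ; CY receives the bit shifted out
        shld 2002h     ; store the shifted value at 2002h/2003h
        xchg           ; swap the HL and DE register pairs
```

Because DAD sets CY, repeating the shift while rotating the carry into another register pair allows multi-word shifts to be built from these primitives.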
Input/output scheme
Input output port space
The 8080 supports up to 256 input/output (I/O) ports, accessed via dedicated IN and OUT instructions taking port addresses as operands. This I/O mapping scheme is regarded as an advantage, as it frees up the processor's limited address space. Many CPU architectures instead use so-called memory-mapped I/O (MMIO), in which a common address space is used for both RAM and peripheral chips. This removes the need for dedicated I/O instructions, although a drawback in such designs may be that special hardware must be used to insert wait states, as peripherals are often slower than memory. However, in some simple 8080 computers, I/O devices are indeed addressed as if they were memory cells, "memory-mapped", leaving the I/O instructions unused. I/O addressing can also sometimes employ the fact that the processor outputs the same 8-bit port address to both the lower and the higher address byte (i.e., OUT 05h would put the address 0505h on the 16-bit address bus). Similar I/O-port schemes are used in the backward-compatible Zilog Z80 and Intel 8085, and the closely related x86 microprocessor families.
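A typical polled input sequence using the dedicated I/O instructions might look like the following (a sketch; the port numbers and the meaning of the ready bit are invented for illustration):

```asm
wait:   in   10h       ; read the device status port into A
                       ; (address bus carries 1010h during the transfer)
        ani  01h       ; isolate the assumed "ready" bit
        jz   wait      ; spin until the device signals ready
        in   11h       ; read a data byte from the data port
        out  12h       ; echo it to an output port
```

Both IN and OUT transfer a byte between the accumulator and the addressed port; the 8-bit port number is duplicated on both halves of the address bus as described above.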
Separate stack space
One of the bits in the processor state word (see below) indicates that the processor is accessing data from the stack. Using this signal, it is possible to implement a separate stack memory space. This feature is seldom used.
The internal state word
For more advanced systems, during one phase of its working loop, the processor sets its "internal state byte" on the data bus. This byte contains flags that determine whether memory or an I/O port is accessed and whether it is necessary to handle an interrupt.
The interrupt system state (enabled or disabled) is also output on a separate pin. For simple systems, where the interrupts are not used, it is possible to find cases where this pin is used as an additional single-bit output port (the popular Radio-86RK computer made in the Soviet Union, for instance).
Example code
The following 8080/8085 assembler source code is for a subroutine named memcpy that copies a block of data bytes of a given size from one location to another. The data block is copied one byte at a time, and the data movement and looping logic utilizes 16-bit operations.
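Such a routine can be sketched as follows (a reconstruction, not the original listing; the register conventions BC = byte count, DE = source address, HL = destination address are assumed):

```asm
memcpy: mov  a,b       ; test whether the count in BC is zero:
        ora  c         ; OR the high and low count bytes (DCX sets no flags)
        rz             ; return when the count reaches zero
        ldax d         ; load a byte from the source address in DE
        mov  m,a       ; store it at the destination address in HL
        inx  d         ; advance the source pointer
        inx  h         ; advance the destination pointer
        dcx  b         ; decrement the 16-bit byte count
        jmp  memcpy    ; repeat for the next byte
```

The explicit MOV/ORA test at the top is needed because the 16-bit DCX instruction does not affect the flags.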
Pin use
The address bus has its own 16 pins, and the data bus has 8 pins that are usable without any multiplexing. Using the two additional pins (read and write signals), it is possible to assemble simple microprocessor devices very easily. Only the separate I/O space, interrupts, and DMA require additional chips to decode the processor pin signals. However, the pin load capacity is limited; even simple computers often require bus amplifiers.
The processor needs three power sources (−5, +5, and +12 V) and two non-overlapping high-amplitude synchronizing signals. However, at least the late Soviet version КР580ВМ80А was able to work with a single +5 V power source, the +12 V pin being connected to +5 V and the −5 V pin to ground.
The pin-out table, from the chip's accompanying documentation, describes the pins as follows:
Support chips
A key factor in the success of the 8080 was the broad range of support chips available, providing serial communications, counter/timing, input/output, direct memory access, and programmable interrupt control amongst other functions:
8238 – System controller and bus driver
8251 – Communication controller
8253 – Programmable interval timer
8255 – Programmable peripheral interface
8257 – DMA controller
8259 – Programmable interrupt controller
Physical implementation
The 8080 integrated circuit uses non-saturated enhancement-load nMOS gates, demanding extra voltages (for the load-gate bias). It was manufactured in a silicon gate process using a minimal feature size of 6 µm. A single layer of metal is used to interconnect the approximately 4,500 transistors in the design, but the higher resistance polysilicon layer, which required higher voltage for some interconnects, is implemented with transistor gates. The die size is approximately 20 mm2.
The industrial impact
Applications and successors
The 8080 is used in many early microcomputers, such as the MITS Altair 8800 Computer, Processor Technology SOL-20 Terminal Computer and IMSAI 8080 Microcomputer, forming the basis for machines running the CP/M operating system. The later, almost fully compatible and more capable Zilog Z80 processor would capitalize on this, with the Z80 and CP/M becoming the dominant CPU and OS combination of the period circa 1976 to 1983, much as the x86 and DOS did for the PC a decade later.
In 1979, even after the introduction of the Z80 and 8085 processors, five manufacturers of the 8080 were selling an estimated 500,000 units per month at a price around $3 to $4 each.
The first single-board microcomputers, such as MYCRO-1 and the dyna-micro / MMD-1 (see: Single-board computer) were based on the Intel 8080. One of the early uses of the 8080 was made in the late 1970s by Cubic-Western Data of San Diego, CA in its Automated Fare Collection Systems custom designed for mass transit systems around the world. An early industrial use of the 8080 is as the "brain" of the DatagraphiX Auto-COM (Computer Output Microfiche) line of products which takes large amounts of user data from reel-to-reel tape and images it onto microfiche. The Auto-COM instruments also include an entire automated film cutting, processing, washing, and drying sub-system. Several early video arcade games were built around the 8080 microprocessor, including Space Invaders, one of the most popular arcade games ever made.
Zilog introduced the Z80, which has a compatible machine language instruction set and initially used the same assembly language as the 8080, but for legal reasons, Zilog developed a syntactically-different (but code compatible) alternative assembly language for the Z80. At Intel, the 8080 was followed by the compatible and electrically more elegant 8085.
Later, Intel issued the assembly-language compatible (but not binary-compatible) 16-bit 8086 and then the 8/16-bit 8088, which was selected by IBM for its new PC to be launched in 1981. Later NEC made the NEC V20 (an 8088 clone with Intel 80186 instruction set compatibility) which also supports an 8080 emulation mode. This is also supported by NEC's V30 (a similarly enhanced 8086 clone). Thus, the 8080, via its instruction set architecture (ISA), made a lasting impact on computer history.
A number of processors compatible with the Intel 8080A were manufactured in the Eastern Bloc: the KR580VM80A (initially marked as KP580ИK80) in the Soviet Union, the MCY7880 made by Unitra CEMI in Poland, the MHB8080A made by TESLA in Czechoslovakia, the 8080APC made by Tungsram / MEV in Hungary, and the MMN8080 made by Microelectronica Bucharest in Romania.
The 8080 is still in production at Lansdale Semiconductors.
Industry change
The 8080 also changed how computers were created. When the 8080 was introduced, computer systems were usually created by computer manufacturers such as Digital Equipment Corporation, Hewlett Packard, or IBM. A manufacturer would produce the whole computer, including processor, terminals, and system software such as compilers and operating system. The 8080 was designed for almost any application except a complete computer system. Hewlett Packard developed the HP 2640 series of smart terminals around the 8080. The HP 2647 is a terminal which runs the programming language BASIC on the 8080. Microsoft's founding product, Microsoft BASIC, was originally programmed for the 8080.
The 8080 and 8085 gave rise to the 8086, which was designed as a source code compatible, albeit not binary compatible, extension of the 8080. This design, in turn, later spawned the x86 family of chips, which continue to be Intel's primary line of processors. Many of the 8080's core machine instructions and concepts survive in the widespread x86 platform. Examples include the registers named A, B, C, and D and many of the flags used to control conditional jumps. 8080 assembly code can still be directly translated into x86 instructions, for all of its core elements are still present.
Cultural impact
Asteroid 8080 Intel is named as a pun on, and tribute to, the name Intel 8080.
Microsoft's published phone number, 425-882-8080, was chosen because much early work was on this chip.
Many of Intel's main phone numbers also take a similar form: xxx-xxx-8080
See also
CP/M – operating system
S-100 bus
MPT8080
References
External links
Intel and other manufacturers' 8080 CPU images and descriptions at cpu-collection.de
Scan of the Intel 8080 data book at DataSheetArchive.com
Microcomputer Design, Second Edition, 1976
8080 Emulator written in JavaScript
Intel 8080/KR580VM80A emulator in JavaScript
Intel 8080 Microcomputer Systems User's Manual (September 1975, 262 pages)
Intel 8080 Microcomputer Systems User's Manual (September 1975, 234 pages)
Intel 8080/8085 Instruction Reference Card
Computer-related introductions in 1974
8-bit microprocessors |
9611 | https://en.wikipedia.org/wiki/E-commerce | E-commerce | E-commerce (electronic commerce) is the activity of electronically buying or selling of products on online services or over the Internet. E-commerce draws on technologies such as mobile commerce, electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. E-commerce is in turn driven by the technological advances of the semiconductor industry, and is the largest sector of the electronics industry.
E-commerce typically uses the web for at least a part of a transaction's life cycle although it may also use other technologies such as e-mail. Typical e-commerce transactions include the purchase of products (such as books from Amazon) or services (such as music downloads in the form of digital distribution such as iTunes Store). There are three areas of e-commerce: online retailing, electronic markets, and online auctions. E-commerce is supported by electronic business.
E-commerce businesses may also employ some or all of the following:
Online shopping for retail sales direct to consumers via web sites and mobile apps, and conversational commerce via live chat, chatbots, and voice assistants;
Providing or participating in online marketplaces, which process third-party business-to-consumer (B2C) or consumer-to-consumer (C2C) sales;
Business-to-business (B2B) buying and selling;
Gathering and using demographic data through web contacts and social media;
B2B electronic data interchange;
Marketing to prospective and established customers by e-mail or fax (for example, with newsletters);
Engaging in pretail for launching new products and services;
Online financial exchanges for currency exchanges or trading purposes.
History and timeline
The term was coined and first employed by Dr. Robert Jacobson, Principal Consultant to the California State Assembly's Utilities & Commerce Committee, in the title and text of California's Electronic Commerce Act, carried by the late Committee Chairwoman Gwen Moore (D-L.A.) and enacted in 1984.
A timeline for the development of e-commerce:
1971 or 1972: The ARPANET is used to arrange a cannabis sale between students at the Stanford Artificial Intelligence Laboratory and the Massachusetts Institute of Technology, later described as "the seminal act of e-commerce" in John Markoff's book What the Dormouse Said.
1976: Atalla Technovation (founded by Mohamed Atalla) and Bunker Ramo Corporation (founded by George Bunker and Simon Ramo) introduce products designed for secure online transaction processing, intended for financial institutions.
1979: Michael Aldrich demonstrates the first online shopping system.
1981: Thomson Holidays UK is the first business-to-business (B2B) online shopping system to be installed.
1982: Minitel was introduced nationwide in France by France Télécom and used for online ordering.
1983: California State Assembly holds first hearing on "electronic commerce" in Volcano, California. Testifying are CPUC, MCI Mail, Prodigy, CompuServe, Volcano Telephone, and Pacific Telesis. (Not permitted to testify is Quantum Technology, later to become AOL.) California's Electronic Commerce Act was passed in 1984.
1983: Karen Earle Lile (AKA Karen Bean) and Kendall Ross Bean create an e-commerce service in the San Francisco Bay Area. Buyers and sellers of pianos connect through a database created by Piano Finders on a Kaypro personal computer using a DOS interface. Pianos for sale are listed on a bulletin board system. Buyers print lists of pianos for sale on a dot matrix printer. Customer service happens through a Piano Advice Hotline listed in the San Francisco Chronicle classified ads, and money is transferred by bank wire transfer when a sale is completed.
1984: Gateshead SIS/Tesco is the first B2C online shopping system, and Mrs Snowball, 72, is the first online home shopper.
1984: In April 1984, CompuServe launches the Electronic Mall in the US and Canada. It is the first comprehensive electronic commerce service.
1989: In May 1989, Sequoia Data Corp. introduced Compumarket, the first internet based system for e-commerce. Sellers and buyers could post items for sale and buyers could search the database and make purchases with a credit card.
1990: Tim Berners-Lee writes the first web browser, WorldWideWeb, using a NeXT computer.
1992: Book Stacks Unlimited in Cleveland opens a commercial sales website (www.books.com) selling books online with credit card processing.
1993: Paget Press releases edition No. 3 of the first app store, The Electronic AppWrapper
1994: Netscape releases the Navigator browser in October under the code name Mozilla. Netscape 1.0 is introduced in late 1994 with SSL encryption that made transactions secure.
1994: Ipswitch IMail Server becomes the first software available online for sale and immediate download via a partnership between Ipswitch, Inc. and OpenMarket.
1994: "Ten Summoner's Tales" by Sting becomes the first secure online purchase through NetMarket.
1995: The US National Science Foundation lifts its former strict prohibition of commercial enterprise on the Internet.
1995: Thursday 27 April 1995, the purchase of a book by Paul Stanfield, product manager for CompuServe UK, from W H Smith's shop within CompuServe's UK Shopping Centre is the UK's first national online shopping service secure transaction. The shopping service at launch featured W H Smith, Tesco, Virgin Megastores/Our Price, Great Universal Stores (GUS), Interflora, Dixons Retail, Past Times, PC World (retailer) and Innovations.
1995: Amazon is launched by Jeff Bezos.
1995: eBay is founded by computer programmer Pierre Omidyar as AuctionWeb. It is the first online auction site supporting person-to-person transactions.
1995: The first commercial-free 24-hour, internet-only radio stations, Radio HK and NetRadio start broadcasting.
1996: The use of Excalibur BBS with replicated "storefronts" was an early implementation of electronic commerce started by a group of SysOps in Australia and replicated to global partner sites.
1998: Electronic postal stamps can be purchased and downloaded for printing from the Web.
1999: Alibaba Group is established in China. Business.com sold for US$7.5 million to eCompanies, which was purchased in 1997 for US$149,000. The peer-to-peer filesharing software Napster launches. ATG Stores launches to sell decorative items for the home online.
1999: Global e-commerce reaches $150 billion
2000: The dot-com bust.
2001: eBay has the largest userbase of any e-commerce site.
2001: Alibaba.com achieved profitability in December 2001.
2002: eBay acquires PayPal for $1.5 billion. Niche retail companies Wayfair and NetShops are founded with the concept of selling products through several targeted domains, rather than a central portal.
2003: Amazon posts first yearly profit.
2004: DHgate.com, China's first online B2B transaction platform, is established, forcing other B2B sites to move away from the "yellow pages" model.
2007: Business.com acquired by R.H. Donnelley for $345 million.
2014: US e-commerce and online retail sales projected to reach $294 billion, an increase of 12 percent over 2013 and 9% of all retail sales. Alibaba Group has the largest Initial public offering ever, worth $25 billion.
2015: Amazon accounts for more than half of all e-commerce growth, selling almost 500 million SKUs in the US.
2017: Retail e-commerce sales across the world reach $2.304 trillion, a 24.8 percent increase over the previous year.
2017: Global e-commerce transactions generate , including for business-to-business (B2B) transactions and for business-to-consumer (B2C) sales.
Business application
Some common applications related to electronic commerce are:
Governmental regulation
In the United States, California's Electronic Commerce Act (1984), enacted by the Legislature, and the more recent California Privacy Act (2020) enacted through a popular election proposition, control specifically how electronic commerce may be conducted in California. In the US in its entirety, electronic commerce activities are regulated more broadly by the Federal Trade Commission (FTC). These activities include the use of commercial e-mails, online advertising and consumer privacy. The CAN-SPAM Act of 2003 establishes national standards for direct marketing over e-mail. The Federal Trade Commission Act regulates all forms of advertising, including online advertising, and states that advertising must be truthful and non-deceptive. Using its authority under Section 5 of the FTC Act, which prohibits unfair or deceptive practices, the FTC has brought a number of cases to enforce the promises in corporate privacy statements, including promises about the security of consumers' personal information. As a result, any corporate privacy policy related to e-commerce activity may be subject to enforcement by the FTC.
The Ryan Haight Online Pharmacy Consumer Protection Act of 2008, which came into law in 2008, amends the Controlled Substances Act to address online pharmacies.
Conflict of laws in cyberspace is a major hurdle for harmonization of legal framework for e-commerce around the world. In order to give a uniformity to e-commerce law around the world, many countries adopted the UNCITRAL Model Law on Electronic Commerce (1996).
Internationally there is the International Consumer Protection and Enforcement Network (ICPEN), which was formed in 1991 from an informal network of government customer fair trade organisations. The purpose was stated as being to find ways of co-operating on tackling consumer problems connected with cross-border transactions in both goods and services, and to help ensure exchanges of information among the participants for mutual benefit and understanding. From this came Econsumer.gov, an ICPEN initiative since April 2001. It is a portal to report complaints about online and related transactions with foreign companies.
There is also the Asia-Pacific Economic Cooperation (APEC), established in 1989 with the vision of achieving stability, security and prosperity for the region through free and open trade and investment. APEC has an Electronic Commerce Steering Group and also works on common privacy regulations throughout the APEC region.
In Australia, trade is covered under the Australian Treasury Guidelines for electronic commerce, and the Australian Competition & Consumer Commission regulates and offers advice on how to deal with businesses online, including specific advice on what happens if things go wrong.
In the United Kingdom, The Financial Services Authority (FSA) was formerly the regulating authority for most aspects of the EU's Payment Services Directive (PSD), until its replacement in 2013 by the Prudential Regulation Authority and the Financial Conduct Authority. The UK implemented the PSD through the Payment Services Regulations 2009 (PSRs), which came into effect on 1 November 2009. The PSR affects firms providing payment services and their customers. These firms include banks, non-bank credit card issuers and non-bank merchant acquirers, e-money issuers, etc. The PSRs created a new class of regulated firms known as payment institutions (PIs), who are subject to prudential requirements. Article 87 of the PSD requires the European Commission to report on the implementation and impact of the PSD by 1 November 2012.
In India, the Information Technology Act 2000 governs the basic applicability of e-commerce.
In China, the Telecommunications Regulations of the People's Republic of China (promulgated on 25 September 2000) stipulated the Ministry of Industry and Information Technology (MIIT) as the government department regulating all telecommunications-related activities, including electronic commerce. On the same day, the Administrative Measures on Internet Information Services were released; these were the first administrative regulations to address profit-generating activities conducted through the Internet, and laid the foundation for future regulations governing e-commerce in China. On 28 August 2004, the eleventh session of the tenth NPC Standing Committee adopted the Electronic Signature Law, which regulates data messages, electronic signature authentication and legal liability issues. It is considered the first law in China's e-commerce legislation. It was a milestone in the course of improving China's electronic commerce legislation, and also marks the entering of China's rapid development stage for electronic commerce legislation.
Forms
Contemporary electronic commerce can be classified into two categories. The first category is business based on types of goods sold (involves everything from ordering "digital" content for immediate online consumption, to ordering conventional goods and services, to "meta" services to facilitate other types of electronic commerce). The second category is based on the nature of the participant (B2B, B2C, C2B and C2C).
On the institutional level, big corporations and financial institutions use the internet to exchange financial data to facilitate domestic and international business. Data integrity and security are pressing issues for electronic commerce.
Aside from traditional e-commerce, the terms m-commerce (mobile commerce) and (around 2013) t-commerce have also been used.
Global trends
In 2010, the United Kingdom had the highest per capita e-commerce spending in the world. As of 2013, the Czech Republic was the European country where e-commerce delivers the biggest contribution to the enterprises' total revenue. Almost a quarter (24%) of the country's total turnover is generated via the online channel.
Among emerging economies, China's e-commerce presence continues to expand every year. With 668 million Internet users, China's online shopping sales reached $253 billion in the first half of 2015, accounting for 10% of total Chinese consumer retail sales in that period. The Chinese retailers have been able to help consumers feel more comfortable shopping online. e-commerce transactions between China and other countries increased 32% to 2.3 trillion yuan ($375.8 billion) in 2012 and accounted for 9.6% of China's total international trade. In 2013, Alibaba had an e-commerce market share of 80% in China. In 2014, there were 600 million Internet users in China (twice as many as in the US), making it the world's biggest online market. China is also the largest e-commerce market in the world by value of sales, with an estimated in 2016. Research shows that Chinese consumer motivations are different enough from Western audiences to require unique e-commerce app designs instead of simply porting Western apps into the Chinese market.
Recent research indicates that electronic commerce, commonly referred to as e-commerce, now shapes the manner in which people shop for products. The GCC countries have a rapidly growing market and a population that is becoming wealthier (Yuldashev). As such, retailers have launched Arabic-language websites as a means to target this population. Secondly, there are predictions of increased mobile purchases and an expanding internet audience (Yuldashev). The growth and development of these two aspects will make the GCC countries larger players in the electronic commerce market as time progresses. Specifically, research shows that the e-commerce market in these GCC countries is expected to grow to over $20 billion by the year 2020 (Yuldashev). The e-commerce market has also gained much popularity in western countries, in particular Europe and the U.S., which have been highly characterized by consumer-packaged goods (CPG) (Geisler, 34). However, trends show signs of a reversal: as in the GCC countries, purchases of goods and services are shifting from offline to online channels. Activist investors are pushing to consolidate and slash overall costs, and governments in western countries continue to impose more regulation on CPG manufacturers (Geisler, 36). In these senses, CPG investors are being forced to adapt to e-commerce, both because it is effective and because it offers them a means to thrive.
In 2013, Brazil's e-commerce was growing quickly with retail e-commerce sales expected to grow at a double-digit pace through 2014. By 2016, eMarketer expected retail e-commerce sales in Brazil to reach $17.3 billion. India has an Internet user base of about 460 million as of December 2017. Despite being the third largest user base in the world, the penetration of the Internet is low compared to markets like the United States, United Kingdom or France but is growing at a much faster rate, adding around 6 million new entrants every month. In India, cash on delivery is the most preferred payment method, accumulating 75% of the e-retail activities. The India retail market is expected to rise from 2.5% in 2016 to 5% in 2020.
The future trends in the GCC countries will be similar to those of the western countries. Despite the forces pushing businesses to adopt e-commerce as a means to sell goods and products, the manner in which customers make purchases is similar in countries from these two regions. For instance, there has been increased usage of smartphones, which comes in conjunction with an increase in the overall internet audience in the regions. Yuldashev writes that consumers are scaling up to more modern technology that allows for mobile marketing.
However, the percentage of smartphone and internet users who make online purchases is expected to vary in the first few years. It will depend on the willingness of people to adopt this new trend (The Statistics Portal). For example, the UAE has the greatest smartphone penetration, at 73.8 per cent, and 91.9 per cent of its population has access to the internet. On the other hand, smartphone penetration in Europe has been reported to be at 64.7 per cent (The Statistics Portal). Regardless, the disparity between these regions is expected to level out in the future, because e-commerce technology is expected to grow to allow for more users.
The e-commerce business within these two regions will result in competition. Government bodies at the country level will enhance their measures and strategies to ensure sustainability and consumer protection (Krings, et al.). These increased measures will raise the environmental and social standards in the countries, factors that will determine the success of the e-commerce market in these countries. For example, the adoption of tough sanctions will make it difficult for companies to enter the e-commerce market, while lenient sanctions will allow ease of entry. As such, the future trends between the GCC countries and the Western countries will be independent of these sanctions (Krings, et al.). These countries need to reach rational conclusions in coming up with effective sanctions.
The rate of growth of the number of internet users in the Arab countries has been rapid – 13.1% in 2015. A significant portion of the e-commerce market in the Middle East comprises people in the 30–34 year age group. Egypt has the largest number of internet users in the region, followed by Saudi Arabia and Morocco; these constitute 3/4th of the region's share. Yet, internet penetration is low: 35% in Egypt and 65% in Saudi Arabia.
E-commerce has become an important tool for small and large businesses worldwide, not only to sell to customers, but also to engage them.
In 2012, e-commerce sales topped $1 trillion for the first time in history.
Mobile devices are playing an increasing role in the mix of e-commerce; this is commonly called mobile commerce, or m-commerce. In 2014, one estimate saw purchases made on mobile devices making up 25% of the market by 2017.
For traditional businesses, one research study stated that information technology and cross-border e-commerce are a good opportunity for the rapid development and growth of enterprises. Many companies have invested enormously in mobile applications. The DeLone and McLean Model states that three perspectives contribute to a successful e-business: information system quality, service quality and users' satisfaction. With no limits of time and space, e-commerce gives businesses more opportunities to reach customers around the world and to cut out unnecessary intermediaries, thereby reducing costs; it also lets them draw on detailed customer data analysis to achieve a high degree of personalization and thus strengthen the competitiveness of their products.
Modern 3D graphics technologies, such as Facebook 3D Posts, are considered by some social media marketers and advertisers as a preferable way to promote consumer goods than static photos, and some brands like Sony are already paving the way for augmented reality commerce. Wayfair now lets shoppers inspect a 3D version of its furniture in a home setting before buying.
Logistics
Logistics in e-commerce mainly concerns fulfillment. Online markets and retailers have to find the best possible way to fill orders and deliver products. Small companies usually control their own logistic operation because they do not have the ability to hire an outside company. Most large companies hire a fulfillment service that takes care of a company's logistic needs.
Contrary to common misconception, there are significant barriers to entry in e-commerce.
Impacts
Impact on markets and retailers
E-commerce markets are growing at noticeable rates. The online market is expected to grow by 56% in 2015–2020. In 2017, retail e-commerce sales worldwide amounted to 2.3 trillion US dollars, and e-retail revenues are projected to grow to 4.891 trillion US dollars in 2021. Traditional markets are expected to see only 2% growth during the same time. Brick-and-mortar retailers are struggling because of online retailers' ability to offer lower prices and higher efficiency. Many larger retailers are able to maintain a presence offline and online by linking physical and online offerings.
E-commerce allows customers to overcome geographical barriers and to purchase products anytime and from anywhere. Online and traditional markets have different strategies for conducting business. Traditional retailers offer a smaller assortment of products because of limited shelf space, whereas online retailers often hold no inventory and instead forward customer orders directly to the manufacturer. Pricing strategies also differ: traditional retailers base their prices on store traffic and the cost of keeping inventory, while online retailers base prices on the speed of delivery.
There are two ways for marketers to conduct business through e-commerce: fully online or online along with a brick and mortar store. Online marketers can offer lower prices, greater product selection, and high efficiency rates. Many customers prefer online markets if the products can be delivered quickly at a relatively low price. However, online retailers cannot offer the physical experience that traditional retailers can. It can be difficult to judge the quality of a product without the physical experience, which may cause customers to experience product or seller uncertainty. Another issue regarding the online market is concerns about the security of online transactions. Many customers remain loyal to well-known retailers because of this issue.
Security is a primary problem for e-commerce in developed and developing countries. E-commerce security means protecting businesses' websites and customers from unauthorized access, use, alteration, or destruction. Threats include malicious code, unwanted programs (adware, spyware), phishing, hacking, and cyber vandalism. E-commerce websites use different tools to avert security threats, including firewalls, encryption software, digital certificates, and passwords.
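Of the tools just listed, password handling is the simplest to illustrate concretely. The sketch below uses only Python's standard library to salt and hash a password before storage instead of storing it in plain text; the iteration count and function names are illustrative choices for this example, not a specific security recommendation.

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return a (salt, digest) pair that can be stored instead of the password."""
    salt = secrets.token_bytes(16)  # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest from the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

Even if such a site's user database is stolen, the attacker obtains only salted hashes rather than reusable passwords, which is one small part of the defense-in-depth the paragraph above describes.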
Impact on supply chain management
For a long time, companies had been troubled by the gap between the benefits which supply chain technology has and the solutions to deliver those benefits. However, the emergence of e-commerce has provided a more practical and effective way of delivering the benefits of the new supply chain technologies.
E-commerce has the capability to integrate all inter-company and intra-company functions, meaning that the three flows of the supply chain (physical flow, financial flow and information flow) can also be affected by it. Its effect on physical flows has improved how companies move products and manage inventory; for information flows, e-commerce has expanded companies' information-processing capacity; and for financial flows, it allows companies to adopt more efficient payment and settlement solutions.
In addition, e-commerce has a more sophisticated level of impact on supply chains. Firstly, performance gaps can be eliminated, since companies can identify gaps between different levels of the supply chain by electronic means. Secondly, as a result of the emergence of e-commerce, new capabilities such as implementing ERP systems (for example SAP ERP, Xero, or Megaventory) have helped companies manage operations with customers and suppliers, although these capabilities are still not fully exploited. Thirdly, technology companies continue to invest in new e-commerce software solutions in expectation of a return on investment. Fourthly, e-commerce helps to resolve issues that companies may find difficult to cope with, such as political barriers or cross-country differences. Finally, e-commerce provides companies a more efficient and effective way to collaborate with each other within the supply chain.
Impact on employment
E-commerce helps create new job opportunities in information-related services, software applications and digital products, but it also causes job losses. The areas with the greatest predicted job losses are retail, postal services, and travel agencies. The development of e-commerce will create jobs that require highly skilled workers to manage large amounts of information, customer demands, and production processes; in contrast, workers with poor technical skills cannot enjoy comparable wages. On the other hand, because e-commerce requires stocks sufficient to be delivered to customers on time, the warehouse becomes an important element. Warehouses need more staff to manage, supervise and organize them, so warehouse working conditions become a concern for employees.
Impact on customers
E-commerce brings convenience to customers, as they do not have to leave home and need only browse websites online, especially when buying products that are not sold in nearby shops. It can help customers buy a wider range of products and save time. Consumers also gain power through online shopping: they are able to research products and compare prices among retailers. Online shopping often provides sales promotions or discount codes, making it more cost-effective for customers. Moreover, e-commerce provides detailed product information that even in-store staff cannot match. Customers can also review and track their order history online.
E-commerce technologies cut transaction costs by allowing both manufacturers and consumers to bypass intermediaries. This is achieved by extending the search area for the best price deals and by group purchasing. The success of e-commerce at urban and regional levels depends on how local firms and consumers have adapted to it.
However, e-commerce lacks human interaction, especially for customers who prefer face-to-face contact. Customers are also concerned with the security of online transactions and tend to remain loyal to well-known retailers. In recent years, clothing retailers such as Tommy Hilfiger have started adding Virtual Fit platforms to their e-commerce sites to reduce the risk of customers buying the wrong sized clothes, although these vary greatly in their fit for purpose. When a customer regrets a purchase, the goods must be returned and refunded, a process that is inconvenient because the customer must pack and post the goods; if the products are expensive, large or fragile, safety issues also arise.
Impact on the environment
In 2018, e-commerce generated 1.3 million tons of container cardboard in North America, an increase from 1.1 million in 2017. Only 35 percent of North American cardboard manufacturing capacity is from recycled content; the recycling rate is 80 percent in Europe and 93 percent in Asia. Amazon, the largest user of boxes, has a strategy to cut back on packing material and has reduced packaging material used by 19 percent by weight since 2016. Amazon requires retailers to manufacture their product packaging in a way that does not require additional shipping packaging, and it has an 85-person team researching ways to reduce and improve its packaging and shipping materials.
Impact on traditional retail
E-commerce has been cited as a major force for the failure of major U.S. retailers in a trend frequently referred to as a "retail apocalypse." The rise of e-commerce outlets like Amazon has made it harder for traditional retailers to attract customers to their stores and forced companies to change their sales strategies. Many companies have turned to sales promotions and increased digital efforts to lure shoppers while shutting down brick-and-mortar locations. The trend has forced some traditional retailers to shutter their brick-and-mortar operations.
Distribution channels
E-commerce has grown in importance as companies have adopted pure-click and brick-and-click channel systems, which can be distinguished as follows:
Pure-click or pure-play companies are those that have launched a website without any previous existence as a firm.
Bricks-and-clicks companies are those existing companies that have added an online site for e-commerce.
Click-to-brick companies are online retailers that later open physical locations to supplement their online efforts.
E-commerce may take place on retailers' websites or mobile apps, or on e-commerce marketplaces such as Amazon or Alibaba's Tmall. Those channels may also be supported by conversational commerce, e.g. live chat or chatbots on websites. Conversational commerce may also be standalone, such as live chat or chatbots on messaging apps and via voice assistants.
Recommendation
The contemporary e-commerce trend recommends that companies shift from the traditional business model, focused on "standardized products, homogeneous markets and long product life cycles", to a new business model focused on "varied and customized products". E-commerce requires a company to be able to satisfy the varied needs of different customers and provide them with a wider range of products.
With more product choices, the product information that customers use to select items and meet their needs becomes crucial. To apply the principle of mass customization, the use of a recommender system is suggested. Such a system helps recommend appropriate products to customers and supports their decisions during the purchasing process. A recommender system can be driven by the top sellers on the website, the demographics of customers, or consumers' buying behavior. There are three main forms of recommendation: recommending products to customers directly, providing detailed product information, and showing other buyers' opinions or critiques. This benefits the consumer experience in the absence of physical shopping. In general, recommender systems are used to contact customers online and help them find the right products effectively and directly.
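The top-seller and buying-behavior approaches mentioned above can be sketched in a few lines of Python. The order data and function names below are hypothetical, and real recommenders use far larger datasets and far more sophisticated models:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase history: each entry is one customer's order.
orders = [
    {"laptop", "mouse"},
    {"laptop", "mouse", "keyboard"},
    {"phone", "case"},
    {"laptop", "keyboard"},
]

# "Top sellers": recommend the items purchased most often overall.
popularity = Counter(item for order in orders for item in order)

def top_sellers(n=3):
    return [item for item, _ in popularity.most_common(n)]

# "Buying behavior": recommend items most often co-purchased with a given item.
co_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def also_bought(item, n=2):
    scored = [(other, c) for (i, other), c in co_counts.items() if i == item]
    return [other for other, _ in sorted(scored, key=lambda t: -t[1])[:n]]
```

Here `top_sellers()` returns the most frequently bought items, and `also_bought("laptop")` ranks products that most often appear in the same orders as a laptop, a crude stand-in for "customers who bought this also bought" lists.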
E-commerce during COVID-19
In March 2020, global retail website traffic hit 14.3 billion visits signifying an unprecedented growth of e-commerce during the lockdown of 2020. Studies show that in the US, as many as 29% of surveyed shoppers state that they will never go back to shopping in person again; in the UK, 43% of consumers state that they expect to keep on shopping the same way even after the lockdown is over.
Retail e-commerce sales figures show that COVID-19 had a significant impact on e-commerce, with sales expected to reach $6.5 trillion by 2023.
See also
Comparison of free software e-commerce web application frameworks
Comparison of shopping cart software
Customer intelligence
Digital economy
E-commerce credit card payment system
Electronic bill payment
Electronic money
Non-store retailing
Paid content
Payments as a service
Types of e-commerce
Timeline of e-commerce
South Dakota v. Wayfair, Inc.
References
Further reading
External links
E-commerce
Electronics industry
Non-store retailing
Retail formats
Supply chain management |
9104854 | https://en.wikipedia.org/wiki/Mklivecd | mklivecd | mklivecd is a script for Linux distributions that allows a user to compile a "snapshot" of the current hard drive partition and all the data residing on it (settings, applications, documents, bookmarks, etc.) and compress it into an ISO 9660 CD image. This allows easy backup of a user's data and also makes it easy to create customized Linux distributions. Some Linux distributions, such as PCLinuxOS, include a graphical frontend for easier use of the script.
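The core "snapshot" idea, packing a whole directory tree into a single compressed image file, can be illustrated with a minimal Python sketch. This is not mklivecd itself (which is a shell script with distribution-specific options and produces a bootable ISO 9660/squashfs image); the code below only demonstrates the archive-a-tree-into-one-image concept on a throwaway directory, and all paths and names are invented for the example:

```python
import tarfile
import tempfile
from pathlib import Path

def snapshot(source: Path, image: Path) -> None:
    """Pack a directory tree into one compressed archive file
    (a stand-in for the bootable image a live-CD script would build)."""
    with tarfile.open(image, "w:gz") as tar:
        tar.add(source, arcname=source.name)

# Demo on a temporary directory instead of a real root partition.
root = Path(tempfile.mkdtemp())
(root / "etc").mkdir()
(root / "etc" / "hostname").write_text("livecd\n")

image = Path(tempfile.mkdtemp()) / "snapshot.tar.gz"
snapshot(root, image)
```

The archive is written outside the source tree so the snapshot does not try to include itself, a detail real remastering tools must also handle when imaging a live system.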
Used by
AmaroK Live CD
Dreamlinux
Mandriva Linux
Ruby on Rails Live CD
Unity Linux Live CD
See also
Live CD
Software remastering
Remastersys, a similar tool (for Debian/Ubuntu)
List of remastering software
External links
mklivecd source code
mklivecd project page (obsolete)
Backup software for Linux |
26224663 | https://en.wikipedia.org/wiki/Ciright%20Systems | Ciright Systems | Ciright Systems is an information technology services company based in West Conshohocken, Pennsylvania, United States.
Its flagship product is a Platform As A Service (PaaS) based Interoperable Cloud Platform ("The Ciright Platform") that provides office and business automation to small and medium-sized businesses. The Ciright Platform also provides immediate mobile extendability to an enterprise's legacy system.
Ciright Systems is a wholly owned subsidiary of Ciright, Inc. The company also has operations in Ahmedabad, India (Ciright Enterprise Pvt. Ltd., commonly referred to as "Ciright India").
History
Ciright Systems (then known as Viewpoint Software) was founded in 1993 by Joseph Callahan as a company that initially developed wireless applications on tablet devices and incorporated advances in pen computing and wireless technologies. Ciright has built solutions for handheld computers that are platform independent.
Ciright was founded in 1993, coding sales force and field worker applications for multiple specific vertical markets on the PenRight! Platform. A user could interface with a digital screen with a pen, physically mimicking the relationship of pen and paper.
In 1994, Ciright developed business process applications for the EO Personal Communicator in the GO Operating system. Partnered with AT&T Ciright positioned enterprise efficiency applications through the reduction of paper and the enhancement of communication as the EO tablet was also a phone capable of network communication of data as well as faxing technology integrated to OCR systems.
In 1995, the company was incorporated. That year, Ciright developed While You Were Out, pen-based executive messaging software that allowed an executive to receive his messages on a digital tablet device.
In 2000, Ciright developed a pen based solution provided for the Slate Vision hardware platform. Later, Ciright expanded to include professional services such as engineering firms, architects, and construction management. Finally in 2006 the system was migrated from a client–server environment into a total web based system.
In 2008, Ciright developed an application for the iPhone, available for the iPhone 3GS and iPhone 4 models. This app provides the user mobile access to their Ciright Platform data. In 2009, Ciright was awarded a multimillion-dollar contract from the Philadelphia Housing Authority to deploy a real time SCADA energy information monitoring and management system.
Ciright began development for an application on the iPad in 2010.
The Ciright Platform 13.2
The Ciright Platform (Version 13.2) automates the enterprise by optimizing data entry via wireless networks, so that focus is placed on time-sensitive tasks correlated to industry-specific sales pipeline categories. This process enables enterprise management and leadership to track yesterday's business, monitor today's efficiency, and plan for tomorrow's growth.
Ciright is designed to manage and grow a business's sales processes and operations. It leverages the information available to an organization and current technology to enhance management's ability to identify the path to optimal profit and growth, and to direct the enterprise's sales force.
Scalability
The Ciright kernel supports scalability, and Ciright solutions are industry agnostic.
The Ciright Platform eliminates the need for business management software and software licensing; including, but not limited to, accounting software, desktop document, presentation, and spreadsheet programs, and business management packages.
Security
The Ciright platform incorporates 512-bit encryption at the device level, meaning that the computer which initially pushed a document to the cloud is the only computer able to access it later. All data on the Ciright platform is thereby protected, and it is also backed up in perpetuity.
U.S. Patent US8363618 B2 Content distribution platform
Abstract: A system is adapted to manage the distribution of content to one or more cooperating media/substrates. The system receives data representative of environment conditions for one or more cooperating media/substrates adapted to display digital content. The media/substrates may be located in public spaces. The system compares the received data representative of environment conditions with selection criteria to identify content for distribution to the media/substrates. The selected content is distributed to the one or more cooperating media/substrates.
Interactive digital signage pilot program
CDM was issued U.S. Patent No. 8,363,618 B2 on 29 January 2013 enabling the Pilot Program to give companies the unique opportunity to engage their target audiences in never-before possible ways. The VertNext Platform allows digital signage systems to measure, analyze, and re-render content in real time based on changing localized variables, including weather, traffic, sports scores, etc., as well as facial expressions, gestures, and anonymous demographics.
References
American companies established in 1993
Software companies based in Pennsylvania
Companies based in Philadelphia
Privately held companies based in Pennsylvania
Companies based in Conshohocken, Pennsylvania
1993 establishments in Pennsylvania
Software companies of the United States |
56019849 | https://en.wikipedia.org/wiki/Firewall%20%28Person%20of%20Interest%29 | Firewall (Person of Interest) | "Firewall" is the twenty-third episode and season finale of the first season of the American television drama series Person of Interest. It is the 23rd overall episode of the series and is written by Greg Plageman & Jonathan Nolan and directed by Richard J. Lewis. It aired on CBS in the United States and on CTV in Canada on May 17, 2012.
Plot
Finch (Michael Emerson) calls Carter (Taraji P. Henson), asking her to help Reese (Jim Caviezel). However, she is debriefed by Agent Donnelly (Brennan Brown) on their newest information: they've managed to locate Reese, who is seen escorting a woman in the streets.
A day ago, Reese and Finch receive a new number: Caroline Turing (Amy Acker), a psychologist. Reese begins surveillance on her by posing as a patient. Meanwhile, Fusco (Kevin Chapman) is summoned by HR and meets with councilman Larsson (Wayne Duvall) and Officer Patrick Simmons (Robert John Burke), who state that they will carry out a murder for hire: an anonymous client has hired them to kill Turing, and Fusco is instructed to obstruct any investigation by the NYPD.
Reese gets in contact with Zoe Morgan (Paige Turco), who obtains information regarding any threats against Turing. Reese saves Turing from assailants in the street and takes her to a hotel for safety. His face is detected by the cameras, and the FBI launches an investigation and deploys a strike team to catch him. The hotel is surrounded by FBI operatives looking for Reese as well as HR members looking for Turing. With help from Carter, Reese and Turing manage to evade both groups. While Finch assists in hacking the system, Alicia Corwin (Elizabeth Marvel) breaks into the Library.
Zoe interrogates one of the assailants and finds that the person was blackmailed into threatening Turing by an unknown benefactor. Reese gets Turing out of the hotel through a water plant to reach Finch while he holds off the HR shooters. Carter and Fusco arrive and join him in a high-speed pursuit of HR, but Reese uses a detonator he hid earlier to blow up HR's car. Finch waits for Turing in his car when he is confronted at gunpoint by Corwin, who demands that he shut down the Machine.
Suddenly, Corwin is shot in the head by Turing. Just as Zoe sneaks into Turing's office, her records are being deleted, revealing that Turing is in fact "Root". Root was the person who hired HR to plan her own murder, which would make her a person of interest; the real target was Finch. Reese arrives at Finch's location only to find Corwin's corpse. Fusco sends evidence about HR to the NYPD, leading to the arrest of the mole inside the department. Reese goes to a public surveillance camera and talks directly to the Machine, asking it to help him find Finch. A payphone rings nearby and Reese answers it.
Production
Writing
Co-writer and executive producer Greg Plageman deemed the scene where Root psychoanalyzes Reese as the toughest scene he ever wrote, saying "Both characters are lying about their true identity while trying to elicit personal information about the other. The fact that Root manages to hit so close to home about Reese's true nature is as fun as it is unsettling for him. And it's even more fun to watch in hindsight when you realize who she really is."
Reception
Viewers
In its original American broadcast, "Firewall" was seen by an estimated 13.47 million household viewers and gained a 2.5/7 ratings share among adults aged 18–49, according to Nielsen Media Research. This was a 4% increase in viewership from the previous episode, which was watched by 12.96 million viewers with a 2.6/7 in the 18-49 demographics. With these ratings, Person of Interest was the most watched show on CBS for the night, beating The Mentalist and Rules of Engagement, second in its timeslot and fourth for the night in the 18-49 demographics, behind Grey's Anatomy, a rerun of The Big Bang Theory, and American Idol.
Critical reviews
"Firewall" received mostly positive reviews from critics. Matt Fowler of IGN gave the episode a "great" 8.5 out of 10 and wrote, "'Firewall' was exciting, sure. And it felt like a lot of things were converging/culminating on an FBI hunt/HR level. But I think the biggest thing it had going for it, aside from the 'Root' reveal at the end, was Fusco and Carter finally finding out that they're both on the same side. In many respects, this first season was an exercise in team-building; creating a full pre-crime fighting support system while also enriching the bond between Reese and Finch. Now they can head into Season 2 as more of a force."
Phil Dyess-Nugent of The A.V. Club gave the episode a "B+" grade and wrote, "Person Of Interest has devoted some of its recent episodes to humanizing Reese and Finch by revealing what shame they keep bottled up inside and what losses they've suffered, and while some of this has been effective, the show might be bloodless and ice-cold if it weren't for the characters on the margins, who aren't as free of doubt and technical perfection as its heroes. Few series characters have been introduced in as imperfect form as Kevin Chapman's Fusco, but over the course of the season, he's grown from a singularly charmless, one-dimensional corrupt cop into a man relearning how good it feels to do good, and in the process, turned into the heart of the show."
Keysha Couzens of TV Overmind wrote "This first season of Person of Interest has been driven almost completely by the omnipotent presence of technology and its many uses to be both a blessing and a bane to man's existence. For the purposes of the series, it brought Reese and Finch together to help people, and now it has separated them in the season's final moments as the Machine has finally fallen into the wrong hands."
Luke Gelineau of TV Equals wrote "This is it, you guys! Person of Interest is ending its solid first season with tonight's episode: 'Firewall'. Reese and Finch have been through a lot in the last 22 episodes, and it looks like it's all coming to a head in tonight’s finale."
Sean McKenna of TV Fanatic gave the episode a 4.9 star rating out of 5 and wrote "The finale delivered a fantastic cap to that season illustrating everything that's great about the show, while simply turning it on its head, incorporating clever twists, and showing that it's only scratched the surface of its capabilities."
References
External links
"Firewall" at CBS
"Firewall" at TV Guide
Person of Interest (TV series) episodes
2012 American television episodes
Television episodes written by Jonathan Nolan |
50984053 | https://en.wikipedia.org/wiki/Tesla%20Autopilot | Tesla Autopilot | Tesla Autopilot is a suite of advanced driver-assistance system (ADAS) features offered by Tesla that amounts to Level 2 vehicle automation. Its features are lane centering, traffic-aware cruise control, automatic lane changes, semi-autonomous navigation on limited access freeways, self-parking, and the ability to summon the car from a garage or parking spot. In all of these features, the driver is responsible and the car requires constant supervision. The company claims the features reduce accidents caused by driver negligence and fatigue from long-term driving. In October 2020, Consumer Reports called Tesla Autopilot "a distant second" in driver assistance systems (behind Cadillac's Super Cruise), although it was ranked first in the "Capabilities and Performance" and "Ease of Use" category.
As an upgrade to the base Autopilot capabilities, the company's stated intent is to offer SAE Level 5 (full autonomous driving) at a future time, acknowledging that regulatory and technical hurdles must be overcome to achieve this goal. As of April 2020, most experts believe that Tesla vehicles lack the necessary hardware for fully autonomous driving. In October 2020, Tesla launched a Full Self-Driving beta program in the United States and enrolled customers; it is being tested on public roads by 60,000 vehicles of employees and customers. Some industry observers criticized Tesla's decision to use untrained consumers to validate the beta software as dangerous and irresponsible. Similarly, collisions and deaths involving Tesla cars with Autopilot engaged have drawn the attention of the press and government agencies. In May 2021, Tesla was ranked last for both strategy and execution in the autonomous driving sector by Guidehouse Insights. Elon Musk, Tesla's CEO, has for years made inaccurate predictions of when Tesla would achieve SAE Level 5.
History
Elon Musk first discussed the Tesla Autopilot system publicly in 2013, noting that "Autopilot is a good thing to have in planes, and we should have it in cars." Over the ensuing decade, Autopilot went through a series of hardware and software enhancements, gradually approaching the goal of full autonomy, which as of January 2021, remained a work in progress.
In October 2014, Tesla offered customers the ability to pre-purchase Autopilot that was not designed for self-driving. Initial versions were built in partnership with Mobileye, but Mobileye ended the partnership in July 2016 because Tesla "was pushing the envelope in terms of safety".
Tesla cars manufactured after September 2014 had the initial hardware (hardware version 1 or HW1) that supported Autopilot. The first Autopilot software release came in October 2015 as part of Tesla software version 7.0. Version 7.1 removed some features to discourage risky driving.
Version 8.0 processed radar signals to create a point cloud similar to lidar to help navigate in low visibility. In November 2016, Autopilot 8.0 was updated to encourage drivers to grip the steering wheel. By November 2016, Autopilot had operated for 300 million miles (500 million km).
In October 2016, Autopilot sensors and computing hardware transitioned to hardware version 2 (HW2). Tesla used the term Enhanced Autopilot (EA) to refer to novel HW2 capabilities. In February 2017 Autopilot gained the ability to navigate freeways, to change lanes without driver input, to transition from one freeway to another, and to exit the freeway. It included traffic-aware cruise control, autosteer on divided highways, and autosteer on 'local roads' up to a speed of 45 mph. Software version 8.1 for HW2 arrived in March 2017, providing HW2 cars software parity with HW1 cars. The following August, Tesla announced hardware version 2.5 (HW2.5).
In March 2019, Tesla transitioned again, to hardware version 3 (HW3). To comply with the new United Nations Economic Commission for Europe regulation related to automatically commanded steering function, Tesla provided an updated Autopilot in May limited to Europe. In September, Tesla released software version 10 to Early Access Program (EAP) testers, citing improvements in driving visualization and automatic lane changes.
In September 2020, Tesla reintroduced the term Enhanced Autopilot to designate the subset of features applying to highway travel, parking, and summoning, whereas the Full-Self Driving option included navigation on city roads. Tesla released a "beta" version of its Full Self-Driving software in the United States in October 2020 to EAP testers.
Pricing
As of 2016, the initial price of basic Autopilot was $5,000, and Full Self-Driving (FSD) was an additional $3,000. Later, basic Autopilot was included in every Tesla, and the additional FSD cost $8,000, rising to $10,000 in 2021 and $12,000 in 2022. The company began offering a monthly subscription for FSD in 2021, at a price of $200 per month. As the price increased, the fraction of owners who purchased it steadily declined, to 12% in 2021, down from 22% in 2020 and 37% in 2019.
Executive turnover
There has been reportedly high turnover in the role for leading the Autopilot team, with as many as five executives holding the position in a 4-year period.
2015–2016: Sterling Anderson left to start a competing company.
2017–2017: Chris Lattner left after six months due to cultural fit issues.
2017–2018: Jim Keller left to join Intel.
Starting in 2018, the executives are:
Andrej Karpathy (Director of Artificial Intelligence), leading the Autopilot software team, alongside Ashok Elluswamy (Director, Autopilot Software) and Milan Kovac (Director, Autopilot Software Engineering).
Pete Bannon, leading Autopilot hardware.
Full Self-Driving
Full Self-Driving (FSD) is an upgrade package to Autopilot offering additional ADAS features. The beta FSD software is available to employees, early access program members, and more than ten thousand opt-in users who met certain safety score criteria.
Approach
Tesla's approach to try to achieve SAE Level 5 is to train a neural network using the behavior of hundreds of thousands of Tesla drivers using chiefly visible light cameras and information from components used for other purposes in the car (the coarse-grained two-dimensional maps used for navigation; the ultrasonic sensors used for parking, etc.) Tesla has made a deliberate decision to not use lidar, which Elon Musk has called "stupid, expensive and unnecessary". This makes Tesla's approach markedly different from that of other companies like Waymo and Cruise which train their neural networks using the behavior of highly trained drivers, and are additionally relying on highly detailed (centimeter-scale) three-dimensional maps and lidar in their autonomous vehicles.
According to Elon Musk, full autonomy is "really a software limitation: The hardware exists to create full autonomy, so it's really about developing advanced, narrow AI for the car to operate on." The Autopilot development focus is on "increasingly sophisticated neural nets that can operate in reasonably sized computers in the car". According to Musk, "the car will learn over time", including from other cars.
Tesla's software has been trained based on 3 billion miles driven by Tesla vehicles on public roads. Alongside tens of millions of miles on public roads, competitors have trained their software on tens of billions of miles in computer simulations, as of January 2020. In terms of computing hardware, Tesla designed a self-driving computer chip that has been installed in its cars since March 2019 and also developed a neural network training supercomputer; other vehicle automation companies such as Waymo regularly use custom chipsets and neural networks as well.
Criticism
Tesla's self-driving strategy has been criticized as dangerous and obsolete as it was abandoned by other companies years ago. Most experts believe that Tesla's approach of trying to achieve autonomous vehicles by eschewing high-definition maps and lidar is not feasible. Auto analyst Brad Templeton has criticized Tesla's approach by arguing, "The no-map approach involves forgetting what was learned before and doing it all again." In a May 2021 study by Guidehouse Insights, Tesla was ranked last for both strategy and execution in the autonomous driving sector. Some news reports in 2019 state "practically everyone views [lidar] as an essential ingredient for self-driving cars" and "experts and proponents say it adds depth and vision where camera and radar alone fall short."
An August 2021 study conducted by Missy Cummings et al. found that three Tesla Model 3 cars exhibited "significant between and within vehicle variation on a number of metrics related to driver monitoring, alerting, and safe operation of the underlying autonomy... suggest[ing] that the performance of the underlying artificial intelligence and computer vision systems was extremely variable."
In July 2020, German authorities ruled that Tesla misled consumers regarding the "abilities of its automated driving systems" and banned it from using certain marketing language implying autonomous driving capabilities.
In September 2021, legal scholars William Widen and Philip Koopman argued that Tesla has misrepresented FSD as SAE Level 2 to "avoid regulatory oversight and permitting processes required of more highly automated vehicles". Instead, they argued FSD should be considered a SAE Level 4 technology and urged state Departments of Transportation in the U.S. to classify it as such since publicly available videos show that "beta test drivers operate their vehicles as if to validate SAE Level 4 (high driving automation) features, often revealing dramatically risky situations created by use of the vehicles in this manner."
Predictions and deployment
In March 2015, speaking at an Nvidia conference, Musk stated:
"I don't think we have to worry about autonomous cars because it's a sort of a narrow form of AI. It's not something I think is very difficult. To do autonomous driving that is to a degree much safer than a person, is much easier than people think." "... I almost view it like a solved problem."
In December 2015, Musk predicted "complete autonomy" by 2018. At the end of 2016, Tesla expected to demonstrate full autonomy by the end of 2017, and in April 2017, Musk predicted that in around two years, drivers would be able to sleep in their vehicle while it drives itself. In 2018, Tesla revised its target for demonstrating full autonomy to the end of 2019.
In February 2019, Musk stated that Tesla's FSD capability would be "feature complete" by the end of 2019.
In January 2020, Musk claimed the FSD software would be "feature complete" by the end of 2020, adding that feature complete "doesn't mean that features are working well". In August 2020, Musk stated that 200 software engineers, 100 hardware engineers and 500 "labelers" were working on Autopilot and FSD. In early 2021, Musk stated that Tesla would provide SAE Level 5 autonomy by the end of 2021 and that Tesla plans to release a monthly subscription package for FSD in 2021. An email conversation between Tesla and the California Department of Motor Vehicles retrieved via a Freedom of Information Act request by PlainSite contradicts Musk's forward-looking statement.
Full Self-Driving beta
In October 2020, Tesla released a beta version of its FSD software to early access program testers, a small group of users in the United States. Musk stated that testing of the FSD beta "[w]ill be extremely slow [and] cautious" and would "be limited to a small number of people who are expert & careful drivers". The release of the beta program renewed concern over whether the technology is ready for testing on public roads. In January 2021, "nearly 1,000" employees and customers were testing the beta FSD software, growing to a couple of thousand by May 2021. In October 2021, Tesla began a wide release of the FSD beta to about 1,000 more drivers in the US, making the beta accessible to Tesla drivers who achieved a 100/100 on a proprietary safety-scoring system.
As of November 2021, there were about 11,700 FSD beta testers and about 150,000 vehicles using Tesla's safety score system; participation in the FSD beta has since grown to 60,000 users.
Tesla Dojo
Tesla Dojo (or Project Dojo) is an artificial intelligence (AI) neural network training supercomputer announced by Musk on Tesla's AI Day on August 19, 2021. It had previously been mentioned by Musk in April 2019 and August 2020. According to Musk, Project Dojo will be operational in 2022.
The Dojo supercomputer will use Tesla D1 chips, designed and produced by Tesla. According to Tesla's senior director of Autopilot hardware, Ganesh Venkataramanan, the chip uses a "7-nanometer manufacturing process, with 362 teraflops of processing power", and "Tesla places 25 of these chips onto a single 'training tile', and 120 of these tiles come together... amounting to over an exaflop [a million teraflops] of power". Tesla claims that Dojo will be the fastest AI-training computer among competing offerings from Intel and Nvidia. According to Nvidia, Tesla's current AI-training center uses 720 nodes of eight Nvidia A100 Tensor Core GPUs (5,760 GPUs in total) for up to 1.8 exaflops of performance.
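The aggregate figures quoted above can be cross-checked with simple arithmetic. The sketch below rolls the reported per-chip numbers up to the "over an exaflop" claim and derives the Nvidia comparison figure; all values are as reported by Tesla and Nvidia, not independently verified.

```python
# Cross-check of the Dojo compute figures quoted above (reported values, not verified).
TFLOPS_PER_D1_CHIP = 362   # per Tesla's stated D1 spec
CHIPS_PER_TILE = 25
TILES = 120

dojo_tflops = TFLOPS_PER_D1_CHIP * CHIPS_PER_TILE * TILES
dojo_exaflops = dojo_tflops / 1_000_000  # 1 exaflop = one million teraflops
print(f"Dojo: {dojo_tflops:,} TFLOPS ≈ {dojo_exaflops:.2f} EFLOPS")  # ≈ 1.09 EFLOPS

# Nvidia's description of Tesla's existing A100-based training cluster:
NODES = 720
GPUS_PER_NODE = 8
total_gpus = NODES * GPUS_PER_NODE
print(f"A100 cluster: {total_gpus:,} GPUs, up to 1.8 EFLOPS (per Nvidia)")
```

The roll-up (362 × 25 × 120 ≈ 1.09 million teraflops) is consistent with the "over an exaflop" wording in the quote.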
Gartner research vice president Chirag Dekate said, "The Tesla Dojo is an AI-specific supercomputer designed to accelerate machine learning and deep learning activities. Its lower precision focus limits applicability to a broader high-performance computing (HPC) context." He also said that Dojo's reported capabilities do not grant it true HPC status, largely because it has not been tested using the same standards as Fugaku and other supercomputers. Dylan Patel of SemiAnalysis suggests that while the input/output is impressive, the amount of memory is inadequate, and the two most difficult issues (the software compiler and tile-to-tile interconnects) remain to be solved.
In September 2021, Tesla released a Dojo whitepaper.
Driving features
Tesla's Autopilot is classified as Level 2 under the SAE International six levels (0 to 5) of vehicle automation. At this level, the car can act autonomously, but requires the driver to monitor the driving at all times and be prepared to take control at a moment's notice. Tesla's owner's manual states that Autopilot should not be used on city streets or on roads where traffic conditions are constantly changing; however, some current FSD capabilities ("traffic and stop sign control (beta)"), and future FSD capabilities ("autosteer on city streets") are advertised for city streets.
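For reference, the six SAE levels mentioned above can be summarized informally as follows; this is a paraphrase of general descriptions of the SAE J3016 taxonomy, not the standard's exact wording.

```python
# Informal summary of the SAE J3016 driving-automation levels (paraphrase,
# not the standard's text). Tesla Autopilot is classified at Level 2.
SAE_LEVELS = {
    0: "No driving automation",
    1: "Driver assistance: steering OR speed support",
    2: "Partial automation: steering AND speed support, but the driver "
       "must supervise at all times and be ready to take over",
    3: "Conditional automation: driver must take over when requested",
    4: "High automation: no driver needed within a defined operating domain",
    5: "Full automation: no driver needed under any conditions",
}
print(SAE_LEVELS[2])
```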
Hardware
Summary
Notes
Hardware 1
Vehicles manufactured after late September 2014 are equipped with a camera mounted at the top of the windshield, forward looking radar in the lower grille and ultrasonic acoustic location sensors in the front and rear bumpers that provide a 360-degree view around the car. The computer is the Mobileye EyeQ3. This equipment allows the Tesla Model S to detect road signs, lane markings, obstacles, and other vehicles.
Automatic lane changes are initiated by the driver activating the turn signal when it is safe to do so (a requirement stemming from the ultrasonic sensors' limited 16-foot range), after which the system completes the lane change. In 2016, HW1 did not detect pedestrians or cyclists, and while Autopilot detects motorcycles, there have been two instances of HW1 cars rear-ending motorcycles.
Upgrading from Hardware 1 to Hardware 2 is not offered as it would require substantial work and cost.
Hardware 2
HW2, included in all vehicles manufactured after October 2016, includes an Nvidia Drive PX 2 GPU for CUDA-based GPGPU computation. Tesla claimed that HW2 provided the necessary equipment to allow FSD capability at SAE Level 5. The hardware includes eight surround cameras and 12 ultrasonic sensors, in addition to forward-facing radar with enhanced processing capabilities. The Autopilot computer is replaceable to allow for future upgrades. The radar is able to observe beneath and ahead of the vehicle in front of the Tesla; the radar can see vehicles through heavy rain, fog or dust. Tesla claimed that the hardware was capable of processing 200 frames per second.
When "Enhanced Autopilot" was enabled in February 2017 by the v8.0 (17.5.36) software update, testing showed the system was limited to using one of the eight onboard cameras—the main forward-facing camera. The v8.1 software update released a month later enabled a second camera, the narrow-angle forward-facing camera.
Hardware 2.5
In August 2017, Tesla announced that HW2.5 included a secondary processor node to provide more computing power and additional wiring redundancy to slightly improve reliability; it also enabled dashcam and sentry mode capabilities.
Hardware 3
According to Tesla's director of artificial intelligence, Andrej Karpathy, Tesla had by Q3 2018 trained large neural networks that worked but could not be deployed to Tesla vehicles built up to that time, because those vehicles lacked sufficient computational resources. HW3 provides the necessary resources to run these neural networks.
HW3 includes a custom Tesla-designed system on a chip fabricated using a 14 nm process by Samsung. Jim Keller and Pete Bannon, among other architects, led the project beginning in February 2016, with the chip developed over the course of 18 months. Tesla claimed that the new system processes 2,300 frames per second (fps), a 21× improvement over the 110 fps image-processing capability of HW2.5. The firm described it as a "neural network accelerator". Each chip is capable of 36 trillion operations per second, and there are two chips for redundancy. The company claimed that HW3 was necessary for FSD, but not for "enhanced Autopilot" functions.
The first availability of HW3 was April 2019. Customers with HW2 or HW2.5 who purchased the FSD package are eligible for an upgrade to HW3 without cost.
Tesla claims HW3 has 2.5× improved performance over HW2.5 with 1.25× higher power and 0.2× lower cost. HW3 features twelve ARM Cortex-A72 CPUs operating at 2.6 GHz, two Neural Network Accelerators operating at 2 GHz and a Mali GPU operating at 1 GHz.
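The comparative figures in this section can be combined into derived metrics. A minimal sketch using only the Tesla-reported numbers quoted above (not independent measurements):

```python
# Derived metrics from the HW3 figures quoted above (Tesla-reported values).
hw25_fps, hw3_fps = 110, 2300
speedup = hw3_fps / hw25_fps
print(f"Frame-rate improvement: {speedup:.1f}x")  # ≈ 20.9x, rounded to 21x in Tesla's claim

# Tesla's stated ratios vs HW2.5: 2.5x performance at 1.25x power implies
# a doubling of performance per watt.
perf_ratio = 2.5
power_ratio = 1.25
perf_per_watt = perf_ratio / power_ratio
print(f"Performance per watt vs HW2.5: {perf_per_watt:.1f}x")  # 2.0x
```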
Tesla Vision
In late May 2021, Elon Musk posted to Twitter that "Pure Vision Autopilot" was starting to be implemented. The system, which Tesla brands "Tesla Vision", eliminates the forward-facing radar from the Autopilot hardware package on Model 3 and Model Y vehicles built for the North American market and delivered in and after May 2021. For vehicles without the forward radar, temporary limitations were applied to certain features such as Autosteer, and other features (Smart Summon and Emergency Lane Departure Avoidance) were disabled, but Tesla promised to restore the features "in the weeks ahead ... via a series of over-the-air software updates". In response, the U.S. National Highway Traffic Safety Administration (NHTSA) rescinded the agency's check marks for forward collision warning, automatic emergency braking, lane departure warning, and dynamic brake support, applicable to Model 3 and Model Y vehicles built on or after April 27, 2021. Consumer Reports delisted the Model 3 from its Top Picks, and the Insurance Institute for Highway Safety (IIHS) announced plans to delist the Model 3 as a Top Safety Pick+, but after further testing, both organizations restored those designations.
In December 2021, the New York Times reported that Musk was the decisionmaker behind the camera-only approach and had "repeatedly told members of the Autopilot team that humans could drive with only two eyes and that this meant cars should be able to drive with cameras alone." Several autonomous vehicle experts were quoted denouncing the analogy.
Comparisons
In 2018, Consumer Reports rated Tesla Autopilot as second best out of four (Cadillac, Tesla, Nissan, Volvo) "partially automated driving systems". Autopilot scored highly for its capabilities and ease of use, but was worse at keeping the driver engaged than the other manufacturers' systems. Consumer Reports also found multiple problems with Autopilot's automatic lane change function, such as cutting too closely in front of other cars and passing on the right.
In 2018, the Insurance Institute for Highway Safety compared Tesla, BMW, Mercedes and Volvo "advanced driver assistance systems" and stated that the Tesla Model 3 experienced the fewest incidents of crossing over a lane line, touching a lane line, or disengaging.
In February 2020, Car and Driver compared Cadillac Super Cruise, comma.ai and Autopilot. They called Autopilot "one of the best", highlighting its user interface and versatility, but criticizing it for swerving abruptly.
In June 2020, Digital Trends compared Cadillac Super Cruise self-driving and Tesla Autopilot. The conclusion: "Super Cruise is more advanced, while Autopilot is more comprehensive."
In October 2020, the European New Car Assessment Program gave the Tesla Model 3 Autopilot a score of "moderate".
Also in October 2020, Consumer Reports evaluated 17 driver assistance systems, and concluded that Tesla Autopilot was "a distant second" behind Cadillac's Super Cruise, although Autopilot was ranked first in the "Capabilities and Performance" and "Ease of Use" categories.
In February 2021, a MotorTrend review compared Super Cruise and Autopilot and said Super Cruise was better, primarily due to safety.
Safety concerns
The National Transportation Safety Board (NTSB) criticized Tesla's lack of system safeguards in a fatal 2018 Autopilot crash in California, and for failing to foresee and prevent "predictable abuse" of Autopilot. The Center for Auto Safety and Consumer Watchdog called for federal and state investigations into Autopilot and Tesla's marketing of the technology, which they believe is "dangerously misleading and deceptive", giving consumers the false impression that their vehicles are self-driving or autonomous. UK safety experts called Tesla's Autopilot "especially misleading". A 2019 IIHS study showed that the name "Autopilot" causes more drivers to misperceive behaviors such as texting or taking a nap to be safe, versus similar level 2 driver-assistance systems from other car companies. Tesla's Autopilot and FSD features were criticized in a May 2020 report published on ScienceDirect titled "Autonowashing: The Greenwashing of Vehicle Automation".
In June 2021, the NHTSA announced an order requiring automakers to report crashes involving vehicles equipped with ADAS features in the United States. This order came amid increased regulatory scrutiny of such systems, especially Tesla Autopilot. An MIT study published in September 2021 found that Autopilot is not as safe as Tesla claims, and led to drivers becoming inattentive.
Driver monitoring
Drivers have been found sleeping at the wheel, driving under the influence of alcohol, and doing other inappropriate tasks with Autopilot engaged. Initially, Tesla decided against using driver-monitoring options to limit such activities. It was not until late May 2021 that an over-the-air software update activated the cabin camera to monitor drivers using Autopilot, starting with new Model 3 and Model Y cars (the first vehicles in the switch to Tesla Vision). Model S and Model X cars made before 2021 do not have a cabin camera and therefore cannot offer such monitoring, although the refreshed versions are expected to have one. A review of the in-cabin camera-based monitoring system by Consumer Reports found that drivers could still use Autopilot even when looking away from the road or using their phones, and could also enable FSD beta software "with the camera covered."
Detecting stationary vehicles at speed
Autopilot may not detect stationary vehicles; the manual states: "Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over [a certain speed] and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead." This has led to numerous crashes with stopped emergency vehicles. The same limitation applies to any car equipped only with adaptive cruise control or automated emergency braking (for example, Volvo Pilot Assist).
Dangerous and unexpected behavior
In a 2019 Bloomberg survey, hundreds of Tesla owners reported dangerous behaviors with Autopilot, such as phantom braking, veering out of lane, or failing to stop for road hazards. Regarding phantom braking specifically, The Washington Post published an article in February 2022 detailing a surge in complaints to NHTSA over false activations of the automatic emergency-braking system. Autopilot users have also reported the software crashing and turning off suddenly, collisions with off-ramp barriers, radar failures, unexpected swerving, tailgating, and uneven speed changes.
Ars Technica notes that the brake system tends to engage later than some drivers expect. One driver claimed that Tesla's Autopilot failed to brake, resulting in collisions, but Tesla pointed out that the driver had deactivated the car's cruise control prior to the crash. Ars Technica also notes that while lane changes may be semi-automatic (with Autopilot on, the car may change lanes without driver input if it detects slow-moving cars or needs to stay on route), the driver must show the car that he or she is paying attention by touching the steering wheel before the car makes the change. In 2019, Consumer Reports found that Tesla's automatic lane-change feature is "far less competent than a human driver".
In October 2021, version 10.3 of the "Full Self-Driving" beta software was released with different driving profiles to control vehicle behavior, branded 'Chill', 'Average', and 'Assertive'; the 'Assertive' profile attracted negative coverage in January 2022 for advertising that the car "may perform rolling stops" (passing through stop signs at up to 5.6 mph), make frequent lane changes, and follow other vehicles more closely. On February 1, after the NHTSA advised Tesla that failing to stop for a stop sign can increase the risk of a crash and threatened "immediate action" over "intentional design choices that are unsafe", Tesla recalled nearly 54,000 vehicles to disable the rolling-stop behavior. The recall was implemented through a software update.
Regulation
A spokesman for the NHTSA said that "any autonomous vehicle would need to meet applicable federal motor vehicle safety standards" and that the NHTSA "will have the appropriate policies and regulations in place to ensure the safety of this type of vehicles". On February 1, 2021, Robert Sumwalt, chair of the NTSB, wrote a letter to NHTSA regarding that agency's "Framework for Automated Driving System Safety", which had been published for comment in December 2020. In the letter, Sumwalt recommended that NHTSA include user monitoring as part of the safety framework, and reiterated that "Tesla's lack of appropriate safeguards and NHTSA's inaction" on the NTSB's recommendation "that NHTSA develop a method to verify that manufacturers of vehicles equipped with Level 2 incorporate system safeguards that limit the use of automated vehicle control systems to the conditions for which they were designed" were contributing causes of a fatal crash of a vehicle in Delray Beach, Florida.
NHTSA announced Standing General Order 2021-01 on June 29, 2021. Under this General Order, manufacturers and operators of vehicles equipped with advanced driver assistance systems (ADAS, SAE J3016 Level 2) or automated driving systems (ADS, SAE Level 3 or higher) are required to report crashes. An amended order was issued and became effective on August 12. Reporting is limited to crashes in which the ADAS or ADS was engaged within 30 seconds prior to the crash and which involve an injury requiring hospitalization, a fatality, a vehicle being towed from the scene, an air bag deployment, or a "vulnerable road user" (e.g., a pedestrian or bicyclist); these crashes must be reported to NHTSA within one calendar day, with an updated report required within 10 calendar days.
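The reporting criteria in the amended order can be read as a simple decision rule. The sketch below is an illustrative paraphrase of the criteria as described above; the function and field names are hypothetical, not taken from the order's actual reporting forms.

```python
# Illustrative paraphrase of the crash-reporting rule in NHTSA Standing General
# Order 2021-01 as amended (hypothetical field names, not the order's forms).
def is_reportable(engaged_within_30s: bool, hospitalization: bool,
                  fatality: bool, towed: bool, airbag_deployed: bool,
                  vulnerable_road_user: bool) -> bool:
    """A crash must be reported within one calendar day if the ADAS/ADS was
    engaged within 30 seconds before the crash AND any severity trigger applies."""
    if not engaged_within_30s:
        return False
    return any([hospitalization, fatality, towed,
                airbag_deployed, vulnerable_road_user])

# Airbag deployed with the system engaged shortly before impact -> reportable
print(is_reportable(True, False, False, False, True, False))   # True
# Same severity, but the system was not engaged within 30 s -> not covered
print(is_reportable(False, False, False, False, True, False))  # False
```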
Court cases
Tesla's Autopilot was the subject of a class action suit brought in 2017 that claimed the second-generation Enhanced Autopilot system was "dangerously defective". The suit was settled in 2018; owners who had paid in 2016 and 2017 to equip their cars with the updated Autopilot software were compensated between $20 and $280 for the delay in implementing Autopilot 2.0.
In July 2020, a German court ruled that Tesla made exaggerated promises about its Autopilot technology, and that the "Autopilot" name created the false impression that the car can drive itself.
On August 16, 2021, after reports of 17 injuries and one death in car crashes involving emergency vehicles, U.S. auto safety regulators opened a formal safety probe into Tesla's Autopilot driver-assistance system.
Safety statistics
In 2016, data from 47 million miles of driving in Autopilot mode showed the probability of an accident was at least 50% lower when using Autopilot. During the investigation into the fatal crash of May 2016 in Williston, Florida, NHTSA released a preliminary report in January 2017 stating "the Tesla vehicles' crash rate dropped by almost 40 percent after Autosteer installation." Disputing this, a private company, Quality Control Systems, released a 2019 report analyzing the same data and concluding the NHTSA finding was "not well-founded". Quality Control Systems' analysis showed the crash rate, measured as airbag deployments per million miles of travel, actually increased from 0.76 to 1.21 after the installation of Autosteer. Additionally, a statistical analysis in A Methodology for Normalizing Safety Statistics of Partially Automated Vehicles disputed the 40 percent claim by accounting for the relative safety of the given operating domain when active safety measures are in use.
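The disputed figures can be restated arithmetically: NHTSA's headline was a near-40% drop, while Quality Control Systems' reanalysis of airbag-deployment rates implies a substantial increase. A sketch of that comparison using only the numbers quoted above:

```python
# Quality Control Systems' reanalysis: airbag deployments per million miles
# before and after Autosteer installation (figures as quoted above).
before, after = 0.76, 1.21
change = (after - before) / before
print(f"Change after Autosteer installation: {change:+.0%}")  # ≈ +59%
# Contrast with NHTSA's January 2017 claim of an almost 40 percent *drop*.
```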
In February 2020, Andrej Karpathy, Tesla's head of AI and computer vision, stated that Tesla cars had driven 3 billion miles on Autopilot, of which 1 billion were driven using Navigate on Autopilot; that Tesla cars had performed 200,000 automated lane changes; and that 1.2 million Smart Summon sessions had been initiated with Tesla cars. He also stated that Tesla cars were avoiding pedestrian accidents at a rate of tens to hundreds per day.
NHTSA investigations
According to a document released in June 2021, the NHTSA has initiated at least 30 investigations into Tesla crashes that were believed to involve the use of Autopilot, with some involving fatalities.
In August 2021, the NHTSA Office of Defects Investigation (ODI) opened a preliminary evaluation (PE 21-020) and released a list of eleven crashes in which Tesla vehicles struck emergency vehicles; in each instance, NHTSA confirmed that Autopilot or Traffic Aware Cruise Control was active during the approach to the crash. Of the eleven crashes, seven resulted in seventeen injuries, and one resulted in a fatality. NHTSA planned to evaluate the Autopilot system, specifically the systems used to monitor and enforce driver engagement. In September, NHTSA added a twelfth accident to the investigation list.
NHTSA sent a request for information relating to PE 21-020 to Tesla's director of field quality on August 31, 2021. The response was due by October 22. On September 13, NHTSA sent a request for information to other automobile manufacturers for comparative ADAS data. After Tesla deployed its Emergency Light Detection Update in September 2021, NHTSA sent a follow-up letter to Tesla on October 12 asking for "a chronology of events, internal investigations, and studies" that led to the deployment of the update.
Notable crashes
Fatal crashes
As of January 2022, there had been twelve verified fatalities involving Tesla's Autopilot, with other deadly incidents in which Autopilot use was suspected still unresolved.
Handan, Hebei, China (January 20, 2016)
On January 20, 2016, the driver of a Tesla Model S in Handan, Hebei, China, was killed when his car crashed into a stationary truck. The Tesla was following a car in the far left lane of a multi-lane highway; the car in front moved to the right lane to avoid a truck stopped on the left shoulder, and the Tesla, which the driver's father believes was in Autopilot mode, did not slow before colliding with the stopped truck. According to footage captured by a dashboard camera, the stationary street sweeper on the left side of the expressway partially extended into the far left lane, and the driver did not appear to respond to the unexpected obstacle.
In September 2016, the media reported the driver's family had filed a lawsuit in July against the Tesla dealer who sold the car. The family's lawyer stated the suit was intended "to let the public know that self-driving technology has some defects. We are hoping Tesla, when marketing its products, will be more cautious. Do not just use self-driving as a selling point for young people." Tesla released a statement saying it had "no way of knowing whether or not Autopilot was engaged at the time of the crash" since the car's telemetry could not be retrieved remotely due to crash damage. In 2018, the lawsuit stalled because the telemetry had been recorded locally to an SD card and could not initially be provided to Tesla, which supplied a decoding key to a third party for independent review. Tesla stated that "while the third-party appraisal is not yet complete, we have no reason to believe that Autopilot on this vehicle ever functioned other than as designed." Chinese media later reported that the family sent the information from the card to Tesla, which admitted Autopilot was engaged two minutes before the crash. Tesla has since removed the term "Autopilot" from its Chinese website.
Williston, Florida, USA (May 7, 2016)
On May 7, 2016, a Tesla driver was killed in a crash with an 18-wheel tractor-trailer in Williston, Florida. By late June 2016, the NHTSA had opened a formal investigation into the fatal accident, working with the Florida Highway Patrol. According to the NHTSA, preliminary reports indicated the crash occurred when the tractor-trailer made a left turn in front of the 2015 Tesla Model S at an intersection on a non-controlled-access highway, and the car failed to apply the brakes. The car continued to travel after passing under the truck's trailer. The Tesla was eastbound in the rightmost lane of US 27, and the westbound tractor-trailer was turning left at the intersection with NE 140th Court, west of Williston.
The diagnostic log of the Tesla indicated it had not slowed when it collided with and traveled under the trailer, which was not equipped with a side underrun protection system. A reconstruction of the accident estimated the driver would have had approximately 10.4 seconds to detect the truck and take evasive action. The underride collision sheared off the Tesla's glasshouse, destroying everything above the beltline, and caused fatal injuries to the driver. In the approximately nine seconds after colliding with the trailer, the Tesla continued traveling and came to rest after colliding with two chain-link fences and a utility pole.
The NHTSA's preliminary evaluation was opened to examine the design and performance of any automated driving systems in use at the time of the crash, which involves a population of an estimated 25,000 Model S cars. On July 8, 2016, the NHTSA requested Tesla Inc. to hand over to the agency detailed information about the design, operation and testing of its Autopilot technology. The agency also requested details of all design changes and updates to Autopilot since its introduction, and Tesla's planned updates scheduled for the next four months.
According to Tesla, "neither autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied." The car attempted to drive full speed under the trailer, "with the bottom of the trailer impacting the windshield of the Model S". Tesla also stated that this was its first known Autopilot-related death in over 130 million miles (208 million km) driven by its customers while Autopilot was activated. According to Tesla, there is a fatality every 94 million miles (150 million km) among all types of vehicles in the U.S. It is estimated that billions of miles will need to be traveled before Tesla Autopilot can claim to be safer than humans with statistical significance. Researchers say that Tesla and others need to release more data on the limitations and performance of automated driving systems if self-driving cars are to become safe and understood enough for mass-market use.
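The claim that billions of miles are needed follows from simple rare-event reasoning: fatal crashes are so infrequent that the uncertainty in an estimated fatality rate shrinks only as more fatal events are observed. A rough sketch under stated assumptions (Poisson-distributed events, relative error ≈ 1/√k for k observed events), using the baseline rate quoted above:

```python
import math

# U.S. all-vehicle figure quoted by Tesla: one fatality per 94 million miles.
BASELINE_MILES_PER_FATALITY = 94_000_000

def miles_for_relative_error(rel_err):
    """Miles needed so the fatality-rate estimate has roughly the given
    relative error, using the Poisson rule of thumb rel_err ≈ 1/sqrt(k),
    where k is the expected number of fatal events observed."""
    k = math.ceil(1.0 / rel_err**2)
    return k * BASELINE_MILES_PER_FATALITY

# Pinning the rate down to ~10% requires about 100 fatal events:
print(f"{miles_for_relative_error(0.10) / 1e9:.1f} billion miles")  # 9.4 billion
```

This back-of-the-envelope figure of several billion miles is consistent with the estimate cited above.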
The truck's driver told the Associated Press that he could hear a Harry Potter movie playing in the crashed car, and said the car was driving so quickly that "he went so fast through my trailer I didn't see him. [The film] was still playing when he died and snapped a telephone pole a quarter-mile down the road." According to the Florida Highway Patrol, they found in the wreckage an aftermarket portable DVD player. It is not possible to watch videos on the Model S touchscreen display. A laptop computer was recovered during the post-crash examination of the wreck, along with an adjustable vehicle laptop mount attached to the front passenger's seat frame. The NHTSA concluded the laptop was probably mounted and the driver may have been distracted at the time of the crash.
In January 2017, the NHTSA Office of Defects Investigations (ODI) released a preliminary evaluation, finding that the driver in the crash had seven seconds to see the truck and identifying no defects in the Autopilot system; the ODI also found that the Tesla car crash rate dropped by 40 percent after Autosteer installation, but later also clarified that it did not assess the effectiveness of this technology or whether it was engaged in its crash rate comparison. The NHTSA Special Crash Investigation team published its report in January 2018. According to the report, for the drive leading up to the crash, the driver engaged Autopilot for 37 minutes and 26 seconds, and the system provided 13 "hands not detected" alerts, to which the driver responded after an average delay of 16 seconds. The report concluded "Regardless of the operational status of the Tesla's ADAS technologies, the driver was still responsible for maintaining ultimate control of the vehicle. All evidence and data gathered concluded that the driver neglected to maintain complete control of the Tesla leading up to the crash."
In July 2016, the NTSB announced it had opened a formal investigation into the fatal accident while Autopilot was engaged. The NTSB is an investigative body that only has the power to make policy recommendations. An agency spokesman said, "It's worth taking a look and seeing what we can learn from that event, so that as that automation is more widely introduced we can do it in the safest way possible." The NTSB opens annually about 25 to 30 highway investigations. In September 2017, the NTSB released its report, determining that "the probable cause of the Williston, Florida, crash was the truck driver's failure to yield the right of way to the car, combined with the car driver's inattention due to overreliance on vehicle automation, which resulted in the car driver's lack of reaction to the presence of the truck. Contributing to the car driver's overreliance on the vehicle automation was its operational design, which permitted his prolonged disengagement from the driving task and his use of the automation in ways inconsistent with guidance and warnings from the manufacturer."
Mountain View, California, USA (March 23, 2018)
On March 23, 2018, a second U.S. Autopilot fatality occurred in Mountain View, California. The crash occurred just before 9:30 A.M. on southbound US 101 at the carpool lane exit for southbound Highway 85, at a concrete barrier where the left-hand carpool lane offramp separates from 101. After the Model X crashed into the narrow concrete barrier, it was struck by two following vehicles, and then it caught on fire.
Both the NHTSA and NTSB began investigations into the March 2018 crash. Another driver of a Model S demonstrated that Autopilot appeared to be confused by the road surface marking in April 2018. The gore ahead of the barrier is marked by diverging solid white lines (a vee-shape) and the Autosteer feature of the Model S appeared to mistakenly use the left-side white line instead of the right-side white line as the lane marking for the far left lane, which would have led the Model S into the same concrete barrier had the driver not taken control. Ars Technica concluded that "as Autopilot gets better, drivers could become increasingly complacent and pay less and less attention to the road."
In a corporate blog post, Tesla noted the impact attenuator separating the offramp from US 101 had been previously crushed and not replaced prior to the Model X crash on March 23. The post also stated that Autopilot was engaged at the time of the crash, and the driver's hands had not been detected manipulating the steering wheel for six seconds before the crash. Vehicle data showed the driver had five seconds and an "unobstructed view of the concrete divider, ... but the vehicle logs show that no action was taken." The NTSB investigation had been focused on the damaged impact attenuator and the vehicle fire after the collision, but after it was reported the driver had complained about the Autopilot functionality, the NTSB announced it would also investigate "all aspects of this crash including the driver's previous concerns about the autopilot". An NTSB spokesman stated the organization "is unhappy with the release of investigative information by Tesla". Elon Musk dismissed the criticism, tweeting that NTSB was "an advisory body" and that "Tesla releases critical crash data affecting public safety immediately & always will. To do otherwise would be unsafe." In response, NTSB removed Tesla as a party to the investigation on April 11.
NTSB released a preliminary report on June 7, 2018, which provided the recorded telemetry of the Model X and other factual details. Autopilot was engaged continuously for almost nineteen minutes prior to the crash. In the minute before the crash, the driver's hands were detected on the steering wheel for 34 seconds in total, but his hands were not detected for the six seconds immediately preceding the crash. Seven seconds before the crash, the Tesla began to steer to the left and was following a lead vehicle; four seconds before the crash, the Tesla was no longer following a lead vehicle; and during the three seconds before the crash, the Tesla's speed increased to . The driver was wearing a seatbelt and was pulled from the vehicle before it was engulfed in flames.
The crash attenuator had been previously damaged on March 12 and had not been replaced at the time of the Tesla crash. The driver involved in the accident on March 12 collided with the crash attenuator at more than and was treated for minor injuries; in comparison, the driver of the Tesla collided with the collapsed attenuator at a slower speed and died from blunt force trauma. After the accident on March 12, the California Highway Patrol failed to report the collapsed attenuator to Caltrans as required. Caltrans was not aware of the damage until March 20, and the attenuator was not replaced until March 26 because a spare was not immediately available. This specific attenuator had required repair more often than any other crash attenuator in the Bay Area, and maintenance records indicated that repair of this attenuator was delayed by up to three months after being damaged. As a result, the NTSB released a Safety Recommendation Report on September 9, 2019, asking Caltrans to develop and implement a plan to guarantee timely repair of traffic safety hardware.
At an NTSB meeting held on February 25, 2020, the board concluded the crash was caused by a combination of the limitations of the Tesla Autopilot system, the driver's over-reliance on Autopilot, and driver distraction likely from playing a video game on his phone. The vehicle's ineffective monitoring of driver engagement was cited as a contributing factor, and the inoperability of the crash attenuator contributed to the driver's injuries. As an advisory agency, NTSB does not have regulatory power; however, NTSB made several recommendations to two regulatory agencies. The NTSB recommendations to the NHTSA included: expanding the scope of the New Car Assessment Program to include testing of forward collision avoidance systems; determining if "the ability to operate [Tesla Autopilot-equipped vehicles] outside the intended operational design domain pose[s] an unreasonable risk to safety"; and developing driver monitoring system performance standards. The NTSB submitted recommendations to OSHA relating to distracted driving awareness and regulation. In addition, NTSB issued recommendations to manufacturers of portable electronic devices (to develop lock-out mechanisms to prevent driver-distracting functions) and to Apple (to ban the nonemergency use of portable electronic devices while driving).
Several NTSB recommendations previously issued to NHTSA, DOT, and Tesla were reclassified to "Open—Unacceptable Response". These included H-17-41 (recommendation to Tesla to incorporate system safeguards that limit the use of automated vehicle control systems to design conditions) and H-17-42 (recommendation to Tesla to more effectively sense the driver's level of engagement).
Kanagawa, Japan (April 29, 2018)
On April 29, 2018, a Tesla Model X operating on Autopilot struck and killed a pedestrian in Kanagawa, Japan, after the driver had fallen asleep. According to a lawsuit filed against Tesla in federal court (N.D. Cal.) in April 2020, the Tesla Model X accelerated from after the vehicle in front of it changed lanes; it then crashed into a van, motorcycles, and pedestrians in the far right lane of the expressway, killing a 44-year-old man on the road directing traffic. The original complaint claims the accident occurred due to flaws in Tesla's Autopilot system, such as inadequate monitoring to detect inattentive drivers and an inability to handle traffic situations "that drivers will almost always certainly encounter". In addition, the original complaint claimed this is the first pedestrian fatality to result from the use of Autopilot.
According to vehicle data logs, the driver of the Tesla had engaged Autopilot at 2:11 pm (local time), shortly after entering the Tōmei Expressway. The driver's hands were detected on the wheel at 2:22 pm. At some point before 2:49 pm, the driver began to doze off, and at approximately 2:49 pm, the vehicle ahead of the Tesla signaled and moved one lane to the left to avoid the vehicles stopped in the far right lane of the expressway. While the Tesla was accelerating to resume its preset speed, it struck the man, killing him. He belonged to a motorcycle riding club which had stopped to render aid to a friend who had been involved in an earlier accident; he specifically had been standing apart from the main group while trying to redirect traffic away from that earlier accident.
The driver of the Tesla was convicted in a Japanese court of criminal negligence and sentenced to three years in prison (suspended for five years). The suit against Tesla in California was dismissed for forum non conveniens by Judge Susan van Keulen in September 2020 after Tesla said it would accept a case brought in Japan. The plaintiffs appealed the dismissal to the Ninth Circuit Court of Appeals in February 2021.
Delray Beach, Florida, USA (March 1, 2019)
At approximately 6:17 am on the morning of March 1, 2019, a Tesla Model 3 driving southbound on US 441/SR 7 in Delray Beach, Florida struck a semi-trailer truck that was making a left-hand turn to northbound SR 7 out of a private driveway at Pero Family Farms; the Tesla underrode the trailer, and the force of the impact sheared off the greenhouse of the Model 3, resulting in the death of the Tesla driver. The driver of the Tesla had engaged Autopilot approximately 10 seconds before the collision and preliminary telemetry showed the vehicle did not detect the driver's hands on the wheel for the eight seconds immediately preceding the collision. The driver of the semi-trailer truck was not cited. Both the NHTSA and NTSB dispatched investigators to the scene.
According to telemetry recorded by the Tesla's restraint control module, the Tesla's cruise control was set to 12.3 seconds prior to the collision and Autopilot was engaged 9.9 seconds prior to the collision; at the moment of impact, the vehicle speed was . After the crash and underride, the Tesla continued southbound on SR 7 for approximately before coming to rest in the median between the northbound and southbound lanes. The car sustained extensive damage to the roof, windshield, and other surfaces above , the clearance under the trailer. Although the airbags did not deploy following the collision, the Tesla's driver remained restrained by his seatbelt; emergency response personnel were able to determine the driver's injuries were incompatible with life upon arriving at the scene.
In May 2019, the NTSB issued a preliminary report determining that neither the driver of the Tesla nor the Autopilot system executed evasive maneuvers. The circumstances of this crash were similar to the fatal underride crash of a Tesla Model S in 2016 near Williston, Florida; in its 2017 report detailing the investigation of that earlier crash, NTSB recommended that Autopilot be used only on limited-access roads (i.e., freeways), which Tesla did not implement.
The NTSB issued its final report in March 2020. The probable cause of the collision was the truck driver's failure to yield the right of way to the Tesla; however, the report also concluded that "an attentive car driver would have seen the truck in time to take evasive action. At no time before the crash did the car driver brake or initiate an evasive steering action. In addition, no driver-applied steering wheel torque was detected for 7.7 seconds before impact, indicating driver disengagement, likely due to overreliance on the Autopilot system." In addition, the NTSB concluded the operational design of the Tesla Autopilot system "permitted disengagement by the driver" and Tesla failed to "limit the use of the system to the conditions for which it was designed"; the NHTSA also failed to develop a method of verifying that manufacturers had safeguards in place to limit the use of ADAS to design conditions.
Key Largo, Florida, USA (April 25, 2019)
While driving on Card Sound Road, a 2019 Model S ran through a stop sign and flashing red stop light at the T-intersection with County Road 905, then struck a parked Chevrolet Tahoe which then hit two pedestrians, killing one. A New York Times article later confirmed Autopilot was engaged at the time of the accident. The driver of the Tesla, who was commuting to his home in Key Largo from his office in Boca Raton, told police at the scene that he was driving in "cruise"; he was allowed to leave without receiving a citation.
Fremont, California, USA (August 24, 2019)
In Fremont, California, on I-880 north of Stevenson Boulevard, a Ford Explorer pickup was rear-ended by a Tesla Model 3 using Autopilot, causing the pickup's driver to lose control. The pickup overturned, and a 15-year-old passenger in the Ford, who was not wearing a seatbelt, was ejected from the pickup and killed. The deceased's parents sued Tesla and claimed in their filing that "Autopilot contains defects and failed to react to traffic conditions." In response, a lawyer for Tesla noted the police had cited the driver of the Tesla for inattention and operating the car at an unsafe speed. The incident has not been investigated by the NHTSA.
Cloverdale, Indiana, USA (December 29, 2019)
An eastbound Tesla Model 3 rear-ended a fire truck parked along I-70 near mile marker 38 in Putnam County, Indiana at approximately 8 am; both the driver and passenger in the Tesla, a married couple, were injured and taken to Terre Haute Regional Hospital, where the passenger later died from her injuries. The driver stated he regularly uses Autopilot mode, but could not recall if it was engaged when the Tesla hit the fire truck. The NHTSA announced it was investigating the crash on January 9 and later confirmed the use of Autopilot at the time of the crash.
Gardena, California, USA (December 29, 2019)
Shortly before 12:39 a.m. on December 29, 2019, a westbound Tesla Model S exited the freeway section of SR 91, failed to stop for a red light, and crashed into the driver's side of a Honda Civic in Gardena, California, killing the driver and passenger in the Civic and injuring the Tesla driver and passenger. The freeway section of SR 91 ends just east of the intersection of Artesia Blvd and Vermont Ave, and continues as Artesia. The Tesla was proceeding west on Artesia against the red light when it struck the Civic, which was turning left from Vermont onto Artesia. The occupants of the Tesla were taken to the hospital with non-life-threatening injuries.
The NHTSA initiated an investigation of the crash, which was considered unusual for a two-vehicle collision, and later confirmed in January 2022 that Autopilot was engaged during the crash. The Tesla driver was charged in October 2021 with vehicular manslaughter in the Los Angeles Superior Court. The families of the two killed also have filed separate civil lawsuits against the driver, for his negligence, and Tesla, for selling defective vehicles.
Arendal, Norway (May 29, 2020)
A truck driver parked a semi-trailer on May 29, 2020, partially off E18 near the Torsbuås tunnel outside Arendal; while fixing a strap that was securing the load, he was struck and killed by a passing Tesla. The Tesla's driver has been charged with negligent homicide. Early in the trial, an expert witness testified that the car's computer indicates Autopilot was engaged at the time of the incident. A forensic scientist said the man who was killed was less visible because he was in the shadow of the trailer. The driver said he had both hands on the wheel and that he was vigilant. The Accident Investigation Board Norway is still investigating.
The Woodlands, Texas, USA (April 17, 2021)
A Tesla Model S P100D crashed and caught fire in The Woodlands, Texas, a suburb of Houston, at 11:25 pm CDT on April 17, 2021. According to a police spokesperson, the vehicle was traveling at a high speed and, after failing to negotiate a curve, departed the roadway, crashed into a tree, and burst into flames; the resulting fire took four hours and more than of water to extinguish. Two men were killed; one was found in the front passenger seat, and the other was in the back seat. The chief of The Woodlands fire department later clarified that the fire had been extinguished within a few minutes of arriving on the scene, but that final extinguishment could not be performed because of the bodies and the active investigation/crime scene, so a steady stream of water was required to keep the battery cool. Investigators from both NHTSA and NTSB were dispatched to investigate.
The post-crash fire destroyed the car's onboard telemetry storage; although the restraint control module/event data recorder (EDR) was damaged by the fire, it is being evaluated at the NTSB's recorder laboratory. Based on data recovered from the EDR, the vehicle traveled approximately westbound on Hammock Dunes Place from the owner's residence before it departed the roadway. After leaving the road and driving over the mountable curb, the car hit a drainage culvert, raised manhole, and tree. The highest recorded speed in the five seconds leading up to the crash was . Security footage from the point of departure at the owner's residence showed that when the car left, one man was in the driver's seat, and the other was in the front passenger seat.
Because neither man was found behind the wheel of the Tesla, authorities initially were "100 percent certain that no one was in the driver seat driving that vehicle at the time of impact". Authorities also obtained statements from witnesses who said the two men wanted to test drive the vehicle without a driver. On a closed course, Consumer Reports demonstrated that Autopilot would stay engaged after a person climbed out of the driver's seat by using a weight to apply torque to the steering wheel and leaving the driver's seatbelt buckled.
However, a more detailed forensic investigation showed the driver's seat was likely occupied at the time of the crash, and that Autopilot was not engaged. In response to early assertions that Autopilot was involved, Elon Musk stated on Twitter that data logs indicated that Autopilot was not enabled, and the FSD package had not been purchased for that car. During an earnings call in April 2021, Tesla's vice president of vehicle engineering pushed back on the news coverage of the incident and added that Tesla representatives had studied the crash and reported the steering wheel was "deformed", which could indicate "someone was in the driver's seat at the time of the crash". The same Tesla officer noted a test car's adaptive cruise control had accelerated the car to only at the crash site. The NTSB tested an exemplar car at the site and found that Autosteer was not available on that part of Hammock Dunes. In an update published on October 21, the NTSB concluded that both the driver and front passenger seats were occupied at the time of the crash, based on the deformation of the steering wheel and data recovered from the car's event data recorder.
Fontana, California, USA (May 5, 2021)
At 2:35 A.M. PDT on May 5, 2021, a Tesla Model 3 crashed into an overturned tractor-trailer on the westbound Foothill Freeway (I-210) in Fontana, California. The driver of the Tesla was killed, and a man who had stopped to assist the driver of the truck was struck and injured by the Tesla. California Highway Patrol (CHP) officials announced on May 13 that Autopilot "was engaged" prior to the crash, but added a day later that "a final determination [has not been] made as to what driving mode the Tesla was in or if it was a contributing factor to the crash". The CHP and NHTSA are investigating the crash.
Queens, New York, USA (July 26, 2021)
A man was hit and killed by a Tesla Model Y SUV while changing a flat tire on his vehicle, which was parked on the left shoulder of the westbound Long Island Expressway just east of the College Point Boulevard exit in Flushing, Queens. The NHTSA later determined Autopilot was active during the collision and sent a team to further investigate.
Non-fatal crashes
Culver City, California, USA (January 22, 2018)
On January 22, 2018, a 2014 Tesla Model S crashed into a fire truck parked on the side of the I-405 freeway in Culver City, California, while traveling at a speed exceeding ; the driver survived with no injuries. The driver told the Culver City Fire Department that he was using Autopilot. The fire truck and a California Highway Patrol vehicle were parked diagonally across the left emergency lane and high-occupancy vehicle lane of the southbound 405, blocking off the scene of an earlier accident, with emergency lights flashing.
According to a post-accident interview, the driver stated he was drinking coffee, eating a bagel, and maintaining contact with the steering wheel while resting his hand on his knee. During the trip, which lasted 66 minutes, the Autopilot system was engaged for slightly more than 29 minutes; of the 29 minutes, hands were detected on the steering wheel for only 78 seconds in total. Hands were detected applying torque to the steering wheel for only 51 seconds over the nearly 14 minutes immediately preceding the crash. The Tesla had been following a lead vehicle in the high-occupancy vehicle lane at approximately ; when the lead vehicle moved to the right to avoid the fire truck, approximately three or four seconds prior to impact, the Tesla's traffic-aware cruise control system began to accelerate the Tesla to its preset speed of . When the impact occurred, the Tesla had accelerated to . The Autopilot system issued a forward collision warning half a second before the impact, but did not engage the automatic emergency braking (AEB) system, and the driver did not manually intervene by braking or steering. Because Autopilot requires agreement between the radar and visual cameras to initiate AEB, the system was challenged due to the specific scenario (where a lead vehicle detours around a stationary object) and the limited time available after the forward collision warning.
Several news outlets began reporting that Autopilot may not detect stationary vehicles at highway speeds and that it cannot detect some objects. Raj Rajkumar, who studies autonomous driving systems at Carnegie Mellon University, believes the radars used for Autopilot are designed to detect moving objects, but are "not very good in detecting stationary objects". Both NTSB and NHTSA dispatched teams to investigate the crash. Hod Lipson, director of Columbia University's Creative Machines Lab, faulted the diffusion of responsibility concept: "If you give the same responsibility to two people, they each will feel safe to drop the ball. Nobody has to be 100%, and that's a dangerous thing."
In August 2019, the NTSB released its accident brief for the accident. HAB-19-07 concluded the driver of the Tesla was at fault due to "inattention and overreliance on the vehicle's advanced driver assistance system", but added the design of the Tesla Autopilot system "permitted the driver to disengage from the driving task". After the earlier crash in Williston, the NTSB issued a safety recommendation to "[d]evelop applications to more effectively sense the driver's level of engagement and alert the driver when engagement is lacking while automated vehicle control systems are in use." Among the manufacturers that the recommendation was issued to, only Tesla has failed to issue a response.
South Jordan, Utah, USA (May 11, 2018)
On the evening of May 11, 2018, a 2016 Tesla Model S with Autopilot engaged crashed into the rear of a fire truck that was stopped in the southbound lane at a red light in South Jordan, Utah, at the intersection of SR-154 and SR-151. The Tesla was moving at an estimated and did not appear to brake or attempt to avoid the impact, according to witnesses. The driver of the Tesla, who survived the impact with a broken foot, admitted she was looking at her phone before the crash. The NHTSA dispatched investigators to South Jordan. According to telemetry data recovered after the crash, the driver repeatedly did not touch the wheel, including during the 80 seconds immediately preceding the crash, and only touched the brake pedal "fractions of a second" before the crash. The driver was cited by police for "failure to keep proper lookout". The Tesla had slowed to to match a vehicle ahead of it, and after that vehicle changed lanes, accelerated to in the 3.5 seconds preceding the crash.
Tesla CEO Elon Musk criticized news coverage of the South Jordan crash, tweeting that "a Tesla crash resulting in a broken ankle is front page news and the ~40,000 people who died in US auto accidents alone in [the] past year get almost no coverage", additionally pointing out that "[a]n impact at that speed usually results in severe injury or death", but later conceding that Autopilot "certainly needs to be better & we work to improve it every day". In September 2018, the driver of the Tesla sued the manufacturer, alleging the safety features designed to "ensure the vehicle would stop on its own in the event of an obstacle being present in the path ... failed to engage as advertised." According to the driver, the Tesla failed to provide an audible or visual warning before the crash.
Moscow, Russia (August 10, 2019)
On the night of August 10, 2019, a Tesla Model 3 driving in the left-hand lane on the Moscow Ring Road in Moscow, Russia crashed into a parked tow truck with a corner protruding into the lane and subsequently burst into flames. According to the driver, the vehicle was traveling at the speed limit of with Autopilot activated; he also claimed his hands were on the wheel but that he was not paying attention at the time of the crash. All occupants were able to exit the vehicle before it caught on fire; they were transported to the hospital. Injuries included a broken leg (driver) and bruises (his children).
The force of the collision was enough to push the tow truck forward into the central dividing wall, as recorded by a surveillance camera. Passersby also captured several videos of the fire and explosions after the accident; these videos show that the tow truck the Tesla crashed into had been moved, suggesting the explosions of the Model 3 happened later.
Chiayi, Taiwan (June 1, 2020)
Traffic cameras captured the moment when a Tesla Model 3 slammed into an overturned cargo truck in Taiwan on June 1, 2020. The crash occurred at 6:40 am National Standard Time on the southbound National Freeway 1 in Chiayi, Taiwan, at approximately the south 268.4 km marker. The truck had been involved in a traffic accident at 6:35 am and overturned with its roof facing oncoming traffic; the driver of the truck got out to warn other cars away.
The driver of the Tesla was uninjured and told emergency responders that the car was in Autopilot mode, traveling at . The driver told authorities that he saw the truck and thought the Tesla would brake automatically upon encountering an obstacle; when he realized it would not, he manually applied the brakes, although it was too late to avoid the crash, which is apparently indicated on the video by a puff of white smoke coming from the tires.
Arlington Heights, Washington, USA (May 15, 2021)
A Tesla Model S crashed into a stopped Snohomish County, Washington sheriff's patrol car at 6:40 pm PDT on May 15, 2021, shortly after the deputy parked it while responding to an earlier crash which had broken a utility pole near the intersection of SR 530 and 103rd Ave NE in Arlington Heights, Washington. The patrol car was parked to partially block the roadway and protect the collision scene, and the patrol car's overhead emergency lights were activated. Neither the deputy nor the driver of the Tesla was injured. The driver of the Tesla assumed his car would slow and move over on its own because it was in "Auto-Pilot mode".
Brea, California, USA (November 3, 2021)
In November 2021, the NHTSA received its first complaint regarding a Tesla participating in FSD Beta. The incident was described as a "severe" crash involving a Tesla Model Y forcing itself into the incorrect lane.
See also
Advanced driver-assistance systems
Self-driving car
References
External links
Advanced driver assistance systems
Automotive technology tradenames
Automotive accessories
Automotive technologies
Autopilot |
4445105 | https://en.wikipedia.org/wiki/Daniel%20Murphy%20%28computer%20scientist%29 | Daniel Murphy (computer scientist) | Daniel L. Murphy is an American computer scientist notable for his involvement in the development of TECO (an early text editor and programming language), the operating systems TENEX and TOPS-20, and email.
Biography
Murphy attended MIT from 1961 and graduated in 1965.
In 1962 he created the text editor Text Editor and Corrector (TECO), later implemented on most of the PDP computers.
He also developed a simple demand paging system in software for the PDP-1 while at MIT.
Murphy joined Bolt, Beranek and Newman (BBN) in 1965. There he used an SDS 940 computer running the Berkeley Timesharing System, which provided paged memory management in hardware.
When the PDP-10 was announced, he was one of the architects of the TENEX operating system developed for the custom paging hardware designed at BBN. As part of the development of TENEX, Murphy and Ray Tomlinson wrote the original e-mail program.
In October 1972, he began working with Digital Equipment Corporation (DEC) as a contractor, porting TENEX to the KI10 model of the PDP-10 family. On January 2, 1973, he joined DEC as an employee, heading the team responsible for the development of the TOPS-20 operating system, an evolution of TENEX for the newer models of the PDP-10 family. TOPS-20 was first marketed in 1976 on the DECSYSTEM-20.
References
Further reading
Daniel G. Bobrow, Jerry D. Burchfiel, Daniel L. Murphy, Raymond S. Tomlinson, TENEX, A Paged Time Sharing System for the PDP-10 (Communications of the ACM, Vol. 15, pp. 135–143, March 1972)
External links
Daniel Murphy's Homepage
TECO, TENEX, and TOPS-20 Papers and Pictures
Living people
American computer scientists
Massachusetts Institute of Technology alumni
Year of birth missing (living people) |
20612909 | https://en.wikipedia.org/wiki/2001%20USC%20Trojans%20football%20team | 2001 USC Trojans football team | The 2001 USC Trojans football team represented the University of Southern California in the 2001 NCAA Division I-A football season. It was Pete Carroll's first year as head coach. The Kansas State Wildcats's victory on September 8 marked the last time a non-Pac-10 team defeated the Trojans in the Coliseum until November 27, 2010, when the Notre Dame Fighting Irish defeated the Trojans, 20–16.
Schedule
The Trojans finished the regular season with a 6–5 record.
Game summaries
San Jose State
Kansas State
Oregon
Stanford
Washington
Arizona State
Notre Dame
Arizona
Oregon State
California
UCLA
Las Vegas Bowl
Roster
Team players in the NFL
Marcell Allmond
Kevin Arbet
Chris Cash
Matt Cassel
Shaun Cody
Keary Colbert
Kori Dickerson
Justin Fargas
Lonnie Ford
Matt Grootegoed
Gregg Guenther
Alex Holmes
Norm Katnik
Kareem Kelly
David Kirtman
Jason Leach
Matt Leinart
Malaefou MacKenzie
Grant Mattos
Sultan McCullough
Ryan Nielsen
Carson Palmer
Mike Patterson
Troy Polamalu
Kris Richard
Bernard Riley
Jacob Rogers
Antuan Simmons
Kenechi Udeze
Lenny Vandermade
John Walker
Lee Webb
References
USC
USC Trojans football seasons
USC Trojans football |
2979009 | https://en.wikipedia.org/wiki/Graphic%20art%20software | Graphic art software | Graphic art software is a subclass of application software used for graphic design, multimedia development, stylized image development, technical illustration, general image editing, or simply to access graphic files. Art software uses either raster or vector graphic reading and editing methods to create, edit, and view art.
Many artists and other creative professionals today use personal computers rather than traditional media. Using graphic art software may be more efficient than rendering with traditional media by requiring less hand–eye coordination, requiring less mental imaging skill, and utilizing the computer's quicker (sometimes more accurate) automated rendering functions to create images. However, advanced computer styles, effects, and editing methods may demand a steeper learning curve in technical computer skills than traditional hand rendering and mental imaging skills required. The potential of the software to enhance or hinder creativity may depend on the intuitiveness of the interface.
Specialized software
Most art software includes common functions, creation tools, editing tools, filters, and automated rendering modes. Many, however, are designed to enhance a specialized skill or technique. Specialized software packages may be discontinued for various reasons, such as lack of appreciation for the results, lack of expertise and training for the product, or the product simply not being worth the investment of time and money, but most often because of obsolescence relative to newer methods or integration as a feature of newer, more complete software packages.
Graphic design software
Graphic design professionals favor general image editing software and page layout software commonly referred to as desktop publishing software. Graphic designers that are also image developers or multimedia developers may use a combination of page layout software with the following:
Multimedia development software
Multimedia development professionals favor software with audio, motion and interactivity such as software for creating and editing hypermedia, electronic presentations (more specifically slide presentations), computer simulations and games.
Stylized image development software
Image development professionals may use general graphic editors or may prefer more specialized software for rendering or capturing images with style. Although images can be created from scratch with most art software, specialized software applications or advanced features of generalized applications are used for more accurate visual effects. These visual effects include:
Traditional medium effects
Vector editors are ideal for solid crisp lines seen in line art, poster, woodcut ink effects, and mosaic effects.
Some generalized image editors, such as Adobe Photoshop are used for digital painting (representing real brush and canvas textures such as watercolor or burlap canvas) or handicraft textures such as mosaic or stained glass. However, unlike Adobe Photoshop, which was originally designed for photo editing, software such as Corel Painter and Photo-Paint were originally designed for rendering with digital painting effects and continue to evolve with more emphasis on hand-rendering styles that don't appear computer generated.
Photorealistic effects
Unlike traditional medium effects, photorealistic effects create the illusion of a photographed image. Specialized software may contain 3D modeling and ray tracing features to make images appear photographed. Some 3D software is for general 3D object modeling, whereas other 3D software is more specialized, such as Poser for characters or Bryce for scenery. Software such as Adobe Photoshop may be used to create 3D effects from 2D (flat) images instead of 3D models. AddDepth is a discontinued software for extruding 2D shapes into 3D images with the option of beveled effects. MetaCreations Detailer and Painter 3D are discontinued software applications specifically for painting texture maps on 3D Models.
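At the heart of the ray tracing these packages use is computing where a viewing ray first hits scene geometry. A minimal illustrative sketch (not tied to any particular product) of ray–sphere intersection:

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Distance along the ray to the nearest hit point, or None on a miss.

    origin, direction and center are 3-tuples; the returned t is in
    units of the direction vector's length.
    """
    # Solve |origin + t*direction - center|^2 = radius^2 for t,
    # a quadratic a*t^2 + b*t + c = 0.
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)  # nearer of the two roots
    if t < 0:
        t = (-b + math.sqrt(disc)) / (2.0 * a)  # origin may be inside
    return t if t >= 0 else None
```

A renderer repeats a test like this per pixel against every object and shades the nearest hit; production ray tracers add acceleration structures and lighting models on top of the same core.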
Hyperrealistic effects
Specialized software may be used to combine traditional medium effects and photorealistic effects. 3-D modeling software may be exclusively for, include features for, or include the option of 3rd party plugins for rendering 3-D models with 2-D effects (e.g. cartoons, illustrations) for hyperrealistic effects. Other 2-D image editing software may be used to trace photographs or rotoscope animations from film. This allows artists to rapidly apply unique styles to what would be purely photorealistic images from computer generated imagery from 3-D models or photographs. Some styles of hyperrealism may require motion visual effects (e.g. geometrically accurate rotation, accurate kinetics, simulated organic growth, lifelike motion constraints) to notice the realism of the imagery. Software may be used to bridge the gap between the imagination and the laws of physics.
Technical graphic software
Technical professionals and technical illustrators may use technical graphic software that might allow for stylized effects with more emphasis on clarity and accuracy and little or no emphasis on creative expression and aesthetics. For this reason, the results are seldom referred to as "art." For designing or technical illustration of synthetic physical objects, the software is usually referred to as CAD or CADD, Computer-Aided Design and Drafting. This software allows for more precise handling of measurements and mathematical calculations, some of which simulate physics to conduct virtual testing of the models. Aside from physical objects, technical graphic software may include software for visualizing concepts, manually representing scientific data, visualizing algorithms, visual instructions, and navigational aids in the form of information graphics. Specialized software for concept maps may be used for both technical purposes and non-technical conceptualizing, which may or may not be considered technical illustration.
Specialized graphic format handling
This may include software for handling specialized graphic file formats such as Fontographer software, which is dedicated to creating and editing computer fonts. Some general image editing software has unique image file handling features as well. Vector graphic editors handle vector graphic files and are able to load PostScript files natively. Some tools enable professional photographers to use nondestructive image processing for editing digital photography without permanently changing or duplicating the original, using the Raw image format. Other special handling software includes software for capturing images such as 2D scanning software, 3D scanning software and screen-capturing, or software for specialized graphic format processing such as raster image processing and file format conversion. Some tools may reduce the file size of graphics for web performance optimization while maintaining the image quality as best as possible.
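Lossless raster formats such as PNG rest on DEFLATE compression, which is what lets size optimizers shrink files without losing image quality. A quick standard-library illustration (synthetic byte data standing in for pixel rows, not a real image pipeline):

```python
import zlib

# 64 KiB of synthetic, repetitive "pixel" data; real image rows are
# often similarly redundant, which is why lossless compression works.
data = bytes(range(256)) * 256

for level in (1, 6, 9):  # fast, default, and maximum compression effort
    compressed = zlib.compress(data, level)
    print(f"level {level}: {len(data)} -> {len(compressed)} bytes")
```

Lossy formats such as JPEG instead trade image fidelity for further size reduction, which is a separate, quality-affecting step.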
Lists of software
List of raster graphics editors
List of vector graphics editors
List of computer-aided design editors
List of information graphics software
List of concept- and mind-mapping software (description and list)
3D computer graphics software (description and list)
Presentation software (description and list)
Desktop publishing (description and list)
List of media players (viewing access only)
Comparison of media players
See also
Computer art
Computer generated imagery
Computer graphics
Digital artist
Graphics programs
Raster graphics editor
Vector graphics editor
References
Computer art
Graphics software |
15352742 | https://en.wikipedia.org/wiki/BlindWrite | BlindWrite | BlindWrite, the successor to BlindRead, is a computer program that writes to recordable CDs. The BlindRead software, which reads CDs and writes CD image files, has been discontinued as a separately released product, but its code is included in the newer BlindWrite suite, which also includes code to control CD writers. BlindWrite's most distinctive feature over other pre-existing CD writing software was its ability to use the CD images BlindRead made. BlindRead's main features were its use of sub codes and its willingness to be "blind" to errors and continue the copying process rather than quitting when it encountered reading errors. Such errors, often caused by hardware trouble or disc damage (such as scratches, or even intentional "damage" introduced by the manufacturer as a form of copy protection), would cause many other software packages to terminate the reading process.
Support for "Sub codes", also known as "Sub Channel Data", distinguished the files created with BlindRead from other software that created CD image files. The "Sub code" data could be written in *.SUB files added to either of two already-existing popular formats (ISO image and Cue sheet), or the Sub code data could be written in *.BWS files in BlindRead's native format that consisted of writing a CD image in a group of three files. This format, which has become known more widely as the BlindWrite native format (now that BlindRead is not maintained and distributed as a separate product anymore), has two or three files for a proper image. The *.BWS Sub code image is optional and may or may not get created depending on an option selected in the software that created the image. The *.BWT control file is another relatively small file (that may be a very small number of kilobytes when part of a multi-file CD image for a full-sized 650MB CD) that is typically the file referred to in the user interfaces of software that supports this format and chooses just one file extension per CD image for the purposes of filename selection. (This includes VSO's software and Daemon Tools.) The *.BWI file is the large image file.
BlindWrite version 5 (released in Q4 2003) introduced new native files that come in pairs: a relatively small *.B5T control file and a *.B5I image file.
BlindWrite version 6 (released in Q3 2006) likewise introduced paired native files, with the extensions *.B6T (a relatively small control file) and *.B6I for the image. Version 6 also added support for Blu-ray discs.
The BlindRead format has been supported by software other than BlindRead and BlindWrite, including Daemon Tools, which was some of the earliest supporting software (capable of using *.BWT files even before Blindwrite was released). The format was embraced by enthusiasts who were interested in making "more perfect" copies of CDs by using a process that used the Sub code data that other computer software did not support (and so ignored, instead of used).
Awards
BlindWrite won the TopTenReviews Bronze Award in 2009.
External links
VSO BlindWrite review
b5i2iso - open-source tool to convert BlindWrite .B5I images into ISO images
References
Windows-only software
Optical disc authoring software |
22357213 | https://en.wikipedia.org/wiki/Machinarium | Machinarium | Machinarium is a puzzle point-and-click adventure game developed by Amanita Design. It was released on 16 October 2009 for Microsoft Windows, OS X, Linux, on 8 September 2011 for iPad 2 on the App Store, on 21 November 2011 for BlackBerry PlayBook, on 10 May 2012 for Android, on 6 September 2012 on PlayStation 3's PlayStation Network in Europe, on 9 October 2012 in North America and on 18 October 2012 in Asia, and was also released for PlayStation Vita on 26 March 2013 in North America, on 1 May 2013 in Europe and on 7 May 2013 in Asia. Demos for Windows, Mac and Linux were made available on 30 September 2009. A future release for the Wii's WiiWare service was cancelled due to WiiWare's 40MB limit.
Microsoft Windows, OS X, Linux and Android versions of this game were released along with Humble Indie Bundle for Android 4 on 8 November 2012, to customers who paid over the average price. The Windows Phone version was released on 22 March 2014.
In 2017, the developer released a Definitive Edition of the game that is based on a DirectX engine instead of Adobe Flash and can be played in fullscreen mode. The Xbox One version of the game was released on 16 April 2020.
Gameplay
The goal of Machinarium is to solve a series of puzzles and brain teasers. The puzzles are linked together by an overworld consisting of a traditional "point and click" adventure story. The overworld's most radical departure is that only objects within the player character's reach can be clicked on.
Machinarium is notable in that it contains no dialogue, spoken or written, and apart from a few tutorial prompts on the first screen, is devoid of understandable language entirely. The game instead uses a system of animated thought bubbles. Easter egg backstory scenes in the same format can only be revealed by idling in certain areas.
The game employs a two-tier hint system. Once per level, the player can receive a hint, which becomes increasingly vague as the game progresses. Machinarium also comes with a walkthrough, which can be accessed at any time by playing a minigame. As with dialogue, the walkthrough is not in written or spoken form, but is instead a series of sketches describing the puzzle at hand and its solution. However, the walkthrough only reveals what must be done in that area, not how that puzzle relates to the game chronology.
Plot summary
Machinarium opens with an overview of the eponymous city as a disposal flier launches from the pinnacle of its highest tower. The player character, a robot called Josef (named after Josef Čapek, the creator of the word "robot" and brother to Karel Čapek) is dumped on a scrapheap, where he re-assembles himself and sets off for the city. Entering the city, he discovers a plot by the Black Cap Brotherhood, the three criminal antagonists, to blow up the city's tower. He is himself then discovered and locked up. After breaking out of prison, Josef aids the citizens of the city, as he discovers the mischief which the Brotherhood has been working. Shortly after flooding the Brotherhood's room (leaving them helpless), Josef locates his girlfriend Berta, who has been locked up and forced to cook. Unable to free her, he works his way to the top of the tower. After he foils the Black Cap Brotherhood's plot by disarming the bomb taped to the tower, Josef reaches the highest room, in which the story began. A huge-headed robot, the "head" of the city, sits in the middle of the room, incapacitated and gibbering. Josef recalls how the three of them lived happily until the Black Cap Brotherhood zapped this friend, leaving him disabled, and kidnapped Berta. When a garbage sucker arrived to dispose of the Black Cap thug, it apprehended Josef instead. After this revelation, Josef restores his friend to sanity, dumps the Brotherhood down a drain, and frees Berta. The two of them climb back to the tower, wave goodbye to their friend, and fly off into the sunset. In the final closing scene, their vehicle suffers a collision and falls, and they are seen being carried away separately by two fliers.
Development
Machinarium was developed over a period of three years, by seven Czech developers, who financed the project with their own savings. The marketing budget for the game was $1,000.
The game was in development for the Xbox 360 platform for a period of six months; however, Microsoft, whom the developers had approached to publish the title on Xbox Live Arcade, ultimately decided not to do so. Microsoft does not allow games to be released on Xbox Live Arcade without a publisher attached to the title, and the developers were reluctant to approach a third party to publish the game, as this would mean that profits for the developers from sales over Xbox Live Arcade would be greatly reduced. Subsequently, Amanita Design approached Sony, whose policies do allow for self-publishing on the PlayStation Network platform, and submitted the game to them for approval, in order to release the game on the PlayStation Network.
In 2011 it was mentioned that a sequel to Machinarium "is possible" but is something the team had yet to fully consider. "We don't look far through the future", said Jakub Dvorský.
Reception
Critical response
Machinarium was well-received on release; on the critic aggregate sites GameRankings and Metacritic, the game has an average score of 85% and 85/100, respectively.
In 2008, it won the Aesthetics award at IndieCade (the International Festival of Independent Games). It won the Excellence in Visual Art award at the 12th Annual Independent Games Festival and the Best Soundtrack award from PC Gamer in 2009. It was nominated for an Outstanding Achievement in Art Direction award by the Academy of Interactive Arts & Sciences and a Milthon award in the 'Best Indie Game' category at the Paris Game Festival.
Gaming site Kotaku named it a runner-up for "PC Game of the Year 2009" alongside Torchlight, losing to winner Empire: Total War. Gamasutra, Gamerview and the Turkish site of Tom's Hardware all selected Machinarium as the 'Best Indie Game' of 2009. AceGamez named Machinarium the 'Best Traditional Adventure Game' of 2009.
In 2011, Adventure Gamers named Machinarium the 17th-best adventure game ever released.
Pirate amnesty
On 5 August 2010, Amanita Design announced that according to their estimates, only 5–15% of Machinarium players had actually paid for the game. In an effort to increase sales, the game's price was lowered from the regular $20 to $5 until 12 August as an incentive for pirates to purchase the game legally. The campaign was later extended until 16 August, resulting in 20,000 game copies sold over the whole amnesty period.
Sales
Machinarium has sold over 4 million units, of which 49% were for PC, 44% for mobile devices and 7% for consoles.
Editions
Machinarium was released in several physical and digital formats.
Physical:
Machinarium Collector's Edition (United Kingdom) – includes a DVD with the game, a soundtrack in digital format, a CD with the soundtrack, a printed walkthrough, an A3 poster and concept art. Systems: Windows, Mac (no Linux).
Mашинариум (Machinarium) (Russian Edition) – includes a DVD with the game and a CD with the soundtrack. Systems: Windows, Mac (no Linux).
Machinarium (German Edition) – includes a game disc, a CD with the soundtrack, Samorost 2 and a poster. Systems: Windows, Mac (no Linux).
Machinarium (French Edition) – includes a game disc, a CD with the soundtrack, Samorost 2 and a poster. Systems: Windows, Mac (no Linux).
Machinarium (Italian Edition) – includes a game disc. Systems: Windows (no Mac, no Linux).
Machinarium (Czech Edition) – includes a game disc and an MP3 soundtrack. Systems: Windows, Mac (no Linux).
Machinarium (Slovak Edition) – includes a game disc and an MP3 soundtrack. Systems: Windows, Mac (no Linux).
Machinarium (Polish Edition) – includes a DVD with the game, a soundtrack, an EP album in FLAC/MP3 digital formats and a poster. Systems: Windows, Mac. Packed in an "exclusive", metal box.
Digital:
Amanita Design Store – includes Win, Mac and Linux versions of the game and a soundtrack in FLAC/MP3 format. It was released on 16 October 2009.
Steam – includes Win and Mac versions of the game. It was released on 16 October 2009.
GOG.com – "Machinarium: Collector's Edition" includes Win, Mac and Linux versions of the game, a soundtrack, and various artwork and supplemental materials.
Desura – includes Win, Mac and Linux versions of the game and a soundtrack. It was released on 19 December 2010 in conjunction with Humble Indie Bundle 2.
Impulse – includes a Win version of the game.
Direct2Drive – includes a Win version of the game.
GamersGate – includes Win and Mac versions of the game.
Mac App Store – includes a Mac version of the game. It was released on 18 March 2011.
Playism (Japan) – includes Win and Mac versions of the game. It was released on 11 May 2011.
Consoles:
PlayStation 3 (PSN) (as Ultimate Version) – released on 6 September 2012 in Europe, 9 October 2012 in North America and 18 October 2012 in Asia.
Cancelled:
Xbox 360 (XBLA) – the game was in development for the Xbox 360 platform for a period of six months; however, Microsoft, whom the developers had approached to publish the title on Xbox Live Arcade, ultimately decided not to do so.
Wii (WiiWare) – the game was scheduled for release on the Nintendo Wii's WiiWare service, but it was cancelled due to WiiWare's 40MB limit.
Handheld game consoles:
PlayStation Vita – released on 26 March 2013 in North America, on 1 May 2013 in Europe and on 7 May 2013 in Asia.
Tablets:
iPad 2 – released on 8 September 2011.
BlackBerry PlayBook – released on 21 November 2011.
Android – released on 10 May 2012.
iPad 1 – released as v2.0 on 16 October 2013.
Natively runs on other platforms:
HP TouchPad – runs the Flash-based PC version natively.
Other media
Josef has been featured as a playable character in the platform game Super Meat Boy, and makes a cameo in the puzzle game ilomilo.
Josef was also included in the Video Game Character alphabet, created by Fabian Gonzalez.
Josef also made a cameo in Amanita Design's second full-scale game, Botanicula.
Machinarium and its soundtrack inspired the poem The Machingeon, written by Andrew Galan and published in Establishment Magazine Issue 1.
Josef is included in the "Good Friends" Character DLC pack for Runner2.
Josef is featured as an assist character in the upcoming fighting game Fraymakers.
In 2019, Tomáš Dvořák, the soundtrack composer for Machinarium better known under his pseudonym Floex, was performing live shows with a robotic version of Josef who plays various beat instruments. The performance itself consists of reworks of the soundtrack.
See also
Video games in the Czech Republic
Art game
Gobliiins
The Humble Indie Bundle
References
External links
Official site with online Flash demo
2009 video games
Android (operating system) games
Adventure games
Amanita Design games
Art games
BlackBerry PlayBook games
Cancelled Xbox 360 games
Cancelled Wii games
Flash games
Independent Games Festival winners
Indie video games
Linux games
MacOS games
PlayStation 3 games
PlayStation 4 games
PlayStation Network games
PlayStation Vita games
Point-and-click adventure games
Puzzle video games
Single-player video games
Steampunk video games
Video games developed in the Czech Republic
Windows games
Video games about robots
Nintendo Switch games
IOS games |
23871184 | https://en.wikipedia.org/wiki/Find%20My%20iPhone | Find My iPhone | Find My iPhone (known as Find My Mac in macOS) was an app and service provided by Apple Inc. that allowed remote location-tracking of iOS devices, Mac computers, Apple Watch, and AirPods. Both Find My iPhone and Find My Friends were combined into the app Find My in iOS 13 and iPadOS 13 in 2019.
The service itself was integrated into iOS and macOS, while enabled devices could be tracked using either an iOS app or the iCloud website. On iOS 8 and older, the app could be downloaded from the App Store free of charge. Starting with iOS 9, the app has been bundled with the operating system.
For the app to work, both the tracking device and the device being located had to be supported devices with the Find My iPhone app installed and Location Services turned on, and both had to be connected to the same iCloud account.
Features
Find My iPhone allowed users to locate their iOS devices using either the iOS app or iCloud on a computer (such as a desktop). In addition to locating a device, the service provided three additional options:
Play sound – makes the device play a sound at maximum volume and flash its screen, even if the device is muted. This feature is useful if the device has been mislaid, and is equivalent to finding a mislaid phone by calling it from another phone.
Lost mode (iOS 6 or later) – flags the device as lost or stolen, allowing the user to lock it with a passcode. If the device is an iPhone and someone finds the device, they can call the user directly on the device.
Erase iPhone – completely erases all content and settings, which is useful if the device contains sensitive information, but the device cannot be located after this action is performed. Starting with iOS 7, after the erase is complete, the message can still be displayed and the device remains activation locked. This makes it hard for someone to use or sell the device. An Apple ID password is required to turn off Find My iPhone, sign out of iCloud, erase the device, or reactivate a device after a remote wipe.
The update with iOS 6 added the ability to check the device's battery level.
Since the release of iOS 7, users complained about the link between GPS, Wi-Fi, and the app itself. Some handset owners noted that the app enabled and disabled itself when passing between cellular protocol bandwidths.
Requirements
For the Find My iPhone app to work, the user had to set up an iCloud account to create an Apple ID. Each device to be tracked had to be linked to the same Apple ID, and the Location Services feature also had to be turned on on each of those devices. Location was determined using GPS in the iOS device when Location Services were turned on, but the location of the iOS device was only approximate. To turn Location Services on, users went to Settings > Privacy > Location Services, selected the Find My iPhone app in the list, and chose the "While Using the App" option; to deactivate the app, they chose the "Never" option instead. The user could also track the device by signing in to iCloud.com.
Find My iPhone was supported on iPhone, iPad, iPod Touch, and Mac computers running OS X 10.7.5 "Lion" or later. In addition to a compatible device, a free iCloud account was required to use Find My iPhone. Users could also track their Find My iPhone-enabled devices through iCloud on Windows, but could not use it the other way around to track their PC.
History
Find My iPhone was released initially as an app in June 2010 for users of MobileMe. With iOS 4.2 in November 2010, Find My iPhone became available free of charge for devices running that release. With the release of iCloud in October 2011, the service became free for all iCloud users. The service was also made available as "Find My Mac" for Mac computers running OS X 10.7.2 "Lion" or later using iCloud. With the release of macOS Catalina, the Find My Mac app was combined with the Find My Friends app to create the new Find My app.
Incidents
In July 2011, a Zurich woman had her backpack including an iPhone stolen. Police were able to recover it the same day after matching the GPS location with the address of a police-known petty criminal.
In November 2011, police in Los Angeles, California were able to find an armed robbery suspect by using Find My iPhone on the victim's stolen iPhone.
On September 14, 2012, two suspects were arrested in Atlanta, Georgia for robbing five women at gunpoint. Police were able to locate the suspects by using Find My iPhone to find one of the stolen iPhones.
Since early 2011, some Sprint users who used the app to find their lost device were sent to a 59-year-old man's house in Las Vegas, Nevada. Multiple people insisted that he had their device and the police were called multiple times. The man eventually had to put up a sign by his door saying that he had "no lost cell phones".
On January 16, 2015, a Langley, British Columbia woman had her iMac stolen during a break-in at her home. Nearly a month later, she received a notification on her phone then contacted police who found and arrested two men just as they were attempting to escape out a back door.
In November 2016, the husband of Sherri Papini located her cell phone and ear buds on a street corner, where his wife was kidnapped.
See also
AirTags
Find My Device
Find My Friends
iCloud
MobileMe
References
External links
IOS software
IOS
Discontinued software
Internet geolocation |
62646499 | https://en.wikipedia.org/wiki/2019%E2%80%9320%20Little%20Rock%20Trojans%20men%27s%20basketball%20team | 2019–20 Little Rock Trojans men's basketball team | The 2019–20 Little Rock Trojans men's basketball team represented the University of Arkansas at Little Rock in the 2019–20 NCAA Division I men's basketball season. The Trojans, led by second-year head coach Darrell Walker, played their home games at the Jack Stephens Center in Little Rock, Arkansas as members of the Sun Belt Conference. They finished the season 21–10, 15–5 in Sun Belt play, to win the Sun Belt regular season championship. They were the No. 1 seed in the Sun Belt Tournament; however, the tournament was cancelled amid the COVID-19 pandemic. Due to the cancellation, they were awarded the Sun Belt's automatic bid to the NCAA Tournament, but the NCAA Tournament was also cancelled due to the same outbreak.
Previous season
The Trojans finished the 2018–19 season 10–21, 5–13 in Sun Belt play to finish in a tie for last place. They failed to qualify for the Sun Belt Tournament.
Roster
Schedule and results
The schedule comprised exhibition games, the non-conference regular season, the Sun Belt Conference regular season, and the Sun Belt Tournament. The Trojans' only scheduled Sun Belt Tournament game was the semifinal: March 14, 2020, 11:30 am on ESPN+, (1) Little Rock vs. (5) Georgia Southern at the Smoothie King Center in New Orleans, LA, cancelled due to the COVID-19 pandemic.
References
Little Rock Trojans men's basketball seasons
Little Rock Trojans
Little Rock Trojans men's basketball |
37508279 | https://en.wikipedia.org/wiki/Build%20%28conference%29 | Build (conference) | Microsoft Build is an annual conference event held by Microsoft, aimed at software engineers and web developers using Windows, Microsoft Azure and other Microsoft technologies. First held in 2011, it serves as a successor to Microsoft's previous developer events, the Professional Developers Conference (an infrequent event which covered development of software for the Windows operating system) and MIX (which covered web development centering on Microsoft technology such as Silverlight and ASP.NET). The attendee price was US$2,195 in 2016, up from $2,095 in 2015. It sold out quickly, within one minute of the registration site opening in 2016.
Format
The event has been held at a large convention center or purpose-built meeting space on the Microsoft campus. The keynote on the first day has been led by the Microsoft CEO, addressing the press and developers, and has been the place to announce general technology milestones for developers. There are breakout sessions conducted by engineers and program managers, most often Microsoft employees representing their particular initiatives. The keynote on the second day often includes deeper dives into the technology. Thousands of developers and technologists from all over the world attend.
Events
2011
Build 2011 was held from September 13 to September 16, 2011 in Anaheim, California. The conference heavily focused on Windows 8, Windows Server 2012 and Visual Studio 2012; their Developer Preview versions were also released during the conference. Attendees also received a Samsung tablet shipping with the Windows 8 "Developer Preview" build.
2012
Held on Microsoft's campus in Redmond from October 30 to November 2, 2012, the 2012 edition of Build focused on the recently released Windows 8, along with Windows Azure and Windows Phone 8. Attendees received a Surface RT tablet with Touch Cover, a Nokia Lumia 920 smartphone, and 100GB of free SkyDrive storage.
2013
Build 2013 was held from June 26 to June 28, 2013 at the Moscone Center (North and South) in San Francisco. The conference was primarily used to unveil the Windows 8.1 update for Windows 8. Each attendee received a Surface Pro, Acer Iconia W3 (the first 8-inch Windows 8 tablet) with a Bluetooth keyboard, one year of Adobe Creative Cloud and 100GB of free SkyDrive storage.
2014
Build 2014 was held at the Moscone Center (West) in San Francisco from April 2 to April 4, 2014. Build attendees received a free Xbox One and a $500 Microsoft Store gift card.
Highlights:
Windows Display Driver Model 2.0 and DirectX 12
Microsoft Cortana
Windows Phone 8.1
Windows 8.1 Spring Update
Windows free on all devices with a screen size of 9" or less and on IoT
Bing Knowledge widget and app linking
.NET Native (Announcement, Product Page)
.NET Compiler Platform (Roslyn)
Visual Studio 2013 Update 2 RC
Team Foundation Server 2013 Update 2 RTM
TypeScript 1.0
.NET Foundation
2015
Build 2015 was held at the Moscone Center (West) in San Francisco from April 29 to May 1, 2015. The registration fee was $2,095; registration opened at 9:00 am PST on Thursday, January 22 and "sold out" in under an hour, with an unspecified number of attendees. Build attendees received a free HP Spectre x360 ultrabook.
Highlights:
Windows 10
Windows 10 Mobile
HoloLens and Windows Holographic
Windows Server 2016
Microsoft Exchange Server 2016
Visual Studio 2015
Visual Studio Code
2016
Build 2016 was held at the Moscone Center in San Francisco from March 30 to April 1, 2016. The price was $2195, an increase of $100 compared to the previous year. The conference was sold out in 1 minute. Unlike previous years, there were no hardware gifts for attendees.
Highlights:
Windows Subsystem for Linux
Cortana chatbot on Skype
"Power of the Pen and the PC"
.NET Standard Library
ASP.NET Core
Browser extension support for Edge
Windows 10 Anniversary Update
Xamarin
Free for individuals, open source projects, academic research, education, and small professional teams.
Remoted iOS Simulator for Windows
2017
The 2017 Build conference took place at the Washington State Convention Center in Downtown Seattle, Washington from May 10 to May 12, 2017. It had been at the Moscone Center for the previous four years; however, the Moscone Center was undergoing renovations from April through August 2017. The Seattle location brought the conference close to the Microsoft headquarters in Redmond, Washington. The price remained at $2,195 for the 2017 conference. There were no devices given away to attendees at this conference.
Highlights:
Azure Cosmos DB
Visual Studio for Mac
WSL: Fedora and SUSE support
Xamarin Live Player
Windows 10 Fall Creators Update
Microsoft Fluent Design System
2018
The 2018 Build conference took place at the Washington State Convention Center in Downtown Seattle, Washington from May 7 to May 9, 2018. The price increased by $300 to $2,495 for the 2018 conference. The conference was preceded by the Windows Developer Awards 2018 ceremony.
Highlights:
.NET
.NET Core 3
ML.NET
Azure
Azure CDN
Azure Confidential Computing
Azure Database Migration Service
Azure Maps
Microsoft 365
Microsoft Store: increased developer revenue share (95%; Non-Game App via deeplink only)
Visual Studio
App Center
IntelliCode
Live Share
Windows 10 Redstone 5
Cloud Clipboard
Microsoft Notepad: Unix/Linux EOL support
Xamarin
Hyper-V Android Emulator
Automatic iOS Device Provisioning
Xamarin.Essentials
Xamarin.Forms 3.0
2019
The 2019 Build conference took place at the Washington State Convention Center in Downtown Seattle, Washington from May 6 to May 8, 2019, plus optional post-event learning activities on the next two days. The price decreased by $100 to $2395 for the 2019 conference. Registration started on February 27.
Highlights:
.NET 5: next multi-platform .NET Core
Azure: Azure SQL Database Edge
Fluid Framework
Visual Studio: IntelliCode
Visual Studio Code: Remote Development Extension Pack
Visual Studio Online
Windows Subsystem for Linux 2
Windows Terminal: cmd.exe, PowerShell, and WSL in tabs
2020
Microsoft announced the dates for Build and their other large conferences on September 16, 2019, with pricing set at $2395. The physical 2020 Build conference, scheduled to take place in downtown Seattle, Washington from May 19 to May 21, 2020, was initially cancelled due to the coronavirus pandemic. On April 20, 2020, Microsoft opened sign-ups for a replacement virtual event, held on the same dates as the originally intended physical event; the virtual event was free of charge.
Highlights:
.NET Multi-platform App UI (.NET MAUI)
Windows Subsystem for Linux (WSL)
GPU support (for CUDA and DirectML)
GUI support (WSLg)
Windows Package Manager
2021
The 2021 conference, once again a free-of-charge virtual event, was held on May 25 to 27, 2021.
Highlights:
.NET 6
Azure
Azure AI Services
Azure Bot Service
Azure Metrics Advisor
Azure Video Analyzer
Azure Cognitive Services
Power Fx
PyTorch Enterprise on Microsoft Azure
Microsoft Build of OpenJDK
Windows Subsystem for Linux GUI (WSLg)
Attendee Party Venues
2011: The Grove
2012: Seattle Armory
2013: Pier 48
2014: AMC Metreon
2015: AMC Metreon
2016: Block Party Yerba Ln
2017: CenturyLink Field
2018: Museum of Pop Culture / Chihuly Garden and Glass
2019: CenturyLink Field
See also
Microsoft
WWDC
Google I/O
Developer conference
References
External links
Build Event on MSDN Channel 9
Build Sessions on Microsoft.com
DevBlogs on Microsoft.com
Microsoft conferences |
5048060 | https://en.wikipedia.org/wiki/Self-Protecting%20Digital%20Content | Self-Protecting Digital Content | Self-Protecting Digital Content (SPDC) is a copy protection (digital rights management) architecture which allows restriction of access to, and copying of, the next generation of optical discs and streaming/downloadable content.
Overview
Designed by Cryptography Research, Inc. of San Francisco, SPDC executes code from the encrypted content on the DVD player, enabling the content providers to change DRM systems in case an existing system is compromised. It adds functionality to make the system "dynamic", as opposed to "static" systems in which the system and keys for encryption and decryption do not change, thus enabling one compromised key to decode all content released using that encryption system. "Dynamic" systems attempt to make future content released immune to existing methods of circumvention.
Playback method
If a method of playback used in previously released content is revealed to have a weakness, either by review or because it has already been exploited, code embedded into content released in the future will change the method, and any attackers will have to start over and attack it again.
Targeting compromised players
If a certain model of player is compromised, code specific to that model can be activated to verify whether a particular player has been compromised. A compromised player can be "fingerprinted", and that information can be used later.
Forensic marking
Code inserted into content can add information to the output that specifically identifies the player, and in a large-scale distribution of the content, can be used to trace the player. This may include the fingerprint of a specific player.
Weaknesses
If an entire class of players is compromised, it is infeasible to revoke the ability to use the content on the entire class because many customers may have purchased players in the class. A fingerprint may be used to try to work around this limitation, but an attacker with access to multiple sources of video may "scrub" the fingerprint, removing the fingerprint entirely or rendering it useless at the very least.
Because dynamic execution requires a virtual environment, it may be possible to recreate an execution environment on a general purpose computer that feeds the executing code whatever an attacker wants the code to see in terms of digital fingerprints and memory footprints. This allows players running on general purpose computers to emulate any specific model of player, potentially by simply downloading firmware updates for the players being emulated. Once the emulated execution environment has decrypted the content, it can then be stored in decrypted form.
Because the content encryption scheme (such as BD+) is separate from the transport encryption scheme (such as HDCP), digital content is transferred inside the player between circuits in unencrypted form. It is possible to extract digital data directly from circuit traces inside a licensed and legal player before that content has been re-encrypted for transport across the wire, allowing a modified player to be used as a decryption device for protected content. Only one such device must exist for the content to be widely distributed over digital networks such as the Internet.
The final weakness of all DRM schemes for noninteractive works is the ultimate decryption for display to end users. The content can at that point be re-encoded as a digital file. The presumption is that re-encoding is lossy, but fully digital copies can be made with modified viewing devices. For example, HDCP-to-unencrypted-DVI adapters exist on the market and can be used by infringers to re-encode digital copies without modifying players. There also exist adapters that split an HDCP-encumbered HDMI stream into unencrypted DVI and S/PDIF streams, both digital, allowing for next-to-lossless reconstruction of digital copies with complete video and audio streams. Further, infringers can make copies through the analog hole. Modern HD televisions are merely 2 megapixels in resolution, and the HD specification will remain static for at least two decades, as high-expense consumer product cycles are necessarily long and higher resolution provides decreasing benefit to the consumer. By the time the specification is mid-life, cameras with 20-megapixel resolution will be available and able to record full-motion video, allowing for full two-axis oversampling and software reconstruction of the original stream pixel by pixel, with the only analog losses encoded as slight variations in pixel color; even this loss can be compensated for with color-profile adjustment after the re-encode has completed. It would not be possible to compensate for possible compression of the color-space dynamic, however, leading to a slight posterizing effect. This effect is already apparent in compressed video and does not seem to bother most consumers.
External links
About Self-Protecting Digital Content
Self-Protecting Digital Content - A Technical Report from the CRI Content Security Research Initiative
Digital rights management
DVD |
54217580 | https://en.wikipedia.org/wiki/Cycnus%20of%20Kolonai | Cycnus of Kolonai | In Greek mythology, Cycnus (Ancient Greek: Κύκνος, meaning "swan") or Cygnus was the king of the town of Kolonai in the southern Troad.
Family
Cycnus was the son of Poseidon by Calyce (daughter of Hecaton), Harpale, or by Scamandrodice. According to John Tzetzes, his mother Scamandrodice abandoned him on the seashore, but he was rescued by fishermen who named him Cycnus "swan" because they saw a swan flying over him. In another account, he was said to have had womanly white skin and fair hair, which was why he received his name that meant "swan".
Cycnus married first Procleia, daughter of King Laomedon of Troy or of Laomedon's son Clytius. Cycnus and Procleia had two children, named Tenes and Hemithea, although Tenes claimed the god Apollo as his father. On Procleia's death, Cycnus married Philonome, daughter of Tragasus (Cragasus), also known as Polyboea or Scamandria.
Dictys Cretensis mentions three more children of Cycnus: two sons, Cobis and Corianus, and a daughter Glauce.
Mythology
Philonome fell in love with her handsome stepson, Tenes. Tenes rejected Philonome's advances, whereupon Philonome falsely accused Tenes before her husband of having ravished her. Cycnus ordered both his children to be placed in a chest and thrown into the sea. However, Cycnus later discovered the truth and had Philonome buried alive. When he found that his children had survived and were reigning at Tenedos, he sailed there intending to reconcile with them, but Tenes cut the anchor rope of his ship.
Cycnus later supported the Trojans in the Trojan War, and fought valiantly, killing one thousand opponents according to Ovid. According to some accounts he killed the Greek hero Protesilaus, but according to others, Cycnus attacked the Greek camp when the funeral of Protesilaus was underway. It was said that Cycnus, being the son of Poseidon, was invulnerable to spear and sword attack. When Achilles confronted Cycnus he could not kill him via conventional weaponry so he crushed and suffocated him. After his death, Cycnus was changed into a swan. Later, the Greek army invaded Cycnus's kingdom, but the people of Colonae implored them to spare the city. The Greek leaders agreed, on condition that Cobis, Corianus and Glauce be handed over to them, and made a truce with the citizens.
Notes
References
Conon, Fifty Narrations, surviving as one-paragraph summaries in the Bibliotheca (Library) of Photius, Patriarch of Constantinople translated from the Greek by Brady Kiesling. Online version at the Topos Text Project.
Dictys Cretensis, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at the Topos Text Project.
Diodorus Siculus, The Library of History translated by Charles Henry Oldfather. Twelve volumes. Loeb Classical Library. Cambridge, Massachusetts: Harvard University Press; London: William Heinemann, Ltd. 1989. Vol. 3. Books 4.59–8. Online version at Bill Thayer's Web Site
Diodorus Siculus, Bibliotheca Historica. Vol 1-2. Immanuel Bekker. Ludwig Dindorf. Friedrich Vogel. in aedibus B. G. Teubneri. Leipzig. 1888-1890. Greek text available at the Perseus Digital Library.
Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project.
Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library
Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library.
Publius Ovidius Naso, Metamorphoses translated by Brookes More (1859-1942). Boston, Cornhill Publishing Co. 1922. Online version at the Perseus Digital Library.
Publius Ovidius Naso, Metamorphoses. Hugo Magnus. Gotha (Germany). Friedr. Andr. Perthes. 1892. Latin text available at the Perseus Digital Library.
Strabo, The Geography of Strabo. Edition by H.L. Jones. Cambridge, Mass.: Harvard University Press; London: William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Strabo, Geographica edited by A. Meineke. Leipzig: Teubner. 1877. Greek text available at the Perseus Digital Library.
Trojans
Kings in Greek mythology
Metamorphoses characters |
28351278 | https://en.wikipedia.org/wiki/Gurpreet%20Singh%20Lehal | Gurpreet Singh Lehal | Dr Gurpreet Singh Lehal (born 6 February 1963) is a professor in the Computer Science Department, Punjabi University, Patiala and Director of the Advanced Centre for Technical Development of Punjabi Language Literature and Culture. He is noted for his work in the application of computer technology in the use of the Punjabi language both in the Gurmukhi and Shahmukhi script.
A post graduate in Mathematics from Panjab University, he did his masters in Computer Science from Thapar Institute of Engineering and Technology and Ph.D. in Computer Science on Gurmukhi Optical Character Recognition (OCR) System from Punjabi University, Patiala.
Background
As a researcher, Dr. Lehal's main contribution has been the development of technologies related to the computerization of the Punjabi language. Prominent among these are the first Gurmukhi OCR, first bilingual Gurmukhi/Roman OCR, first Punjabi font identification and conversion system, first multi-font Punjabi spell checker, first high-accuracy Gurmukhi-Shahmukhi and Shahmukhi-Gurmukhi transliteration systems and first intelligent predictive Roman-Gurmukhi transliteration techniques for simplifying Punjabi typing. Dr. Lehal has published more than 100 research papers in various national and international journals and conference proceedings. He has handled research projects worth more than 43 million rupees, including three international projects, which were awarded in an open competition among contestants from more than 30 countries. As a software engineer, he has developed more than 25 software systems, including the first commercial Punjabi word processor, Akhar. As an academician, he has taught and supervised the research activity of postgraduate and doctorate students, guiding more than 100 postgraduate research scholars and 11 PhD students on various topics related to the computerization of the Punjabi, Hindi, Urdu and Sindhi languages.
Main achievements
Dr. Lehal has been working for more than fifteen years on different projects related to the computerization of the Punjabi, Hindi, Urdu and Sindhi languages and has been a pioneer in developing technical solutions for these languages. Many new technologies have been developed by him for the first time, including intelligent predictive Roman-Gurmukhi transliteration techniques for simplifying Punjabi typing, a Punjabi spell checker, an intelligent Punjabi and Hindi font converter, a bilingual Gurmukhi/Roman OCR and Sindhi-Devnagri transliteration. Many other products for popularizing Punjabi and breaking script and language barriers have been developed under his leadership. Some of these products, which are being widely used, include a multimedia-based website for Punjabi teaching, a Gurmukhi-Shahmukhi transliteration utility, Punjabi-Hindi translation software, Urdu-Hindi transliteration software, a Punjabi search engine, a Punjabi text-to-speech synthesis system, a Punjabi text summarization system and a Punjabi grammar checker.
Language Software and Technologies developed
First Gurmukhi Optical Character Recognition System
First Bilingual Gurmukhi/Roman Optical Character Recognition System
First Punjabi word processor (Akhar)
First Intelligent Predictive Romanized typing utility for Gurmukhi text
First Punjabi font to Unicode & Reverse conversion utility
First Intelligent Punjabi/Hindi Font Recognition System
First Sindhi to Devnagri and reverse Transliteration System
First Punjabi Text Summarization System (Project Leader)
First Punjabi Text to Speech Synthesis System (Project Leader)
Urdu Optical Character Recognition System
Sodhak:: Punjabi Spell Checker
Urdu/Kashmiri to Roman Script Transliteration Software
Urdu to Devnagri Transliteration Software
Devnagri to Urdu Transliteration Software
Gurmukhi-Shahmukhi (Urdu) transliteration Software
PunjabiKhoj, Customized Search Engine for Punjabi (Project Leader)
Shahmukhi (Urdu) to Gurmukhi Transliteration online software (Project Leader)
Online Punjabi teaching website (Project Leader)
Multi-media enabled Gurmukhi-Shahmukhi-English Dictionary (Project Leader)
Punjabi to Hindi Machine Translation System (Project Leader)
Hindi to Punjabi Machine Translation System (Project Leader)
Punjabi Grammar Checker (Project Leader)
Punjabi Morphological Analyser & Generator (Project Leader)
Gurmukhi to Roman Transliteration System (Project Leader)
External links
Balle Balle Software, The Tribune, 21/8/2004
Software to convert Punjabi script to Shahmukhi script, The Tribune, 6/9/2004
Breaking the script barrier, Asian Affairs, May 2009
Punjabi varsity develops 'text-to-speech' software for blind, Times of India, 22 December 2012
Patiala University's online Punjabi spellchecker hailed, Hindustan Times, 30 August 2014
Software to melt India, Pakistan’s Sindhi script barrier, Times of India, 3 September 2014
Living people
1963 births
Punjabi University faculty |
47314270 | https://en.wikipedia.org/wiki/Meizu%20M2%20Note | Meizu M2 Note | The Meizu M2 Note is a smartphone designed and produced by the Chinese manufacturer Meizu, which runs on Flyme OS, Meizu's modified Android operating system. It is a previous phablet model of the M series, succeeding the Meizu M1 Note and preceding the Meizu M3 Note. It was unveiled on June 2, 2015 in Beijing.
History
Initial rumors appeared in May 2015 after a possible render image of the new device had been leaked. Unlike its predecessor, the M1 Note, the new device would have a physical home button instead of a capacitive one.
On May 29, the specifications of the device were allegedly sighted on the Android benchmark application GFXBench. According to this release, the device would feature 2 GB of RAM and a MediaTek MTK MT6753 system-on-a-chip.
The following day, Meizu officially confirmed that there would be a launch event for the M2 Note in Beijing on June 2.
Release
As announced, the M2 Note was released in Beijing on June 2, 2015.
The M2 Note was launched in India on August 10, 2015.
Over 1.2 million devices have been sold in the first month after the release.
Features
Flyme
The Meizu M2 Note was released with an updated version of Flyme OS, a modified operating system based on Android Lollipop. It features an alternative, flat design and improved one-handed usability.
Hardware and design
The Meizu M2 Note features a MediaTek MTK MT6753 system-on-a-chip with an array of eight ARM Cortex-A53 CPU cores, an ARM Mali-T720 MP3 GPU and 2 GB of RAM.
The M2 Note reaches a score of 31,890 points on the AnTuTu benchmark.
The M2 Note is available in four different colors (white, blue, pink and grey) and comes with 2 GB of RAM and 16 GB or 32 GB of internal storage.
The M2 Note measures x x and weighs . It has a slate form factor, being rectangular with rounded corners and has only one central physical button at the front.
Unlike most other Android smartphones, the M2 Note has neither capacitive buttons nor on-screen buttons. The functionality of these keys is implemented using a technology called mBack, which makes use of gestures with the physical button. Unlike some other Meizu devices, a fingerprint sensor isn't integrated into the home button.
The M2 Note features a fully laminated 5.5-inch IGZO capacitive touchscreen display with an FHD resolution of 1080 by 1920 pixels. The pixel density of the display is 403 ppi.
In addition to the touchscreen input and the front key, the device has volume/zoom control buttons and the power/lock button on the right side, a 3.5mm TRS audio jack on the top and a microUSB (Micro-B type) port on the bottom for charging and connectivity.
The Meizu M2 Note has two cameras. The rear camera has a resolution of 13 MP, a ƒ/2.2 aperture, a 5-element lens, autofocus and an LED flash.
The front camera has a resolution of 5 MP, a ƒ/2.0 aperture and a 4-element lens.
Reception
The M2 Note received generally positive reviews.
Android Authority gave the M2 Note a rating of 8 out of 10 possible points and concluded that “the Meizu M2 Note packs a very large punch for its price”. Furthermore, the build quality, battery life and the good display were praised.
PhoneArena rated the M2 Note with 9 out of 10 possible points and stated that the “Meizu m2 Note has a lot to offer, as it’s a phone that comes with no huge compromises”.
Android Headlines also reviewed the device and concluded that “the M2 Note is a great looking smartphone and can get everything you need done”.
See also
Meizu
Meizu M1 Note
Meizu M3 Note
Comparison of smartphones
References
External links
Official product page Meizu
Android (operating system) devices
Mobile phones introduced in 2015
Meizu smartphones
Discontinued smartphones |
14640467 | https://en.wikipedia.org/wiki/GridPoint | GridPoint | GridPoint is a cleantech company that provides energy management and sustainability services to enterprises and government agencies, such as electric utilities.
GridPoint services include building management systems, equipment-level submetering and monitoring hardware, software analytics, and related energy management services.
History and growth
GridPoint was founded in 2003 by Peter L. Corsell. Through its early growth and into 2009, GridPoint's Smart Grid Platform provided an intelligent network for utilities to integrate load measurement and control devices, energy storage technologies, and renewable energy sources into the electric grid.
In 2009, the company began leveraging the data and analytics research and knowledge from its utility-facing smart grid technologies to develop energy management solutions for the enterprise and government sectors.
Later that year, GridPoint acquired ADMMicro, a developer of energy management systems for the commercial and industrial (C&I) sector and Lixar, a cloud-based energy management software technology company.
In 2013, former Berkshire Hathaway executive Todd Raba joined GridPoint as CEO. Raba had previously run the Berkshire Hathaway companies Johns Manville and MidAmerican Energy Company.
In 2016, Mark Danzenbaker became GridPoint's CEO. He had been with GridPoint since 2009.
In 2018, GridPoint partnered with Shell and Sparkfund to launch a new smart building subscription solution for commercial businesses.
In October 2019, GridPoint announced an investment by Hannon Armstrong which allows the company to offer its energy management platform as an all-inclusive service, requiring zero capital down with a monthly pricing structure.
Technology and services
GridPoint's integrated energy management solution includes hardware, software and services that collect data about equipment-level energy consumption and building environmental conditions, aggregate and analyze that data, and then communicate what actions can be taken to reduce energy consumption and carbon emissions, improve operational efficiency and capital utilization, and help ensure business continuity.
Energy management hardware
GridPoint's real-time controllers manage lighting schedules and HVAC (heating, ventilation and air conditioning) temperature setpoints across a network of facilities. Equipment-level submeters measure circuit-level power consumption for equipment such as individual HVAC units, chiller boiler systems, lighting, refrigeration, kitchen equipment, plug loads and signage and monitoring devices collect environmental data such as temperature, humidity and CO2 levels. Captured data is then fed into the GridPoint Energy Manager software platform.
Energy management software
GridPoint's energy management software platform, GridPoint Energy Manager, is a cloud-based data aggregation and analytics service that presents equipment-level energy consumption and building environmental information through SaaS-based dashboards, reports and alarms. The software platform can be used with either GridPoint equipment or third-party hardware via open, standards-based public interfaces. The software platform also includes demand response functionality and distributed energy resources integration.
Energy management services
GridPoint's energy management services include energy advisory services, advanced reporting, custom savings analyses, alarm management, facility triage, equipment diagnoses and training.
Utility solutions
GridPoint's current utility solutions include enterprise energy management and submetering systems to support utilities' commercial and industrial customers by providing information about energy consumption patterns and load and storage management solutions that capture and dispatch energy by storage assets located in a utility's transmission and distribution system. For residential utility customers, GridPoint provides software-based dashboards that provide information about energy consumption, predicted energy usage and carbon impact.
GridPoint was designated as a 2008 Technology Pioneer by the World Economic Forum.
GridPoint's institutional investors include Goldman Sachs and New Enterprise Associates (NEA).
References
External links
Energy companies of the United States
Energy conservation in the United States
Software companies based in Virginia
Companies based in Arlington County, Virginia
Software companies of the United States |
3446750 | https://en.wikipedia.org/wiki/Lycomedes%20of%20Thebes | Lycomedes of Thebes | In Greek mythology, Lycomedes (Ancient Greek: Λυκομήδης Lykomedes) was a Theban warrior who served as an armed sentry alongside Thrasymedes, son of Nestor, during the Trojan War.
Family
Lycomedes was the son of the Theban regent Creon of Thebes, possibly by his wife Eurydice or Henioche, and thus the brother of Menoeceus (Megareus), Haemon, Megara, Pyrrha and Henioche.
Mythology
Lycomedes fought on the side of the Argives in the Trojan War. No really significant background is given about him in the Iliad; he was listed among the younger leaders and was not a king but of second rank. In the tenth year of the struggle, when the Trojans had surrounded the Greeks in their ships' camp, Lycomedes stood in Book IX as one of the seven guard commanders at the Greek wall at nighttime. The other six captains of the sentinels were Thrasymedes, Ascalaphus, Ialmenus, Meriones, Aphareus and Deïpyrus.
When Telamonian Ajax and Teucer had to leave their position during Hector's assault on the wall to deal with Sarpedon's division, Ajax ordered Lycomedes to help Ajax the Lesser withstand Hector's press. He also continued in action when Hector and the Trojan forces broke through the Greek wall. A day later, when Patroclus threw himself back into battle and the Greeks managed to break through the encirclement, Lycomedes' comrade-in-arms Liocritus was killed. With great sadness, Lycomedes, who saw what had happened, rushed to his dead friend. Once there, he cast his bright spear and smote the Trojan leader Apisaon in the liver below the midriff, and straightway loosed his knees.
Later on, Lycomedes was one of the Greeks who took gifts for Achilles from the tent of King Agamemnon as the two decided to settle their dispute. During later fighting, Lycomedes was wounded by the Trojan Agenor on the wrist, or on the head and ankle.
Notes
References
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. Greek text available at the Perseus Digital Library.
Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. Online version at the Perseus Digital Library
Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library.
Achaeans (Homer)
4654822 | https://en.wikipedia.org/wiki/Saint%20James%20School%20%28Montgomery%2C%20Alabama%29 | Saint James School (Montgomery, Alabama) | Saint James School is an independent, nonsectarian, college preparatory school located in Montgomery, Alabama, United States. Established in 1955, Saint James School, Montgomery's oldest private school, serves about 900 students in pre-kindergarten through grade 12.
History
Saint James School began in 1955 as a small independent elementary school, housed in Saint James United Methodist Church, committed to developing the individual potential within each white child entrusted to its care. In 1970, the school's enrollment doubled after the court ordered the desegregation of public schools. That same year, the school purchased property and opened a second campus, providing classrooms and facilities to meet the needs of a growing student body population including junior and high school students.
In 1972, a federal judge prohibited the city of Montgomery from allowing the school and three other private schools from using city recreational facilities, due to the fact that the schools in question either refused to admit black students by policy or claimed to accept black students and teachers but remained all white.
By 1974, Saint James School had 951 students enrolled in kindergarten through twelfth grade and graduated its first high school class. The two campuses united two years later.
In 1976 the Saint James School, along with the Montgomery Academy, was named in a suit filed against United States Secretary of the Treasury William Simon and Commissioner of Internal Revenue Donald C. Alexander by five black women from Montgomery charging that the two men had encouraged the development of segregated schools by allowing them tax-deductible status.
In 1982, the school leased a new facility on a 30-acre campus, in the city's booming eastern section, which would serve as key to the school's future development and expansion. In 1984, Winton Blount, benefactor of the Alabama Shakespeare Festival, endowed Saint James School the funds to build a state-of-the art Fine Arts Building. Blount's commitment changed the future of Saint James by challenging school leaders to adopt a dedication to excellence in the fine arts.
In 1991, a modern new high school building was constructed on the Vaughn Road campus, and after a devastating tornado destroyed the school's elementary site on Vaughn Road (in the early morning hours on March 6, 1996) more construction was soon underway. Within 18 months, a new campus greeted returning students. Designed and organized like a small college campus, the modern facilities included a new middle school, a new high school building, a large gym, a performing arts building, as well as brand new elementary buildings. Both Saint James campuses were consolidated at the Vaughn Road site in 2002-03, enabling the co-location of elementary, middle, and high schools all at one site.
Saint James School admits students of any race, religion, color, gender, creed, and national and ethnic origin. The school serves a robust international population with students from approximately 15 different countries each year. In addition, the school offers a wide breadth of innovative competitive academic, athletic, and award-winning visual/performing programs that help develop well-rounded students and prepare them for lives of responsibility, service, and achievement.
Athletics
Saint James School is a member of the Alabama High School Athletic Association. The school's athletic teams, the Trojans, compete in Division 3A, except the school's volleyball team that competes in 5A after winning the state championship in 2017.
Athletics include baseball, fast pitch softball, basketball, cheerleading, cross country, football, golf, soccer, tennis, track, volleyball, equestrian, and wrestling.
The Trojans baseball team won state championships in Division 2A in 1991 and in Division 4A in 2006.
The Lady Trojans softball team won Division 4A state championships in 1996 (1A-4A), 2001, 2005, 2006 and 2008.
The Lady Trojans volleyball team won Division 4A state championships in 2001, 2003, and 2017.
The Lady Trojans indoor track team won a state title in 2002 in 1A-4A.
The Lady Trojans tennis team won a state title in 2003 under division 4A and another title in 2019 under division 1A-3A.
Varsity wrestling won a state title in 2013.
Varsity golf won a state title in 2015.
Varsity Girls Indoor Track and Field won the 4A state title in 2019 and the 1A-3A state title in 2020.
Varsity Girls Outdoor Track and Field won the 1A-3A state title in 2019.
References
External links
Saint James School website
Saint James Trojan athletics
Schools in Montgomery, Alabama
Private middle schools in Alabama
Private elementary schools in Alabama
Private high schools in Alabama
High schools in Montgomery, Alabama
Educational institutions established in 1955
Preparatory schools in Alabama
Segregation academies in Alabama
1955 establishments in Alabama |
920901 | https://en.wikipedia.org/wiki/Software%20versioning | Software versioning | Software versioning is the process of assigning either unique version names or unique version numbers to unique states of computer software. Within a given version number category (e.g., major or minor), these numbers are generally assigned in increasing order and correspond to new developments in the software. At a fine-grained level, revision control is often used for keeping track of incrementally-different versions of information, whether or not this information is computer software.
Modern computer software is often tracked using two different software versioning schemes: an internal version number that may be incremented many times in a single day, such as a revision control number, and a release version that typically changes far less often, such as semantic versioning or a project code name.
Schemes
A variety of version numbering schemes have been created to keep track of different versions of a piece of software. The ubiquity of computers has also led to these schemes being used in contexts outside computing.
Sequence-based identifiers
In sequence-based software versioning schemes, each software release is assigned a unique identifier that consists of one or more sequences of numbers or letters. This is the extent of the commonality; schemes vary widely in areas such as the number of sequences, the attribution of meaning to individual sequences, and the means of incrementing the sequences.
Change significance
In some schemes, sequence-based identifiers are used to convey the significance of changes between releases. Changes are classified by significance level, and the decision of which sequence to change between releases is based on the significance of the changes from the previous release, whereby the first sequence is changed for the most significant changes, and changes to sequences after the first represent changes of decreasing significance.
Depending on the scheme, significance may be assessed by lines of code changed, function points added or removed, the potential impact on customers in terms of work required to adopt a new version, risk of bugs or undeclared breaking changes, degree of changes in visual layout, the number of new features, or almost anything the product developers or marketers deem to be significant, including marketing desire to stress the "relative goodness" of the new version.
Semantic versioning (aka SemVer) is a widely adopted version scheme that uses a three-part version number (Major.Minor.Patch), an optional pre-release tag, and an optional build meta tag. In this scheme, risk and functionality are the measures of significance. Breaking changes are indicated by increasing the major number (high risk); new, non-breaking features increment the minor number (medium risk); and all other non-breaking changes increment the patch number (lowest risk). The presence of a pre-release tag (-alpha, -beta) indicates substantial risk, as does a major number of zero (0.y.z), which is used to indicate a work-in-progress that may contain any level of potentially breaking changes (highest risk). As an example of inferring compatibility from a SemVer version, software which relies on version 2.1.5 of an API is compatible with version 2.2.3, but not necessarily with 3.2.4.
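The compatibility inference in the example above can be sketched in a few lines of Python; parse_semver and is_compatible are illustrative helper names (not from any standard library), and the sketch ignores pre-release and build tags:

```python
# Illustrative sketch: inferring compatibility from SemVer version numbers.
def parse_semver(version):
    """Split 'major.minor.patch' into a tuple of ints, ignoring -pre and +build tags."""
    core = version.split("-")[0].split("+")[0]
    return tuple(int(part) for part in core.split("."))

def is_compatible(required, candidate):
    """Compatible if the major numbers match and the candidate is at least as new."""
    req, cand = parse_semver(required), parse_semver(candidate)
    return cand[0] == req[0] and cand >= req

print(is_compatible("2.1.5", "2.2.3"))  # True:  same major version, newer minor
print(is_compatible("2.1.5", "3.2.4"))  # False: the major version changed
```

Tuple comparison orders the three numeric parts element-wise, which matches the scheme's precedence rules for the version core.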
Developers may choose to jump multiple minor versions at a time to indicate that significant features have been added, but are not enough to warrant incrementing a major version number; for example Internet Explorer 5 from 5.1 to 5.5, or Adobe Photoshop 5 to 5.5. This may be done to emphasize the value of the upgrade to the software user, or, as in Adobe's case, to represent a release halfway between major versions (although levels of sequence based versioning are not necessarily limited to a single digit, as in Blender version 2.91 or Minecraft Java Edition after 1.10).
A different approach is to use the major and minor numbers, along with an alphanumeric string denoting the release type, e.g. "alpha" (a), "beta" (b), or "release candidate" (rc). A software release train using this approach might look like 0.5, 0.6, 0.7, 0.8, 0.9 → 1.0b1, 1.0b2 (with some fixes), 1.0b3 (with more fixes) → 1.0rc1 (which, if it is stable enough), 1.0rc2 (if more bugs are found) → 1.0. It is a common practice in this scheme to lock out new features and breaking changes during the release candidate phases, and for some teams, even betas are locked down to bug fixes only, to ensure convergence on the target release.
Other schemes impart meaning on individual sequences:
major.minor[.build[.revision]] (example: 1.2.12.102)
major.minor[.maintenance[.build]] (example: 1.4.3.5249)
Again, in these examples, the definition of what constitutes a "major" as opposed to a "minor" change is entirely subjective and up to the author, as is what defines a "build", or how a "revision" differs from a "minor" change.
Shared libraries in Solaris and Linux may use the current.revision.age format where:
current: The most recent interface number that the library implements.
revision: The implementation number of the current interface.
age: The difference between the newest and oldest interfaces that the library implements. This use of the third field is specific to libtool: others may use a different meaning or simply ignore it.
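The interface range implied by such a triple can be sketched as follows (supported_interfaces is a hypothetical helper, shown only to make the rule concrete):

```python
# Sketch of the libtool rule: a library with current.revision.age implements
# every interface number from (current - age) up to current.
def supported_interfaces(current, revision, age):
    return range(current - age, current + 1)

# For example, a library versioned 5.0.2 implements interfaces 3, 4 and 5:
print(list(supported_interfaces(5, 0, 2)))  # [3, 4, 5]
```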
A similar problem of relative change significance and versioning nomenclature exists in book publishing, where edition numbers or names can be chosen based on varying criteria.
In most proprietary software, the first released version of a software product has version 1.
Some projects use the major version number to indicate incompatible releases. Two examples are Apache Portable Runtime (APR) and the FarCry CMS.
Often programmers write new software to be backward compatible, i.e., the new software is designed to interact correctly with older versions of the software (using old protocols and file formats) and the most recent version (using the latest protocols and file formats). For example, IBM z/OS is designed to work properly with 3 consecutive major versions of the operating system running in the same sysplex.
This enables people who run a high availability computer cluster to keep most of the computers up and running while one machine at a time is shut down, upgraded, and restored to service.
Often packet headers and file formats include a version number – sometimes the same as the version number of the software that wrote it; other times a "protocol version number" independent of the software version number.
The code to handle old deprecated protocols and file formats is often seen as cruft.
Designating development stage
Software in the experimental stage (alpha or beta) often uses a zero in the first ("major") position of the sequence to designate its status. However, this scheme is only useful for the early stages, not for upcoming releases with established software where the version number has already progressed past 0.
A number of schemes are used to denote the status of a newer release:
Alphanumeric suffix is a common scheme adopted by semantic versioning. In this scheme, versions carry an affixed dash plus some alphanumeric characters to indicate the status.
Numeric status is a scheme that uses numbers to indicate the status as if it were part of the version sequence. A typical choice is the third position in a four-position versioning scheme.
Numeric 90+ is another scheme that also uses numbers, but places pre-releases under the number of the previous version: a large number in the last position, typically 90 or higher, is used. This is commonly used by older open-source projects like Fontconfig.
The two purely numeric forms remove the special logic required to handle the comparison of "alpha < beta < rc < no prefix" as found in semantic versioning, at the cost of clarity. (Semantic versioning does not actually prescribe particular terms for development stages; identifiers are simply compared in lexicographical order.)
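The ordering "alpha < beta < rc < no prefix" can be sketched as follows; this is an illustrative simplification that assumes a plain major.minor-stageN form rather than the full semantic-versioning grammar:

```python
# Sketch: ranking pre-release stages so that alpha < beta < rc < final release.
STAGE_RANK = {"alpha": 0, "beta": 1, "rc": 2}

def sort_key(version):
    core, _, suffix = version.partition("-")
    numbers = tuple(int(p) for p in core.split("."))
    if not suffix:
        return numbers + ((3, 0),)            # a final release outranks any pre-release
    stage = suffix.rstrip("0123456789")       # e.g. "rc1" -> stage "rc"
    n = int(suffix[len(stage):] or 0)         # ...and pre-release number 1
    return numbers + ((STAGE_RANK[stage], n),)

releases = ["1.0", "1.0-rc1", "1.0-alpha2", "1.0-beta1", "0.9"]
print(sorted(releases, key=sort_key))
# ['0.9', '1.0-alpha2', '1.0-beta1', '1.0-rc1', '1.0']
```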
Incrementing sequences
There are two schools of thought regarding how numeric version numbers are incremented. Most free and open-source software packages, including MediaWiki, treat versions as a series of individual numbers, separated by periods, with a progression such as 1.7.0, 1.8.0, 1.8.1, 1.9.0, 1.10.0, 1.11.0, 1.11.1, 1.11.2, and so on.
On the other hand, some software packages identify releases by decimal numbers: 1.7, 1.8, 1.81, 1.82, 1.9, etc. Decimal versions were common in the 1980s, for example with NetWare, DOS, and Microsoft Windows, but even in the 2000s have been for example used by Opera and Movable Type. In the decimal scheme, 1.81 is the minor version following 1.8, while maintenance releases (i.e. bug fixes only) may be denoted with an alphabetic suffix, such as 1.81a or 1.81b.
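The two interpretations order the same strings differently, which a short sketch makes visible:

```python
# Sketch: the same version strings ordered under the two interpretations.
versions = ["1.7", "1.8", "1.81", "1.82", "1.9", "1.10"]

as_decimal = sorted(versions, key=float)  # period read as a decimal point
as_sequences = sorted(versions, key=lambda v: tuple(int(p) for p in v.split(".")))

print(as_decimal)    # ['1.10', '1.7', '1.8', '1.81', '1.82', '1.9']
print(as_sequences)  # ['1.7', '1.8', '1.9', '1.10', '1.81', '1.82']
```

Under the decimal reading, 1.10 equals 1.1 and sorts before 1.7; under the sequence reading, 1.81 is a later release than 1.10.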
The standard GNU version numbering scheme is major.minor.revision, but Emacs is a notable example using another scheme where the major number (1) was dropped and a user site revision was added which is always zero in original Emacs packages but increased by distributors. Similarly, Debian package numbers are prefixed with an optional "epoch", which is used to allow the versioning scheme to be changed.
Resetting
In some cases, developers may decide to reset the major version number. This is sometimes used to denote a new development phase being released. For example, Minecraft Alpha ran from version 1.0.0 to 1.2.6, and when Beta was released, it reset the major version number and ran from 1.0 to 1.8. Once the game was fully released, the major version number again reset to 1.0.0.
Separating sequences
When printed, the sequences may be separated with characters. The choice of characters and their usage varies by the scheme. The following list shows hypothetical examples of separation schemes for the same release (the thirteenth third-level revision to the fourth second-level revision to the second first-level revision):
A scheme may use the same character between all sequences: 2.4.13, 2/4/13, 2-4-13
A scheme's choice of which sequences to separate may be inconsistent, separating some sequences but not others: 2.413
A scheme's choice of characters may be inconsistent within the same identifier: 2.4_13
When a period is used to separate sequences, it may or may not represent a decimal point—see "Incrementing sequences" section for various interpretation styles.
Number of sequences
There is sometimes a fourth, unpublished number which denotes the software build (as used by Microsoft). Adobe Flash is a notable case where a four-part version number is indicated publicly, as in 10.1.53.64. Some companies also include the build date. Version numbers may also include letters and other characters, such as Lotus 1-2-3 Release 1a.
Negative numbers
Some projects use negative version numbers. One example is the SmartEiffel compiler which started from −1.0 and counted upwards to 0.0.
Date of release
Many projects use a date-based versioning scheme called Calendar Versioning (aka CalVer).
Ubuntu Linux is one example of a project using calendar versioning; Ubuntu 18.04, for example, was released in April 2018. This has the advantage of being easily relatable to development schedules and support timelines. Some video games also use dates as version numbers, for example the arcade game Street Fighter EX; at startup it displays the version number as a date plus a region code, for example 961219 ASIA.
When using dates in versioning, for instance, file names, it is common to use the ISO 8601 scheme YYYY-MM-DD, as this is easily string-sorted in increasing or decreasing order. The hyphens are sometimes omitted. The Wine project formerly used a date versioning scheme, which used the year followed by the month followed by the day of the release; for example, "Wine 20040505".
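A brief sketch of why the YYYY-MM-DD layout string-sorts chronologically while other date layouts do not:

```python
# ISO 8601 version strings sort chronologically as plain strings.
builds = ["2004-05-05", "2003-12-01", "2004-01-20"]
print(sorted(builds))  # ['2003-12-01', '2004-01-20', '2004-05-05']

# The same dates in a DD-MM-YYYY layout do not sort chronologically:
ambiguous = ["05-05-2004", "01-12-2003", "20-01-2004"]
print(sorted(ambiguous))  # ['01-12-2003', '05-05-2004', '20-01-2004'] - 20 Jan 2004 sorts last
```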
Microsoft Office build numbers are an encoded date: the first two digits indicate the number of months that have passed from the January of the year in which the project started (with each major Office release being a different project), while the last two digits indicate the day of that month. So 3419 is the 19th day of the 34th month after the month of January of the year the project started.
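That decoding can be sketched as follows; the project start year here is hypothetical, chosen only to make the arithmetic concrete:

```python
# Sketch: decoding an Office-style build number (first two digits = months
# elapsed since January of the project's start year, last two = day of month).
def decode_build(build, project_start_year):
    months, day = divmod(build, 100)           # 3419 -> 34 months, day 19
    year = project_start_year + months // 12
    month = months % 12 + 1                    # 0 months elapsed -> January
    return year, month, day

# With a hypothetical start year of 2000, build 3419 decodes to 19 November 2002:
print(decode_build(3419, 2000))  # (2002, 11, 19)
```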
Other examples that identify versions by year include Adobe Illustrator 88 and WordPerfect Office 2003. When a year is used to denote version, it is generally for marketing purposes, and an actual version number also exists. For example, Microsoft Windows 95 is internally versioned as MS-DOS 7.00 and Windows 4.00; likewise, Microsoft Windows 2000 Server is internally versioned as Windows NT 5.0 ("NT" being a reference to the original product name).
Python
The Python Software Foundation has published PEP 440 – Version Identification and Dependency Specification, outlining their own flexible scheme, that defines an epoch segment, a release segment, pre-release and post-release segments and a development release segment.
TeX
TeX has an idiosyncratic version numbering system. Since version 3, updates have been indicated by adding an extra digit at the end, so that the version number asymptotically approaches π; this is a form of unary numbering, in which the version number is the number of digits. The current version is 3.14159265. This is a reflection of TeX being very stable, and only minor updates are anticipated. TeX developer Donald Knuth has stated that the "absolutely final change (to be made after [his] death)" will be to change the version number to π, at which point all remaining bugs will become permanent features.
In a similar way, the version number of Metafont asymptotically approaches e (Euler's number).
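The TeX scheme can be sketched as successive prefixes of the digits of π:

```python
# Sketch: each TeX update appends one more digit of pi to the version number.
PI_DIGITS = "3.14159265358979"

def tex_version(updates):
    """Version string after the given number of updates since version 3."""
    return PI_DIGITS[: 2 + updates]

print(tex_version(1))  # '3.1'
print(tex_version(8))  # '3.14159265' - the current version mentioned above
```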
Apple
During the era of the classic Mac OS, minor version numbers rarely went beyond ".1". When they did, they usually jumped straight to ".5", suggesting the release was "more significant". Thus, "8.5" was marketed as its own release, representing "Mac OS 8 and a half", and 8.6 effectively meant "8.5.1".
Mac OS X departed from this trend, in large part because "X" (the Roman numeral for 10) was in the name of the product. As a result, all versions of OS X began with the number 10. The first major release of OS X was given the version number 10.0, but the next major release was not 11.0. Instead, it was numbered 10.1, followed by 10.2, 10.3, and so on for each subsequent major release. Thus the 11th major version of OS X was labeled "10.10". Even though the "X" was dropped from the name as of macOS 10.12, this numbering scheme continued through macOS 10.15. Under the "X"-based versioning scheme, the third number (instead of the second) denoted a minor release, and additional updates below this level, as well as updates to a given major version of OS X coming after the release of a new major version, were titled Supplemental Updates.
The Roman numeral X was concurrently leveraged for marketing purposes across multiple product lines. Both QuickTime and Final Cut Pro jumped from version 7 directly to version 10, QuickTime X and Final Cut Pro X. Like Mac OS X itself, the products were not upgrades to previous versions, but brand-new programs. As with OS X, major releases for these programs incremented the second digit and minor releases were denoted using a third digit. The "X" was dropped from Final Cut's name with the release of macOS 11.0 (see below), and QuickTime's branding became moot when the framework was deprecated in favor of AVFoundation in 2011 (the program for playing QuickTime video was only named QuickTime Player from the start).
Apple's next macOS release, provisionally numbered 10.16, was officially announced as macOS 11.0 at WWDC in June 2020. The following macOS version, macOS Monterey was released in WWDC 2021 and bumped its major version number to 12.
Microsoft Windows
The Microsoft Windows operating system was first labelled with standard version numbers for Windows 1.0 through Windows 3.11. After this, Microsoft excluded the version number from the product name. For Windows 95 (version 4.0), Windows 98 (4.10) and Windows 2000 (5.0), the year of release was included in the product title. After Windows 2000, Microsoft created the Windows Server family, which continued the year-based style with a difference: for minor releases, Microsoft suffixed "R2" to the title, e.g., Windows Server 2008 R2 (version 6.1). This style has remained consistent to date. The client versions of Windows, however, did not adopt a consistent style. First, they received names with arbitrary alphanumeric suffixes, as with Windows ME (4.90), Windows XP (5.1), and Windows Vista (6.0). Then, Microsoft once again adopted incremental numbers in the title, but this time they were not version numbers; the version numbers of Windows 7, Windows 8 and Windows 8.1 are respectively 6.1, 6.2 and 6.3. In Windows 10, the version number leaped to 10.0, and subsequent updates to the OS only incremented the build number and update build revision (UBR) number.
The successor of Windows 10, Windows 11, was released on October 5, 2021. Despite being named "11", the new Windows release did not bump its major version number to 11; instead, it stayed at the same version number of 10.0 used by Windows 10.
Other schemes
Some software producers use different schemes to denote releases of their software. The Debian project uses a major/minor versioning scheme for releases of its operating system but uses code names from the movie Toy Story during development to refer to stable, unstable, and testing releases.
BLAG Linux and GNU features very large version numbers: major releases have numbers such as 50000 and 60000, while minor releases increase the number by 1 (e.g. 50001, 50002). Alpha and beta releases are given decimal version numbers slightly less than the major release number, such as 19999.00071 for alpha 1 of version 20000, and 29999.50000 for beta 2 of version 30000. Starting at 9001 in 2003, the most recent version is 140000.
Urbit uses Kelvin versioning (named after the absolute Kelvin temperature scale): software versions start at a high number and count down to version 0, at which point the software is considered finished and no further modifications are made.
Internal version numbers
Software may have an "internal" version number which differs from the version number shown in the product name (and which typically follows version numbering rules more consistently). Java SE 5.0, for example, has the internal version number of 1.5.0, and versions of Windows from NT 4 on have continued the standard numerical versions internally: Windows 2000 is NT 5.0, XP is Windows NT 5.1, Windows Server 2003 and Windows XP Professional x64 Edition are NT 5.2, Windows Server 2008 and Vista are NT 6.0, Windows Server 2008 R2 and Windows 7 are NT 6.1, Windows Server 2012 and Windows 8 are NT 6.2, and Windows Server 2012 R2 and Windows 8.1 are NT 6.3; however, the first version of Windows 10 was 10.0 (10.0.10240). Note that Windows NT is only on its fifth major revision, as its first release was numbered 3.1 (to match the then-current Windows release number) and the Windows 10 launch made a version leap from 6.3 to 10.0.
Pre-release versions
In conjunction with the various versioning schemes listed above, a system for denoting pre-release versions is generally used, as the program makes its way through the stages of the software release life cycle.
Programs that are in an early stage are often called "alpha" software, after the first letter in the Greek alphabet. After they mature but are not yet ready for release, they may be called "beta" software, after the second letter in the Greek alphabet. Generally alpha software is tested by developers only, while beta software is distributed for community testing.
Some systems use numerical versions less than 1 (such as 0.9), to suggest their approach toward a final "1.0" release. This is a common convention in open source software. However, if the pre-release version is for an existing software package (e.g. version 2.5), then an "a" or "alpha" may be appended to the version number. So the alpha version of the 2.5 release might be identified as 2.5a or 2.5.a.
An alternative is to refer to pre-release versions as "release candidates", so that software packages which are soon to be released as a particular version may carry that version tag followed by "rc-#", indicating the number of the release candidate; when the final version is released, the "rc" tag is removed.
Release train
A software release train is a form of software release schedule in which a number of distinct series of versioned software releases for multiple products are released as a number of different "trains" on a regular schedule. Generally, for each product line, a number of different release trains are running at a given time, with each train moving from initial release to eventual maturity and retirement on a planned schedule. Users may experiment with a newer release train before adopting it for production, allowing them to experiment with newer, "raw", releases early, while continuing to follow the previous train's point releases for their production systems prior to moving to the new release train as it becomes mature.
Cisco's IOS software platform used a release train schedule with many distinct trains for many years. More recently, a number of other platforms including Firefox and Fenix for Android, Eclipse, LibreOffice, Ubuntu, Fedora, Python, digiKam and VMware have adopted the release train model.
Modifications to the numeric system
Odd-numbered versions for development releases
Between the 1.0 and the 2.6.x series, the Linux kernel used odd minor version numbers to denote development releases and even minor version numbers to denote stable releases. For example, Linux 2.3 was a development family of the second major design of the Linux kernel, and Linux 2.4 was the stable release family that Linux 2.3 matured into. After the minor version number in the Linux kernel is the release number, in ascending order; for example, Linux 2.4.0 → Linux 2.4.22. Since the 2004 release of the 2.6 kernel, Linux no longer uses this system, and has a much shorter release cycle.
The same odd-even system is used by some other software with long release cycles, such as Node.js up to version 0.12 as well as GNOME and WineHQ.
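The odd/even rule can be sketched as:

```python
# Sketch of the odd/even convention used by 1.0-2.6 era Linux kernels.
def is_development(version):
    minor = int(version.split(".")[1])
    return minor % 2 == 1      # odd minor number -> development series

print(is_development("2.3.0"))   # True  (development)
print(is_development("2.4.22"))  # False (stable)
```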
Dropping the most significant element
Sun's Java has at times had a hybrid system, where the internal version number has always been 1.x but has been marketed by reference only to the x:
JDK 1.0.3
JDK 1.1.2 through 1.1.8
J2SE 1.2.0 ("Java 2") through 1.4.2
Java 1.5.0, 1.6.0, 1.7.0, 1.8.0 ("Java 5, 6, 7, 8")
Sun also dropped the first digit for Solaris, where Solaris 2.8 (or 2.9) is referred to as Solaris 8 (or 9) in marketing materials.
A similar jump took place with the Asterisk open-source PBX construction kit in the early 2010s, whose project leads announced that the current version 1.8.x would soon be followed by version 10.
This approach, panned by many because it breaks the semantic significance of the sections of the version number, has been adopted by an increasing number of vendors including Mozilla (for Firefox).
Version number ordering systems
Version numbers very quickly evolve from simple integers (1, 2, ...) to rational numbers (2.08, 2.09, 2.10)
and then to non-numeric "numbers" such as 4:3.4.3-2. These complex version numbers are therefore better treated as character strings. Operating systems that include package management facilities (such as all non-trivial Linux or BSD distributions) will use a distribution-specific algorithm for comparing version numbers of different software packages. For example, the ordering algorithms of Red Hat and derived distributions differ from those of the Debian-like distributions.
As an example of surprising version number ordering implementation behavior, in Debian, leading zeroes are ignored in chunks, so that 5.0005 and 5.5 are considered as equal, and 5.5<5.0006. This can confuse users; string-matching tools may fail to find a given version number; and this can cause subtle bugs in package management if the programmers use string-indexed data structures such as version-number indexed hash tables.
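The leading-zero behaviour can be sketched by comparing chunks as integers; this is a simplification of the real dpkg algorithm, which also handles letters, tildes and epochs:

```python
# Sketch: chunk-wise numeric comparison ignores leading zeroes.
def numeric_chunks(version):
    return tuple(int(chunk) for chunk in version.split("."))

print(numeric_chunks("5.0005") == numeric_chunks("5.5"))  # True:  0005 == 5
print(numeric_chunks("5.5") < numeric_chunks("5.0006"))   # True:  5 < 6
print("5.0005" == "5.5")                                  # False: as plain strings they differ
```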
To ease sorting, some software packages represent each component of the major.minor.release scheme with a fixed width. Perl represents its version numbers as a floating-point number; for example, Perl's 5.8.7 release can also be represented as 5.008007. This allows a theoretical version of 5.8.10 to be represented as 5.008010. Other software packages pack each segment into a fixed bit width; for example, on Microsoft Windows, version number 6.3.9600.16384 would be represented as hexadecimal 0x0006000325804000. The floating-point scheme breaks down if any segment of the version number exceeds 999; a packed-binary scheme employing 16 bits apiece breaks down after 65535.
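Both fixed-width representations, and the point where the floating-point one breaks down, can be sketched as follows:

```python
# Sketch: two fixed-width encodings of multi-part version numbers.
def as_float_key(major, minor, release):
    """Perl-style: three decimal digits for each segment after the major number."""
    return float(f"{major}.{minor:03d}{release:03d}")

def as_packed16(major, minor, build, revision):
    """Windows-style: four 16-bit fields packed into one 64-bit integer."""
    return (major << 48) | (minor << 32) | (build << 16) | revision

print(as_float_key(5, 8, 7))                # 5.008007
print(hex(as_packed16(6, 3, 9600, 16384)))  # 0x6000325804000

# The decimal scheme collides once a segment exceeds 999:
print(as_float_key(5, 8, 1000) == as_float_key(5, 8, 100))  # True - ambiguous
```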
Political and cultural significance of version numbers
Version 1.0 as a milestone
The free-software and open source communities tend to release software early and often. Initial versions are numbers less than 1, with these 0.x versions used to convey that the software is incomplete and not reliable enough for general release or usable in its current state. Backward-incompatible changes are common with 0.x versions.
Version 1.0 is used as a major milestone, indicating that the software has at least all major features plus functions the developers wanted to get into that version, and is considered reliable enough for general release. A good example of this is the Linux kernel, which was first released as version 0.01 in 1991, and took until 1994 to reach version 1.0.0.
The developers of the arcade game emulator MAME do not ever intend to release a version 1.0 of the program because there will always be more arcade games to emulate and thus the project can never be truly completed. Accordingly, version 0.99 was followed by version 0.100.
Since the internet has become widespread, most commercial software vendors no longer follow the maxim that a major version should be "complete", instead relying on post-release patches with bug fixes to sort out known issues.
Version numbers as marketing
A relatively common practice is to make major jumps in version numbers for marketing reasons. Sometimes software vendors skip the 1.0 release entirely, or quickly follow it with a release bearing a higher version number, because many customers consider 1.0 software too immature to trust for production deployments. For example, as in the case of dBase II, a product may be launched with a version number that implies it is more mature than it is.
Other times version numbers are increased to match those of competitors. This can be seen in many examples of product version numbering by Microsoft, America Online, Sun Solaris, Java Virtual Machine, SCO Unix, WordPerfect. Microsoft Access jumped from version 2.0 to version 7.0, to match the version number of Microsoft Word.
Microsoft has also been the target of 'catch-up' versioning, with the Netscape browsers skipping version 5 and jumping to 6, in line with Microsoft's Internet Explorer, but also because the Mozilla application suite inherited version 5 in its user agent string during pre-1.0 development and Netscape 6.x was built upon Mozilla's code base.
Another example of keeping up with competitors is when Slackware Linux jumped from version 4 to version 7 in 1999.
Superstition
The Office 2007 release of Microsoft Office had an internal version number of 12. The next version, Office 2010, had an internal version of 14, due to superstitions surrounding the number 13. Visual Studio 2013 is version 12.0 of the product, and the next version, Visual Studio 2015, has the version number 14.0 for the same reason.
Roxio Toast went from version 12 to version 14, likely in an effort to skip the superstitions surrounding the number 13.
Corel's WordPerfect Office, version 13, is marketed as "X3" (Roman numeral 10 and "3"). The practice has continued into the next version, X4. The same has happened with Corel's Graphic Suite (i.e. CorelDRAW, Corel Photo-Paint) as well as its video editing software "Video Studio".
Sybase skipped major versions 13 and 14 in its Adaptive Server Enterprise relational database product, moving from 12.5 to 15.0.
ABBYY Lingvo Dictionary uses numbering 12, x3 (14), x5 (15).
SUSE Linux Enterprise skipped version 13 and 14 after version 12 and directly released SLES 15 in July 2018.
Geek culture
The SUSE Linux distribution started at version 4.2, to reference 42, "the answer to the ultimate question of life, the universe and everything" mentioned in Douglas Adams' The Hitchhiker's Guide to the Galaxy.
A Slackware Linux distribution was versioned 13.37, referencing leet.
Finnix skipped from version 93.0 to 100, partly to fulfill the assertion, "There Will Be No Finnix '95", a reference to Windows 95.
The Tagged Image File Format specification has used 42 as internal version number since its inception, its designers not expecting to alter it anymore during their (or its) lifetime since it would conflict with its development directives.
Overcoming perceived marketing difficulties
In the mid-1990s, the rapidly growing CMMS, Maximo, moved from Maximo Series 3 directly to Series 5, skipping Series 4 due to that number's perceived marketing difficulties in the Chinese market, where the number 4 is associated with "death" (see tetraphobia). This did not, however, stop Maximo Series 5 version 4.0 being released. (The "Series" versioning has since been dropped, effectively resetting version numbers after Series 5 version 1.0's release.)
Significance
In software engineering
Version numbers are used in practical terms by the consumer, or client, to identify or compare their copy of the software product against another copy, such as the newest version released by the developer. For the programmer or company, versioning is often used on a revision-by-revision basis, where individual parts of the software are compared and contrasted with newer or older revisions of those same parts, often in a collaborative version control system.
In the 21st century, more programmers started to use a formalized version policy, such as the semantic versioning policy. The purpose of such policies is to make it easier for other programmers to know when code changes are likely to break things they have written. Such policies are especially important for software libraries and frameworks, but may also be very useful to follow for command-line applications (which may be called from other applications) and indeed any other applications (which may be scripted and/or extended by third parties).
Versioning is also a required practice to enable many schemes of patching and upgrading software, especially to automatically decide what and where to upgrade to.
In technical support
Version numbers allow people providing support to ascertain exactly which code a user is running, so that they can rule out bugs that have already been fixed as a cause of an issue, and the like. This is especially important when a program has a substantial user community, especially when that community is large enough that the people providing technical support are not the people who wrote the code. The semantic meaning of version.revision.change style numbering is also important to information technology staff, who often use it to determine how much attention and research they need to pay to a new release before deploying it in their facility. As a rule of thumb, the bigger the changes, the larger the chances that something might break (although examining the Changelog, if any, may reveal only superficial or irrelevant changes). This is one reason for some of the distaste expressed in the "drop the major release" approach taken by Asterisk et alia: now, staff must (or at least should) do a full regression test for every update.
Use outside of software
Some computer file systems, such as the OpenVMS Filesystem, also keep versions for files. Document versioning works much as it does in software engineering: with each small change to the structure, contents, or conditions, the version number is incremented by 1, or by a smaller or larger value, depending on the author's preference and the size or importance of the changes made.
Software-style version numbers can be found in other media.
In some cases, the use is a direct analogy (for example: Jackass 2.5, a version of Jackass Number Two with additional special features; the second album by Garbage, titled Version 2.0; or Dungeons & Dragons 3.5, where the rules were revised from the third edition, but not so much as to be considered the fourth).
More often, the usage plays on an association with high technology and does not literally indicate a "version" (e.g., Tron 2.0, a video game follow-up to the film Tron, or the television series The IT Crowd, which refers to its second season as Version 2.0). A particularly notable usage is Web 2.0, referring to websites from the early 2000s that emphasized user-generated content, usability and interoperability.
See also
Continuous Data Protection
Maintenance release
Product life cycle management
Release management
Release notes
Software engineering
Notes
References
External links
3 Effective Techniques For Software Versioning
Software Release Practice Howto
Software version numbering
Document Foundation release plan for LibreOffice, showing release trains
Version control
Software release |
1555862 | https://en.wikipedia.org/wiki/Computer%20Animation%20Production%20System | Computer Animation Production System | The Computer Animation Production System (CAPS) was a digital ink and paint system used in animated feature films, the first at a major studio, designed to replace the expensive process of transferring animated drawings to cels using India ink or xerographic technology, and painting the reverse sides of the cels with gouache paint. Using CAPS, enclosed areas and lines could be easily colored in the digital computer environment using an unlimited palette. Transparent shading, blended colors, and other sophisticated techniques could be extensively used that were not previously available.
The completed digital cels were composited over scanned background paintings and camera or pan movements were programmed into a computer exposure sheet simulating the actions of old style animation cameras. Additionally, complex multiplane shots giving a sense of depth were possible. Unlike the analog multiplane camera, the CAPS multiplane cameras were not limited by artwork size. Extensive camera movements never before seen were incorporated into the films. The final version of the sequence was composited and recorded onto film. Since the animation elements existed digitally, it was easy to integrate other types of film and video elements, including three-dimensional computer animation.
CAPS was a proprietary collection of software, scanning camera systems, servers, networked computer workstations, and custom desks developed by The Walt Disney Company together with Pixar in the late-1980s. It succeeded in reducing labor costs for ink and paint and post-production processes of traditionally animated feature films produced by Walt Disney Animation Studios. It also provided an entirely new palette of digital tools for the film-makers.
History and evolution of the CAPS project
The Computer Graphics Lab at the New York Institute of Technology (NYIT) developed a "scan and paint" system for cel animation in the late 1970s. It was used to produce a 22-minute computer-animated television show called Measure for Measure. Industry developments with computer systems led Marc Levoy of Cornell University and Hanna-Barbera Productions to develop a video animation system for cartoons in the early 1980s.
The first usage of the CAPS process was Mickey standing on Epcot's Spaceship Earth for "The Magical World of Disney" titles. The system's first feature film test was in the production of The Little Mermaid in 1989 where it was used in a single shot of the rainbow sequence at the end of the film. After Mermaid, films were made completely using CAPS; the first of these, The Rescuers Down Under, was the first 100% digital feature film ever produced. Later films, including Beauty and the Beast, Aladdin, The Lion King, and The Hunchback of Notre Dame took more advantage of CAPS’ 2D and 3D integration.
In the early days of CAPS, Disney did not discuss the system in public, being afraid that the magic would go away if people found out computers were involved. Computer Graphics World magazine, in 1994, was the first to have a look at the process.
Awards
In 1992, the team that developed CAPS won an Academy of Motion Picture Arts and Sciences Scientific and Engineering Award. They were:
Randy Cartwright (Disney)
David B. Coons (Disney)
Lemuel Davis (Disney)
Thomas Hahn (Pixar)
James Houston (Disney)
Mark Kimball (Disney)
Dylan W. Kohler (Disney)
Peter Nye (Pixar)
Michael Shantzis (Pixar)
David F. Wolf (Disney)
Walt Disney Feature Animation Department
Technical abilities
CAPS was capable of a high level of image quality using computer systems significantly slower than those available today. Final frames were rendered at 2K digital film resolution (2048 pixels across at a 1.66 aspect ratio), and the artwork was scanned so that it always held 100% resolution in the final output, no matter how complex the camera motion in the shot. Using the Pixar Image Computer, images were stored at 48 bits per pixel. The compositing system allowed complex multi-layered shots, a capability used almost immediately in The Rescuers Down Under to create a 400-layer opening dolly shot. The DALS system made use of one of the first large-scale, custom RAID systems in the film industry.
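The storage cost implied by those figures can be checked with quick arithmetic. The sketch below derives the frame height and uncompressed frame size purely from the resolution and bit depth stated above; the result is illustrative, not an official CAPS specification:

```python
# Back-of-the-envelope check of the figures in the text (illustrative only,
# derived from the stated 2K width, 1.66 aspect ratio, and 48-bit pixels).
width = 2048
height = round(width / 1.66)               # ~1234 lines of picture
bits_per_pixel = 48                        # i.e., 16 bits per RGB channel
frame_bytes = width * height * bits_per_pixel // 8

print(height)                              # 1234
print(round(frame_bytes / 2**20, 1))       # roughly 14-15 MiB per uncompressed frame
```

At 24 frames per second, an uncompressed feature at this size would run to hundreds of gigabytes, which helps explain the need for the large custom RAID storage mentioned above.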
Decline and legacy
Following the box-office under-performance of films such as Treasure Planet (2002) and Home on the Range (2004), combined with the success of 3D computer-animated films from Pixar Animation Studios and competitor DreamWorks SKG, Disney Feature Animation's management (like DreamWorks Animation's) became convinced that audiences strongly preferred 3D computer-animated features, and in 2004 closed the traditional 2D animation department (it reopened in 2007, when John Lasseter stepped in as the studio's new creative head and called for its revival). The CAPS desks were removed and the custom automated scanning cameras were dismantled and scrapped. As of 2005, only one desk system remained, kept solely for reading the data of the films that had been made with CAPS.
Since the merger with Pixar, and with most of the CAPS system discontinued, Disney's subsequent traditionally animated productions, How to Hook Up Your Home Theater (2007), The Princess and the Frog (2009), The Ballad of Nessie (2011), and Winnie the Pooh (2011), were produced using software from Toon Boom Technologies, which offered a more up-to-date digital animation system.
Projects produced using CAPS
Feature films
The Little Mermaid (1989) (ending scene)
The Rescuers Down Under (1990)
Beauty and the Beast (1991)
Aladdin (1992)
Hocus Pocus (1993)
The Nightmare Before Christmas (1993)
The Lion King (1994)
Pocahontas (1995)
The Hunchback of Notre Dame (1996)
Hercules (1997)
Mulan (1998)
Tarzan (1999)
Fantasia 2000 (1999)
The Emperor's New Groove (2000)
Atlantis: The Lost Empire (2001)
Lilo & Stitch (2002)
Treasure Planet (2002)
Brother Bear (2003)
Home on the Range (2004)
Short films
Off His Rockers (1992)
Trail Mix-Up (1993)
Runaway Brain (1995)
John Henry (2000)
Destino (2003)
Lorenzo (2004)
The Little Matchgirl (2006)
References
Film and video technology
Disney technology
Pixar
Animation techniques |
37513932 | https://en.wikipedia.org/wiki/Datalogics | Datalogics | Datalogics is a computer software company formed in 1967 and based in Chicago. The company licenses software development kits for working with PDF and other document file types. They have previously developed their own typesetting and database publishing software. Since 1996, Datalogics has also acted as a channel for several SDKs from Adobe Systems. These include the Adobe PDF Library, Adobe Experience Reader Extension, Adobe Content Server, Adobe InDesign Server, Adobe PDF Converter, Adobe PDF Print Engine and Adobe Reader Mobile SDK.
History
In 1967 Datalogics was founded as a general programming consulting company, developing one of the first computerized typesetting systems, and building editing workstations and software to drive them. In the 1980s the firm participated in the ISO committee to standardize SGML, the forerunner of XML and HTML, and applied this standard in the release of DL Pager, a high-volume SGML-based batch composition system, along with WriterStation, an SGML text editor. In 1987 the firm participated in the committee to develop the SGML portion of the CALS initiative.
In 1991, DL Composer, a Formatting Output Specification Instance (FOSI)-based batch composition system, was released. Shortly after, Datalogics was acquired by Frame Technology, and in 1995 Frame Technology (and Datalogics with it) was acquired by Adobe. In 1996, Adobe Ventures invested in Datalogics, which was reincorporated under its original name as a privately held, independent entity.
In 1997, FrameLink, a FrameMaker plugin that connects to a Documentum content management repository, was released. In 1998, DL Formatter, a variable data printing application, was introduced. In 1999, Adobe selected Datalogics to distribute the Adobe PDF Library.
In 2004, Datalogics sold its DL Formatter business to Printable Technologies Inc., and in 2010 Adobe selected Datalogics to distribute the Adobe Reader Mobile SDK. Since then, Datalogics has worked with Adobe as a key channel for several of its PDF toolkits, while also developing its own command-line applications for server-side software.
Products
Datalogics licenses and supports toolkits for working with PDF and other document type files. These products include the following:
The Adobe PDF Library, an API for viewing, printing and manipulating PDF files. Built with the same core technology that Adobe uses to build Acrobat, The Adobe PDF Library can merge/split PDFs, extract trapped data, bulk render, add annotations, remove watermarks, convert files into searchable data, create high-volume print jobs and more;
Adobe Content Server, a server product that digitally protects PDF and reflowable EPUB eBooks for mobile devices and Adobe Digital Editions software;
Adobe Reader Mobile SDK, a collection of APIs for viewing EPUB and PDF eBooks on mobile devices;
Datalogics PDF Java Toolkit, formerly known as Adobe PDF Java Toolkit, a Java PDF SDK that provides a broad range of functionality for working with PDF files; it can be embedded in custom applications to automate business workflows around PDF documents and forms;
PDF Checker, a free command-line application for detecting and analyzing common PDF errors. As of 2020, PDF Optimizer is paired with it to streamline and programmatically drive processes based on conditions reported by PDF Checker. Additionally, PDF Optimizer can compress and archive PDFs;
PDF Alchemist provides PDF data extraction capabilities for text with OCR integration, images, complex tables & databases. It also converts PDFs to popular file types such as HTML, XML, JSON and EPUB;
PDF2IMG converts PDFs to image formats including JPG, PNG, BMP and others;
PDF2PRINT is a command line application that can be used to print PDFs at scale;
PDF Forms Flattener is a standalone, command line interface tool. It can flatten XFA and AcroForm documents;
ACS Cloud Service, a cloud storage service that synchronizes content, bookmarks and highlights across multiple devices for Adobe Content Server.
Legacy Software
Datalogics' legacy products are no longer developed or supported. They include the following:
DL Reader, a customizable eReader app for iOS, Android and Windows;
PDF WebAPI, a RESTful web services API for manipulating PDF files;
DL Pager, a batch-pagination SGML-based typesetting engine and database publishing application;
DL Composer, a FOSI-based composition engine
References
External links
Official Datalogics, Inc. website
Software companies based in Illinois
Software companies established in 1967
Technology companies established in 1967
1967 establishments in Illinois
Software companies of the United States |
1072339 | https://en.wikipedia.org/wiki/Routing%20%28disambiguation%29 | Routing (disambiguation) | Routing is the process of path selection in a network, such as a computer network or transportation network.
Routing may also refer to:
Route of administration, the path by which a drug, fluid, poison or other substance is brought into contact with the body
Hollowing out an area of wood or plastic using a router (woodworking)
National Routeing Guide, a guide to trains over the United Kingdom's rail network
Routing (hydrology), a technique used to predict the changes in shape of a hydrograph
ABA routing transit number, a bank code used in the United States
Routing number (Canada)
In electronics and computer technologies :
Routing (electronic design automation), a step in the design of printed circuit boards and integrated circuits
The packet forwarding algorithm in a computer network
The role of a router hardware in a computer network
See also
Forwarding (disambiguation)
Route (disambiguation)
Router (disambiguation)
Rout
Vehicle routing problem |
317897 | https://en.wikipedia.org/wiki/Aberystwyth%20University | Aberystwyth University | Aberystwyth University () is a public research university in Aberystwyth, Wales. Aberystwyth was a founding member institution of the former federal University of Wales. The university has over 8,000 students studying across 3 academic faculties and 17 departments.
Founded in 1872 as University College Wales, Aberystwyth, it became a founder member of the University of Wales in 1894, and changed its name to the University College of Wales, Aberystwyth. In the mid-1990s, the university again changed its name to become the University of Wales, Aberystwyth. On 1 September 2007, the University of Wales ceased to be a federal university and Aberystwyth University became independent again.
In 2019, it became the first university to be named "University of the year for teaching quality" by The Times/Sunday Times Good University Guide for two consecutive years. It is the first university in the world to be awarded Plastic Free University status (for single-use plastic items).
History
In the middle of the 19th century, eminent Welsh people were advocating the establishment of a university in the Principality. One of these was Thomas Nicholas, whose book Middle and High Class Schools, and University Education for Wales (1863) is said to have "exerted great influence on educated Welshmen".
Funded through public and private subscriptions, and with five regional committees (London, Manchester, Liverpool, North and South Wales) guaranteeing funds for the first three years' running costs, the university opened in October 1872 with 26 students. Thomas Charles Edwards was the Principal. In October 1875, chapels in Wales raised the next tranche of funds from over 70,000 contributors. Until 1893, when the college joined the University of Wales as a founder member, students applying to Aberystwyth sat the University of London's entrance exams. Women were admitted in 1884.
In 1885, a fire damaged what is now known as the Old College, Aberystwyth, and in 1897 the first 14 acres of what would become the main Penglais campus were purchased. Incorporated by Royal Charter in 1893, the university installed the Prince of Wales as Chancellor in 1896, the same year it awarded an honorary degree to the British Prime Minister William Gladstone.
The university's coat of arms dates from the 1880s. The shield features two red dragons to symbolise Wales, and an open book to symbolise learning. The crest, an eagle or phoenix above a flaming tower, may signify the college's rebirth after the 1885 fire. The motto is Nid Byd, Byd Heb Wybodaeth (a world without knowledge is no world at all).
In the early 1900s the university added courses that included Law, Applied Mathematics, Pure Mathematics, and Botany. The Department for International Politics, which Aberystwyth says is the oldest such department in the world, was founded in 1919. By 1977, the university's staff included eight Fellows of the Royal Society, such as Gwendolen Rees, the first Welsh woman to be elected an FRS.
The Department of Sports and Exercise Science was established in 2000. Joint honours Psychology degrees were introduced in September 2007, and single honours Psychology in 2009.
The chancellor of the university is The Lord Thomas of Cwmgiedd, who took up the position in January 2018. The visitor of the university is an appointment made by the Privy Council, under the Royal Charter of the university. Since July 2014, the holder of this office is Mr Justice Sir Roderick Evans QC.
In 2011, the university appointed a new vice chancellor under whom the academic departments were restructured as larger subject-themed institutes.
Organisation and administration
Departments and Faculties
The university's academic departments, as well as the Arts Centre, International English Centre, and Music Centre are organised in three faculties:
Faculty of Arts and Social Sciences
School of Art
Arts Centre
School of Education
Department of English and Creative Writing
Department of History and Welsh History
International English Centre
Department of International Politics
Department of Law and Criminology
Department of Modern Languages
Music Centre
Department of Theatre, Film and Television Studies
Department of Welsh and Celtic Studies
Faculty of Business and Physical Sciences
Aberystwyth Business School
Department of Computer Science
Department of Information Studies
Department of Mathematics
Department of Physics
Faculty of Earth and Life Sciences
Institute of Biological, Environmental and Rural Sciences
Department of Geography and Earth Sciences
Department of Psychology
Institute of Biological, Environmental and Rural Sciences
The Institute of Biological, Environmental and Rural Sciences (IBERS) is a research and teaching centre which brings together staff from the Institutes of Rural Sciences and Biological Sciences and the Institute of Grassland and Environmental Research (IGER). Around 360 research, teaching and support staff conduct basic, strategic and applied research in biology.
The institute is located in two areas; one at the main teaching Penglais campus and another rural research hub at the Gogerddan campus.
Aberystwyth Business School
In 1998, the Department of Economics (founded in 1912), the Department of Accounting and Finance (founded in 1979) and the Centre for Business Studies merged to create the School of Management and Business. In 2013, the School joined the Department of Information Studies and the Department of Law and Criminology at a new campus at Llanbadarn Fawr. The School was shortlisted for "Business School of the Year" in the Times Higher Education Awards (2014). In 2016 the institute, minus the Department of Information Studies, was renamed the Institute of Business and Law, the remaining departments being renamed Aberystwyth Business School and Aberystwyth Law School.
Department of Computer Science
The Department of Computer Science (founded in 1970), conducts research in automated reasoning, computational biology, vision graphics and visualisation, and intelligent robotics.
AberMUD, the first popular internet-based MUD, was written in the department by then-student Alan Cox. Jan Pinkava, another graduate, won an Oscar for his short animated film Geri's Game. Students in the department were also involved in the creation of Hugh, an award-winning robot librarian, and Kar-go, an autonomous delivery vehicle.
Department of Geography and Earth Sciences
The Department of Geography and Earth Sciences (IGES) was formed, in 1989, from the former Departments of Geography (established in 1918) and Geology. It houses the E. G. Bowen map library, containing 80,000 maps and 500 atlases.
Department of Information Studies
The College of Librarianship Wales (CLW) was established at Llanbadarn Fawr in 1964, in response to a recommendation for the training of bilingual librarians that was made in the Bourdillon Report on Standards of public library service in England (HMSO, 1962). The college grew rapidly, developing close links to the Welsh speaking and professional communities, acquiring an international reputation and pioneering flexible and distance learning courses. It claimed to be Europe's largest institution for training librarians. The independent college merged with the university (in August 1989) and the department moved to the Penglais campus a quarter of a century later. Following the merger, the new department took over responsibility for existing offerings in archives administration and modern records management.
Department of International Politics
The Department of International Politics was founded, shortly after World War I (in 1919), with the stated purpose of furthering political understanding of the world in the hope of avoiding such conflicts in the future. This goal led to the creation of the Woodrow Wilson Chair of International Politics. The department has over 700 students from 40 countries studying at undergraduate, masters and PhD levels. It achieved a 95% score for student satisfaction in the 2016 National Student Survey, placing it as the highest-ranking politics department in Wales and within the UK's top ten.
The department has hosted various notable academic staff in the field including E. H. Carr, Leopold Kohr, Andrew Linklater, Ken Booth, Steve Smith, Michael Cox, Michael MccGwire, Jenny Edkins and Colin J. McInnes.
Department of Law and Criminology
The Department of Law and Criminology (founded in 1901) is housed in the Hugh Owen Building on the Penglais campus, and includes the Centre for Welsh Legal Affairs, a specialist research centre. All academic staff are engaged in research, and the International Journal of Biosciences and the Law and the Cambrian Law Review are edited in the department. In 2013, the department joined the Department of Information Studies and the School of Management and Business at a new campus at Llanbadarn Fawr, as part of a newly created Institute of Management, Law and Information Studies. As of September 2018, the department has since relocated back to the Hugh Owen Building, based in the Penglais campus, and its name changed from Aberystwyth Law School to the Department of Law and Criminology.
The Guardian University Guide 2018 ranks the department 69th in the UK, and the Times Higher Education guide ranks it 300th globally.
Department of Modern Languages
Aberystwyth has taught modern languages since 1874. French, German, Italian and Spanish courses are taught at both beginners' and advanced levels, in a research-active academic environment. One of its research projects is the Anglo-Norman Dictionary, based in Aberystwyth since 2001 and available online since 2005.
Department of Physics
Physics was first taught at Aberystwyth as part of Natural Philosophy, Astronomy, and Mathematics under N. R. Grimley, soon after the foundation of the University College. It became a department in 1877, under the leadership of F. W. Rudler. The department was located in the south wing of what is now the Old College, but later relocated to the Physics Building on the Penglais Campus. The first chair in Physics was offered to D. E. Jones in 1885. Prior to World War I, much of the early research in the department was undertaken in Germany. Early research in the 1900s was concerned with electrical conductivity and quantum theory, later moving into thermal conductivity and acoustics. In 1931, the department hosted the Faraday Centenary Exhibition. E. J. Williams was appointed Chair of Physics in 1938 where he continued his research into sub-atomic particles using a cloud chamber. Following World War II, research was concerned with mechanical and nuclear physics, later moving into the fields of air density, experimental rocket launching equipment, and radar.
Department of Psychology
In 2007, Aberystwyth established the subject as a "Centre for Applied Psychology" within the Department of International Politics. By 2011, Psychology had moved into their current premises in Penbryn 5 on the Penglais Campus. The department is home to over 300 undergraduate students – with degrees accredited by the British Psychological Society.
Campuses
Penglais
The main campus of the university is situated on Penglais Hill, overlooking the town of Aberystwyth and Cardigan Bay, and comprises most of the university buildings, Arts Centre, Students' Union, and many of the student residences. Just below Penglais Campus is the National Library of Wales, one of Britain's five legal deposit libraries. The landscaping of the Penglais Campus is historically significant and is listed. The CADW listing states,
Llanbadarn
The Llanbadarn Centre is located approximately one mile to the east of the Penglais Campus, near Llanbadarn Fawr, overlooking the town and Cardigan Bay to the west, with the backdrop of the Cambrian Mountains to the east. Llanbadarn Centre hosted Aberystwyth Law School and Aberystwyth Business School, which together formed the Institute of Business and Law. The Department of Information Studies is also based there. Additionally, the Llanbadarn Campus is the site of the Aberystwyth branch of Coleg Ceredigion (a further education college, and not part of the university).
Gogerddan
Gogerddan, on the outskirts of the town, is the site of the university's major centre for research in land-based sciences and the main centre of the Institute of Biological, Environmental and Rural Sciences.
School of Art, Edward Davies Building
The School of Art is located between the Penglais Campus and the centre of Aberystwyth, in what was originally the Edward Davies Chemical Laboratory. A listed building, the Edward Davies Building is one of the finest examples of architecture in Aberystwyth.
Old College
The site of the original university is the Old College, currently the subject of the "New Life for Old College" project which aims to transform it into an integrated centre of heritage, culture, learning and knowledge exchange. The university opened an international campus in Mauritius in 2016 operating as Aberystwyth University (Mauritian Branch Campus) and registered with the Tertiary Education Commission of Mauritius, but closed it to new enrolments two years later due to low enrolment numbers.
Student residences
Most of the student residences are on campus, with the rest in walking distance of the campus and Aberystwyth town centre. Accommodation ranges from "traditional" catered residences to en-suite self-catered accommodation, and from budget rooms to more luxurious studio apartments. All have wired access to the university's computer network and a support network of residential tutors.
Penglais Campus
Cwrt Mawr (self-catered flats, single rooms, capacity 503)
(Welsh speaking traditional catered hall, refurbished in 2020, capacity 200)
Penbryn (Welsh-speaking traditional catered hall, capacity 350)
Rosser (self-catered en-suite flats, capacity 336)
Rosser G (postgraduate flats following 2011 expansion to Rosser, capacity 60)
Trefloyne (self-catered flats, capacity 147)
Pentre Jane Morgan (Student Village)
Almost 200 individual houses arranged in closes and cul-de-sacs. Each house typically accommodates 5 or 6 students. (total capacity 1003)
Fferm Penglais Student Residence
Purpose-built student accommodation with studio apartments and en-suite bedrooms (total capacity 1000). An area of accommodation within the Fferm Penglais Student Residence is set aside for students who are Welsh learners or fluent Welsh speakers, and wish to live in a Welsh speaking environment.
Town accommodation
Seafront Residences (self-catered flats located on the seafront and Queen's Road, overall capacity 361). The original Seafront residences, Plyn' and Caerleon, were destroyed by fire in 1998.
Seafront residences include Aberglasney, Balmoral, Blaenwern, Caerleon, Carpenter, Pumlumon, Ty Glyndwr, and Ty Gwerin Halls.
The university also owns several houses, such as Penglais Farmhouse (Adjacent to Pentre Jane Morgan) and flats in Waun Fawr, which are let on an Assured Shorthold Tenure to students with families. Disabled access rooms are available within the existing student village.
Reputation and academic profile
Aberystwyth University is placed in the UK's top 50 universities in the main national rankings.
It is ranked 48th out of 132 UK universities in The Times/Sunday Times Good University Guide for 2019, and was the first university to be named "University of the year for teaching quality" for two consecutive years (2018 and 2019).
The Times Higher Education World University Rankings placed it in the 301–350 band of 800 ranked universities, up from 351–400 the previous year, and the QS World University Rankings placed it 432nd for 2019, up from the 481–490 band the previous year. In 2015, UK employers from "predominantly business, IT and engineering sectors" ranked Aberystwyth joint 49th in their 62-place employability rankings for UK graduates, according to a Times Higher Education report.
Aberystwyth University was rated in the top ten of UK higher education institutions for overall student satisfaction in the 2016 National Student Survey (NSS).
Aberystwyth University was shortlisted in four categories in the Times Higher Education Leadership and Management Awards (THELMAs) (2015).
Aberystwyth University has been awarded the Silver Award under the Corporate Health Standard (CHS), the quality mark for workplace health promotion run by Welsh Government.
The university has been awarded an Athena SWAN Charter Award, recognising commitment to advancing women's careers in science, technology, engineering, maths and medicine (STEMM) in higher education and research.
In 2007, the university came under criticism for its record on sustainability, ranking 97th out of 106 UK higher education institutions in that year's Green League table. In 2012 the university was listed in the table's "Failed, no award" section, ranking equal 132nd out of 145. In 2013 it ranked equal 135th out of 143, and was listed again as "Failed, no award".
Following the university's initiatives to address sustainability, it received an EcoCampus Silver Phase award in October 2014.
In October 2015, the university's Penglais Campus became the first University campus in Wales to achieve the Green Flag Award. The Green Flag Award is a UK-wide partnership, delivered in Wales by Keep Wales Tidy with support from Natural Resources Wales, and is the mark of a high quality park or green space.
In 2013, the University and College Union alleged bullying behaviour by Aberystwyth University managers, and said staff were fearful for their jobs. University president Sir Emyr Jones Parry said in a BBC radio interview, "I don't believe the views set out are representative and I don't recognise the picture." He also said, "Due process is rigorously applied in Aberystwyth." Economist John Cable resigned his emeritus professorship, describing the university's management as "disproportionate, aggressive and confrontational". The singer Peter Karrie resigned his honorary fellowship in protest, he said, at the apparent determination to "ruin one of the finest arts centres in the country", and because he was "unable to support any regime that can treat their staff in such a cruel and appalling manner."
Officers and Academics
Presidents and Chancellors
1872–95 Henry Austin Bruce, 1st Lord Aberdare
1895–1913 Stuart, Lord Rendel
1913–26 Sir John Williams, 1st Bt
1926–44 Edmund Davies, Lord Edmund-Davies
1944–54 Thomas Jones (T. J.)
1955–64 Sir David Hughes Parry
1964–76 Sir Ben Bowen Thomas
1977–85 Cledwyn Hughes, Lord Cledwyn of Penrhos
1985–97 Melvyn Rosser
1997–2007 Elystan Morgan, Lord Elystan-Morgan
2007–17 Sir Emyr Jones Parry
2018–present John, Lord Thomas of Cwmgiedd
Principals and Vice-Chancellors
1872–91 Thomas Charles Edwards
1891–1919 Thomas Francis Roberts
1919–26 John Humphreys Davies
1927–34 Sir Henry Stuart-Jones
1934–52 Ifor Leslie Evans
1953–57 Goronwy Rees
1958–69 Sir Thomas Parry
1969–79 Sir Goronwy Daniel
1979–89 Gareth Owen
1989–94 Kenneth, Lord Morgan
1994–2004 Derec Llwyd Morgan
2004–11 Noel Lloyd
2011–16 April McMahon
2016–17 John Grattan (acting)
2016–present Elizabeth Treasure
Academics
Henry Bird, Lecturer in Art History (1936–41)
Ken Booth, Professor of International Politics
Edward Carr, Historian, Woodrow Wilson Professor of International Politics
Sir Henry Walford Davies, Master of the King's Music
John Davies, Welsh historian
Hannah Dee, Lecturer in Computer Science
R. Geraint Gruffydd, Chair of Welsh Language and Literature (1970–79)
David Russell Hulme, Director of Music (1992–), conductor, musicologist
Robert Maynard Jones, Chair of Welsh Language (1980)
D. Gwenallt Jones, poet, Welsh Lecturer
Leopold Kohr, Economist, Political Scientist
Dennis Lindley, Professor of Statistics (1960–67)
David John de Lloyd, Gregynog Professor of Music, composer
Alec Muffett, Systems Programmer (1988–92)
Lily Newton, Professor of Botany
Ian Parrott, Gregynog Professor of Music (1950–83), composer, musicologist
Joseph Parry, Professor of Music, composer, conductor
Sir Thomas Herbert Parry-Williams, poet, Professor of Welsh (1920–52)
F. Gwendolen Rees FRS, Professor of Zoology
Huw Rees FRS (1923–2009), Geneticist
William Rubinstein, Professor of History
Marie Breen Smyth, Reader in Political Violence, International Politics
Richard Marggraf Turley, Professor of Engagement with the Public Imagination
Dame Marjorie Williamson, Principal, Royal Holloway, London (1962–73)
Richard Henry Yapp, botanist
Alumni
Royalty
Charles, Prince of Wales
Tunku Muhriz Ibni Almarhum Tunku Munawir, 11th Yang Di Pertuan Besar (Grand Ruler) of Negeri Sembilan, Malaysia (2008–present)
Tunku Naquiyuddin, Tunku Laxamana (Regent) of Negeri Sembilan, Malaysia (1994–99)
Ahmad Tejan Kabbah, 3rd President of Sierra Leone (1996–7)
Academia
E. G. Bowen, Geographer
Sir Edward Collingwood, mathematician, scientist
Alan Cox, Programmer (major contributor to the Linux kernel, 1980s)
D. J. Davies, economist, socialist, Plaid Cymru activist
Natasha Devon, writer, mental health activist
Andrew Gordon, naval historian
Sir Deian Hopkin, historian
David Russell Hulme, Director of Music (from 1992), conductor
Rhiannon Ifans, Welsh and Celtic medieval specialist, author
David Gwilym James, Vice-Chancellor, University of Southampton (1952–65)
Emrys Jones, Professor of Geography, London School of Economics
T. Harri Jones, poet
Roy Kift, dramatist, writer
Mary King, political scientist
Michael MccGwire, international relations specialist, Naval Commander
Twm Morys, poet
Tavi Murray, glaciologist, Polar Medallist
Ernest Charles Nelson, botanist
David Hughes Parry, Vice-Chancellor, University of London (1945–48)
T. H. Parry-Williams, poet, author, academic
Frederick Soddy, Nobel Prize Winner in Chemistry (1921)
Vaughan Southgate OBE DL PPFLS FRSM FRSB FZS (born 1944), parasitologist
Sir John Meurig Thomas FRS, chemist, professor, author
Paul Thomas, founding Vice-Chancellor, University of the Sunshine Coast
Sir Nigel Thrift, Geographer, Vice Chancellor, University of Warwick
David John Williams, writer
Sir Glanmor Williams, historian
John Tudno Williams, theologian
Waldo Williams, poet
William Richard Williams, theologian
Christine James, first female Archdruid of Wales
Law
Salleh Abas, Lord President of the Federal Court, Malaysia (1984–88)
Belinda Ang, Judge, Supreme Court of Singapore (2003–)
Sir Alun Talfan Davies, judge, publisher
Sir Ellis Ellis-Griffith, 1st Bt, barrister, Liberal politician
Iris de Freitas Brazao, first female prosecuting lawyer in the Caribbean
Sir Samuel Thomas Evans, barrister, judge, Liberal politician
Elwyn, Lord Elwyn-Jones, Lord Chancellor (1974–79)
John, Lord Morris of Aberavon, Attorney General (1997–99)
Civil servants
Timothy Brain, Chief Constable for Gloucestershire (2001–10)
Sir Goronwy Daniel, civil servant, academic
Politics
Joe Borg, European Union Oceans and Fisheries Commissioner (2004–10)
Roderic Bowen, Liberal MP, Deputy Commons Speaker
Nicholas, Lord Bourne of Aberystwyth, Welsh Conservative Leader (1999–2011)
Stephen Clackson, Independent councillor on Orkney Islands Council
Rehman Chishti, Conservative MP (2010–), Special Envoy (2019–20)
David Davies, 1st Baron Davies, Liberal politician, philanthropist
Glyn Davies, Conservative MP
Gwilym Prys Davies, Lord Prys-Davies, Labour peer (1982–2015)
Gwynfor Evans, first Plaid Cymru MP
Steve Gilbert, Liberal Democrat MP (2010–15)
Siân Gwenllian, Plaid Cymru AM
Neil Hamilton, Conservative MP and AM, barrister
Sylvia Hermon, Ulster Unionist politician
Emlyn Hooson, Baron Hooson, Liberal politician
Cledwyn Hughes, Baron Cledwyn of Penrhos, Labour politician
Hishammuddin Hussein, Defence Minister, Malaysia (2021–)
Dan Jarvis, Labour MP
Bethan Jenkins, Plaid Cymru AM for South Wales West
Carwyn Jones, First Minister of Wales (2009–18), AM for Bridgend
Gerry MacLochlainn, Sinn Féin politician
John Morris, Baron Morris of Aberavon, Labour politician
Elystan Morgan, Baron Elystan-Morgan, Labour MP
Roland Moyle, Labour MP, Parliamentary Private Secretary to Clement Attlee
Will Quince, Conservative MP
Dan Rogerson, Liberal Democrat MP
Liz Saville Roberts, Plaid Cymru MP, Plaid Cymru Leader (2017–)
Molly Scott Cato, Green Party MEP
Ahmed Shaheed, Minister for Foreign Affairs, Maldives
Virginijus Sinkevičius, European Union Environment Commissioner (2019–)
Bob Stewart, Conservative MP
Gareth Thomas, Labour MP
Gareth Thomas, Labour MP
Mark Williams, Liberal Democrat MP, Welsh LD Leader (2016–17)
Mike Wood, Conservative MP
Steven Woolfe, UK Independence Party MEP
Business
Lance Batchelor, CEO, Domino's Pizza and Saga
Geoff Drabble, CEO, Ashtead
Belinda Earl, CEO, Debenhams and Jaeger
David Prosser, CEO, Legal & General
Tom Singh, owner and CEO, New Look
Sports
Cath Bishop, professional rower, civil servant
John Dawes, Rugby player, Captain of Wales and British Lions
Carwyn James, Wales and British and Irish Lions Rugby Coach (1949?–51)
Leigh Richmond Roose, International footballer
Berwyn Price, Gold Medal Commonwealth Games (1978)
Angela Tooby, Silver Medal, World Cross-Country Championships (1988)
Arts and entertainment
Dorothy Bonarjee, Indian poet, artist
Neil Brand, writer, composer, silent film accompanist
Seth Clabough, American novelist, academic
Shân Cothi, operatic singer, actress
Jane Green, author
Sarah Hall, writer, poet
David Russell Hulme, conductor, musicologist
Aneirin Hughes, actor
Emrys James, actor
Eveline Annie Jenkins (1893–1976), botanical artist
Alex Jones, presenter, BBC One TV Programme, The One Show (2010–)
Melih Kibar, Turkish composer
Alun Lewis, Second World War writer, poet
Caryl Lewis, novelist
Rick Lloyd, musician (Y Blew, Flying Pickets)
Hayley Long, fiction writer
Sharon Maguire, film director, Bridget Jones's Diary
Matt McCooey, actor
Alan Mehdizadeh, actor, Billy Elliot the Musical
Robert Minhinnick, poet, essayist, novelist, translator
Amy Parry-Williams (1910–1988), singer, writer
Esther Pilkington, performance artist
Jan Pinkava, Oscar-winning animated film director
Rachel Roberts, actress
Lisa Surihani, Malaysian actress
Richard Roberts, theologian, pacifist
James Tear, teacher
Mirain Iwerydd James, Radio Presenter (Radio Cymru 2) (2021–)
Gallery
See also
Aberystwyth University Students' Union
Thomas Parry Library
List of universities in Wales
List of universities in the United Kingdom
Aberystwyth Arts Centre
Further reading
Iwan Morgan (ed.), The College by the Sea (Aberystwyth, 1928)
E.L. Ellis, The University College of Wales, Aberystwyth: 1872–1972, University of Wales Press (2004)
Ben Bowen Thomas, "Aber" 1872–1972 (University of Wales Press, 1972)
J Roger Webster, Old College Aberystwyth: The Evolution of a High Victorian Building (University of Wales Press, 1995)
Emrys Wynn Jones, Fair may your future be: the story of the Aberystwyth Old Students' Association 1892–1992 (Aberystwyth Old Students' Association, 1992)
References
External links
Aberystwyth University – University official website
Aberystwyth Students' Union – Students' Union website
Aberystwyth Old Students' Association – Alumni Association website
Percy Thomas buildings
Aberystwyth
1872 establishments in Wales
Educational institutions established in 1872
Buildings and structures in Aberystwyth
Universities UK
Rio Receiver
The Rio Receiver was a home stereo device for playing MP3 files stored on a computer's hard drive over an Ethernet or HomePNA network. It was later rebranded and sold as the Dell Digital Audio Receiver.
With a design derived from the existing Linux-based Empeg Car, it became popular among the Linux hacking community.
The hardware consisted of a Cirrus Logic 7212 CPU (ARM720T at 74 MHz), 1M×32 (4 MB) of EDO RAM, and either 512k×16 or 256k×16 (1 MB or 0.5 MB) of NOR flash used to boot. Audio output used a Burr-Brown PCM1716 DAC that drove the line outputs, the headphone jack, and a Tripath class-D digital audio amplifier for speakers. Network connections were via either a Cirrus Logic 8900A (10 Mbit/s Ethernet) or a Broadcom HomePNA 10 Mbit/s chipset; if no Ethernet link was seen at boot time, the unit tried HomePNA. The user interface was a 128×64 pixel monochrome LCD with an EL backlight, a rotary control with a push button, several buttons, and an IR remote control.
The unit booted via a Linux 2.2 kernel in flash, which used DHCP and SSDP to discover an NFS server from which it loaded a new kernel. The second kernel then mounted a root filesystem over NFS containing a small set of standard POSIX tools and an application for selecting and playing music over the network; the music was served over HTTP by the Audio Receiver Manager software running on a Windows PC. Although the music player, the Audio Receiver Manager, and the Broadcom HomePNA kernel driver module were proprietary software, the kernel and other tools were open source. The two-step boot process allowed rapid kernel development, since a unit could run a new kernel simply by being power cycled; the use of standard protocols meant a variety of replacement software components could be developed independently.
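The SSDP discovery step can be illustrated with a short sketch. The search target and headers below are generic placeholders, not the Rio's actual (undocumented) values; this only shows the general shape of an SSDP M-SEARCH request such a device would multicast at boot.

```python
# Build a generic SSDP M-SEARCH request of the kind a network appliance
# multicasts at boot to locate a server. The search target ("ST") is a
# placeholder, not the Rio Receiver's real one.

SSDP_GROUP = ("239.255.255.250", 1900)  # standard SSDP multicast group

def build_msearch(search_target: str, mx: int = 2) -> bytes:
    """Assemble an M-SEARCH request as raw bytes with CRLF line endings."""
    lines = [
        "M-SEARCH * HTTP/1.1",
        f"HOST: {SSDP_GROUP[0]}:{SSDP_GROUP[1]}",
        'MAN: "ssdp:discover"',
        f"MX: {mx}",             # seconds a responder may delay its reply
        f"ST: {search_target}",  # the service being searched for
        "",
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

request = build_msearch("ssdp:all")
# The request would then be sent over UDP, e.g.:
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(request, SSDP_GROUP)
```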
External links
RRR Project - Replacement Client Application by Reza Naima
RioPlay - Open source project to replace the client and server side software
SlimRio - Open source client software to interoperate with SlimServer.
Jreceiver - Open source host software to interoperate with various client modules for the rio receiver.
MediaNet - Replacement client and server side software with FLAC, OGG and shoutcast support.
YARRS - Yet Another Rio Receiver Server. Unix-based, free-software replacement server.
Linux-based devices
File format
A file format is a standard way that information is encoded for storage in a computer file. It specifies how bits are used to encode information in a digital storage medium. File formats may be either proprietary or free and may be either unpublished or open.
Some file formats are designed for very particular types of data: PNG files, for example, store bitmapped images using lossless data compression. Other file formats, however, are designed for storage of several different types of data: the Ogg format can act as a container for different types of multimedia including any combination of audio and video, with or without text (such as subtitles), and metadata. A text file can contain any stream of characters, including possible control characters, and is encoded in one of various character encoding schemes. Some file formats, such as HTML, scalable vector graphics, and the source code of computer software are text files with defined syntaxes that allow them to be used for specific purposes.
Specifications
File formats often have a published specification describing the encoding method and enabling testing of a program's intended functionality. Not all formats have freely available specification documents, partly because some developers view their specification documents as trade secrets, and partly because other developers never author a formal specification document at all, letting the format be defined by how existing programs that use it behave.
If the developer of a format doesn't publish free specifications, another developer looking to utilize that kind of file must either reverse engineer the file to find out how to read it or acquire the specification document from the format's developers for a fee and by signing a non-disclosure agreement. The latter approach is possible only when a formal specification document exists. Both strategies require significant time, money, or both; therefore, file formats with publicly available specifications tend to be supported by more programs.
Patents
Patent law, rather than copyright, is more often used to protect a file format. Although patents for file formats are not directly permitted under US law, some formats encode data using patented algorithms. For example, using compression with the GIF file format requires the use of a patented algorithm, and though the patent owner did not initially enforce their patent, they later began collecting royalty fees. This has resulted in a significant decrease in the use of GIFs, and is partly responsible for the development of the alternative PNG format. However, the GIF patent expired in the US in mid-2003, and worldwide in mid-2004.
Identifying file type
Different operating systems have traditionally taken different approaches to determining a particular file's format, with each approach having its own advantages and disadvantages. Most modern operating systems and individual applications need to use all of the following approaches to read "foreign" file formats, if not work with them completely.
Filename extension
One popular method used by many operating systems, including Windows, macOS, CP/M, DOS, VMS and VM/CMS is to determine the format of a file based on the end of its name, more specifically the letters following the final period. This portion of the filename is known as the filename extension. For example, HTML documents are identified by names that end with (or ), and GIF images by . In the original FAT file system, file names were limited to an eight-character identifier and a three-character extension, known as an 8.3 filename. There are a limited number of three-letter extensions, which can cause a given extension to be used by more than one program. Many formats still use three-character extensions even though modern operating systems and application programs no longer have this limitation. Since there is no standard list of extensions, more than one format can use the same extension, which can confuse both the operating system and users.
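In code, extension-based detection amounts to a table lookup on the characters after the final period. The mapping below is a small illustrative sample, not any standard registry:

```python
import os

# Illustrative extension-to-format table; real systems consult a far
# larger registry, and one extension may be claimed by several formats.
EXTENSION_MAP = {
    ".html": "HTML document",
    ".htm": "HTML document",
    ".gif": "GIF image",
    ".txt": "plain text",
}

def guess_format(filename: str) -> str:
    """Guess a file's format from the letters after the final period."""
    _, ext = os.path.splitext(filename)
    return EXTENSION_MAP.get(ext.lower(), "unknown")
```

Renaming a file changes what `guess_format` reports even though the contents are untouched, which is exactly the weakness of this approach.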
One artifact of this approach is that the system can easily be tricked into treating a file as a different format simply by renaming it—an HTML file can, for instance, be easily treated as plain text by renaming it from to . Although this strategy was useful to expert users who could easily understand and manipulate this information, it was often confusing to less technical users, who could accidentally make a file unusable (or "lose" it) by renaming it incorrectly.
This led most versions of Windows and Mac OS to hide the extension when listing files. This prevents the user from accidentally changing the file type, and allows expert users to turn this feature off and display the extensions.
Hiding the extension, however, can create the appearance of two or more identical filenames in the same folder. For example, a company logo may be needed both in format (for publishing) and format (for web sites). With the extensions visible, these would appear as the unique filenames: "" and "". On the other hand, hiding the extensions would make both appear as "", which can lead to confusion.
Hiding extensions can also pose a security risk. For example, a malicious user could create an executable program with an innocent name such as "". The "" would be hidden and an unsuspecting user would see "", which would appear to be a JPEG image, usually unable to harm the machine. However, the operating system would still see the "" extension and run the program, which would then be able to cause harm to the computer. The same is true with files with only one extension: as it is not shown to the user, no information about the file can be deduced without explicitly investigating the file. To further trick users, it is possible to store an icon inside the program, in which case some operating systems' icon assignment for the executable file () would be overridden with an icon commonly used to represent JPEG images, making the program look like an image. Extensions can also be spoofed: some Microsoft Word macro viruses create a Word file in template format and save it with a extension. Since Word generally ignores extensions and looks at the format of the file, these would open as templates, execute, and spread the virus. This represents a practical problem for Windows systems where extension-hiding is turned on by default.
Internal metadata
A second way to identify a file format is to use information regarding the format stored inside the file itself, either information meant for this purpose or binary strings that happen to always be in specific locations in files of some formats. Since the easiest place to locate them is at the beginning, such an area is usually called a file header when it is greater than a few bytes, or a magic number if it is just a few bytes long.
File header
The metadata contained in a file header are usually stored at the start of the file, but might be present in other areas too, often including the end, depending on the file format or the type of data contained. Character-based (text) files usually have character-based headers, whereas binary formats usually have binary headers, although this is not a rule. Text-based file headers usually take up more space, but being human-readable, they can easily be examined by using simple software such as a text editor or a hexadecimal editor.
As well as identifying the file format, file headers may contain metadata about the file and its contents. For example, most image files store information about image format, size, resolution and color space, and optionally authoring information such as who made the image, when and where it was made, what camera model and photographic settings were used (Exif), and so on. Such metadata may be used by software reading or interpreting the file during the loading process and afterwards.
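For instance, an image's dimensions can be read from a PNG file header without decoding the image; the 8-byte signature and the big-endian IHDR width and height fields used here follow the published PNG specification:

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def read_png_size(data: bytes) -> tuple:
    """Return (width, height) from a PNG header.

    A PNG begins with an 8-byte signature, followed by the IHDR chunk:
    a 4-byte length, the 4 bytes "IHDR", then big-endian 4-byte width
    and height fields."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    if data[12:16] != b"IHDR":
        raise ValueError("malformed PNG header")
    width, height = struct.unpack(">II", data[16:24])
    return width, height
```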
File headers may be used by an operating system to quickly gather information about a file without loading it all into memory, but doing so uses more of a computer's resources than reading directly from the directory information. For instance, when a graphic file manager has to display the contents of a folder, it must read the headers of many files before it can display the appropriate icons, but these will be located in different places on the storage medium thus taking longer to access. A folder containing many files with complex metadata such as thumbnail information may require considerable time before it can be displayed.
If a header is binary hard-coded such that the header itself needs complex interpretation in order to be recognized, especially for metadata content protection's sake, there is a risk that the file format can be misinterpreted. It may even have been badly written at the source. This can result in corrupt metadata which, in extremely bad cases, might even render the file unreadable.
A more complex example of file headers are those used for wrapper (or container) file formats.
Magic number
One way to incorporate file type metadata, often associated with Unix and its derivatives, is to just store a "magic number" inside the file itself. Originally, this term was used for a specific set of 2-byte identifiers at the beginnings of files, but since any binary sequence can be regarded as a number, any feature of a file format which uniquely distinguishes it can be used for identification. GIF images, for instance, always begin with the ASCII representation of either or , depending upon the standard to which they adhere. Many file types, especially plain-text files, are harder to spot by this method. HTML files, for example, might begin with the string (which is not case sensitive), or an appropriate document type definition that starts with , or, for XHTML, the XML identifier, which begins with . The files can also begin with HTML comments, random text, or several empty lines, but still be usable HTML.
The magic number approach offers better guarantees that the format will be identified correctly, and can often determine more precise information about the file. Since reasonably reliable "magic number" tests can be fairly complex, and each file must effectively be tested against every possibility in the magic database, this approach is relatively inefficient, especially for displaying large lists of files (in contrast, file name and metadata-based methods need to check only one piece of data, and match it against a sorted index). Also, data must be read from the file itself, increasing latency as opposed to metadata stored in the directory. Where file types don't lend themselves to recognition in this way, the system must fall back to metadata. It is, however, the best way for a program to check if the file it has been told to process is of the correct format: while the file's name or metadata may be altered independently of its content, failing a well-designed magic number test is a pretty sure sign that the file is either corrupt or of the wrong type. On the other hand, a valid magic number does not guarantee that the file is not corrupt or is of a correct type.
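Such a test can be sketched as a linear scan over a signature table; the table below is abridged and illustrative:

```python
# An abridged, illustrative table of well-known leading-byte signatures.
MAGIC_NUMBERS = [
    (b"\x89PNG\r\n\x1a\n", "PNG image"),
    (b"GIF87a", "GIF image"),
    (b"GIF89a", "GIF image"),
    (b"%PDF-", "PDF document"),
    (b"PK\x03\x04", "ZIP archive"),
]

def identify(data: bytes) -> str:
    """Test a file's leading bytes against every known signature, as a
    magic database does; unrecognized data falls back to "unknown"."""
    for magic, name in MAGIC_NUMBERS:
        if data.startswith(magic):
            return name
    return "unknown"
```

Note the scan over every candidate signature, which is the inefficiency mentioned above when displaying large lists of files.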
So-called shebang lines in script files are a special case of magic numbers. Here, the magic number is human-readable text that identifies a specific command interpreter and options to be passed to the command interpreter.
Another operating system using magic numbers is AmigaOS, where magic numbers were called "Magic Cookies" and were adopted as a standard system to recognize executables in Hunk executable file format and also to let single programs, tools and utilities deal automatically with their saved data files, or any other kind of file types when saving and loading data. This system was then enhanced with the Amiga standard Datatype recognition system. Another method was the FourCC method, originating in OSType on Macintosh, later adapted by Interchange File Format (IFF) and derivatives.
External metadata
A final way of storing the format of a file is to explicitly store information about the format in the file system, rather than within the file itself.
This approach keeps the metadata separate from both the main data and the name, but is also less portable than either filename extensions or "magic numbers", since the format has to be converted from filesystem to filesystem. While this is also true to an extent with filename extensions—for instance, for compatibility with MS-DOS's three character limit—most forms of storage have a roughly equivalent definition of a file's data and name, but may have varying or no representation of further metadata.
Zip files and other archive files offer one way of handling metadata. A utility program collects multiple files together along with metadata about each file and the folders/directories they came from, all within one new file (e.g. a zip file with extension ). The new file is also compressed and possibly encrypted, but now is transmissible as a single file across operating systems, by FTP or as an email attachment. At the destination, it must be unzipped by a compatible utility to be useful, but the problems of transmission are solved this way.
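The round trip can be sketched with a standard archive library; the member names here are purely illustrative:

```python
import io
import zipfile

# Bundle two files, keeping their names and folder paths, into a single
# archive; the in-memory buffer stands in for a file on disk.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as archive:
    archive.writestr("docs/readme.txt", "hello")
    archive.writestr("img/logo.gif", b"GIF89a")

# At the destination, a compatible utility recovers the members and
# their per-file metadata from the single transmitted file.
buffer.seek(0)
with zipfile.ZipFile(buffer) as archive:
    names = archive.namelist()
    text = archive.read("docs/readme.txt")
```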
Mac OS type-codes
The Mac OS' Hierarchical File System stores codes for creator and type as part of the directory entry for each file. These codes are referred to as OSTypes. These codes could be any 4-byte sequence, but were often selected so that the ASCII representation formed a sequence of meaningful characters, such as an abbreviation of the application's name or the developer's initials. For instance a HyperCard "stack" file has a creator of (from Hypercard's previous name, "WildCard") and a type of . The BBEdit text editor has a creator code of referring to its original programmer, Rich Siegel. The type code specifies the format of the file, while the creator code specifies the default program to open it with when double-clicked by the user. For example, the user could have several text files all with the type code of , but which each open in a different program, due to having differing creator codes. This feature was intended so that, for example, human-readable plain-text files could be opened in a general purpose text editor, while programming or HTML code files would open in a specialized editor or IDE. However, this feature was often the source of user confusion, as which program would launch when the files were double-clicked was often unpredictable. RISC OS uses a similar system, consisting of a 12-bit number which can be looked up in a table of descriptions—e.g. the hexadecimal number is "aliased" to , representing a PostScript file.
Mac OS X uniform type identifiers (UTIs)
A Uniform Type Identifier (UTI) is a method used in macOS for uniquely identifying "typed" classes of entity, such as file formats. It was developed by Apple as a replacement for OSType (type & creator codes).
The UTI is a Core Foundation string, which uses a reverse-DNS string. Some common and standard types use a domain called (e.g. for a Portable Network Graphics image), while other domains can be used for third-party types (e.g. for Portable Document Format). UTIs can be defined within a hierarchical structure, known as a conformance hierarchy. Thus, conforms to a supertype of , which itself conforms to a supertype of . A UTI can exist in multiple hierarchies, which provides great flexibility.
In addition to file formats, UTIs can also be used for other entities which can exist in macOS, including:
Pasteboard data
Folders (directories)
Translatable types (as handled by the Translation Manager)
Bundles
Frameworks
Streaming data
Aliases and symlinks
OS/2 extended attributes
The HPFS, FAT12 and FAT16 (but not FAT32) filesystems allow the storage of "extended attributes" with files. These comprise an arbitrary set of triplets with a name, a coded type for the value and a value, where the names are unique and values can be up to 64 KB long. There are standardized meanings for certain types and names (under OS/2). One such is that the ".TYPE" extended attribute is used to determine the file type. Its value comprises a list of one or more file types associated with the file, each of which is a string, such as "Plain Text" or "HTML document". Thus a file may have several types.
The NTFS filesystem also allows storage of OS/2 extended attributes, as one of the file forks, but this feature is merely present to support the OS/2 subsystem (not present in XP), so the Win32 subsystem treats this information as an opaque block of data and does not use it. Instead, it relies on other file forks to store meta-information in Win32-specific formats. OS/2 extended attributes can still be read and written by Win32 programs, but the data must be entirely parsed by applications.
POSIX extended attributes
On Unix and Unix-like systems, the ext2, ext3, ext4, ReiserFS version 3, XFS, JFS, FFS, and HFS+ filesystems allow the storage of extended attributes with files. These include an arbitrary list of "name=value" strings, where the names are unique and a value can be accessed through its related name.
PRONOM unique identifiers (PUIDs)
The PRONOM Persistent Unique Identifier (PUID) is an extensible scheme of persistent, unique and unambiguous identifiers for file formats, which has been developed by The National Archives of the UK as part of its PRONOM technical registry service. PUIDs can be expressed as Uniform Resource Identifiers using the namespace. Although not yet widely used outside of UK government and some digital preservation programmes, the PUID scheme does provide greater granularity than most alternative schemes.
MIME types
MIME types are widely used in many Internet-related applications, and increasingly elsewhere, although their usage for on-disc type information is rare. These consist of a standardised system of identifiers (managed by IANA) consisting of a type and a sub-type, separated by a slash, for instance or . These were originally intended as a way of identifying what type of file was attached to an e-mail, independent of the source and target operating systems. MIME types identify files on BeOS, AmigaOS 4.0 and MorphOS, as well as store unique application signatures for application launching. In AmigaOS and MorphOS, the MIME type system works in parallel with the Amiga-specific Datatype system.
There are problems with MIME types, though; several organisations and people have created their own MIME types without registering them properly with IANA, which makes the use of this standard awkward in some cases.
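A short sketch of extension-to-MIME-type mapping using the registry that ships with a standard library:

```python
import mimetypes

# Map file names to MIME type/subtype pairs using the stock registry.
mime, _encoding = mimetypes.guess_type("report.html")

# An extension nobody has registered yields no type at all, which is
# the interoperability problem unregistered MIME types create.
unknown, _ = mimetypes.guess_type("data.madeup-ext")
```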
File format identifiers (FFIDs)
File format identifiers is another, not widely used way to identify file formats according to their origin and their file category. It was created for the Description Explorer suite of software. It is composed of several digits of the form . The first part indicates the organisation origin/maintainer (this number represents a value in a company/standards organisation database), the 2 following digits categorize the type of file in hexadecimal. The final part is composed of the usual filename extension of the file or the international standard number of the file, padded left with zeros. For example, the PNG file specification has the FFID of where indicates an image file, is the standard number and indicates the International Organization for Standardization (ISO).
File content based format identification
Another, less popular way to identify a file format is to examine the file contents for distinguishable patterns among file types. The contents of a file are a sequence of bytes, and a byte has 256 possible values (0–255). Thus, counting the occurrences of byte values, often referred to as the byte frequency distribution, gives distinguishable patterns that can identify file types. There are many content-based file type identification schemes that use the byte frequency distribution to build representative models for file types and apply statistical and data mining techniques to identify file types.
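A minimal sketch of the byte-frequency idea follows; the printable-ASCII heuristic is a toy discriminator, whereas real schemes train statistical models per file type:

```python
from collections import Counter

def byte_frequency(data: bytes) -> list:
    """Return the relative frequency of each of the 256 byte values."""
    counts = Counter(data)
    total = len(data) or 1
    return [counts.get(value, 0) / total for value in range(256)]

def looks_like_text(data: bytes, threshold: float = 0.95) -> bool:
    """Toy discriminator: text concentrates its byte-frequency mass in
    tabs, newlines and printable ASCII; binary data spreads out."""
    freq = byte_frequency(data)
    printable_mass = sum(freq[9:14]) + sum(freq[32:127])
    return printable_mass >= threshold
```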
File structure
There are several types of ways to structure data in a file. The most usual ones are described below.
Unstructured formats (raw memory dumps)
Early file formats used raw data layouts that consisted of directly dumping the memory images of one or more structures into the file.
This has several drawbacks. Unless the memory images also have reserved spaces for future extensions, extending and improving this type of file is very difficult. It also creates files that might be specific to one platform or programming language (for example, a structure containing a Pascal string is not recognized as such in C). On the other hand, developing tools for reading and writing these types of files is very simple.
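The record layout below sketches the idea; pinning the byte order and field widths explicitly, as it does, is precisely what naive memory dumps fail to do:

```python
import struct

# A fixed "raw dump" record layout: an id, a 16-byte name field, and a
# score. "<" pins little-endian with no padding; a compiler's native
# layout ("@") would vary across platforms and languages, which is the
# portability problem raw memory dumps suffer from.
RECORD = struct.Struct("<I16sd")

def dump_record(rec_id: int, name: str, score: float) -> bytes:
    # struct pads the 16-byte string field with NULs automatically.
    return RECORD.pack(rec_id, name.encode("ascii"), score)

def load_record(blob: bytes):
    rec_id, raw_name, score = RECORD.unpack(blob)
    return rec_id, raw_name.rstrip(b"\0").decode("ascii"), score
```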
The limitations of the unstructured formats led to the development of other types of file formats that could be easily extended and be backward compatible at the same time.
Chunk-based formats
In this kind of file structure, each piece of data is embedded in a container that somehow identifies the data. The container's scope can be identified by start- and end-markers of some kind, by an explicit length field somewhere, or by fixed requirements of the file format's definition.
Throughout the 1970s, many programs used formats of this general kind, for example word-processing and typesetting systems such as troff, Script, and Scribe, and database export formats such as CSV. Electronic Arts and Commodore-Amiga also used this type of structure in 1985, with their IFF (Interchange File Format).
A container is sometimes called a "chunk", although "chunk" may also imply that each piece is small, and/or that chunks do not contain other chunks; many formats do not impose those requirements.
The information that identifies a particular "chunk" may be called many different things, often terms including "field name", "identifier", "label", or "tag". The identifiers are often human-readable, and classify parts of the data: for example, as a "surname", "address", "rectangle", "font name", etc. These are not the same thing as identifiers in the sense of a database key or serial number (although an identifier may well identify its associated data as such a key).
With this type of file structure, tools that do not know certain chunk identifiers simply skip those that they do not understand. Depending on the actual meaning of the skipped data, this may or may not be useful (CSS explicitly defines such behavior).
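The skip-unknown-chunks pattern can be sketched as follows; the 4-byte identifier plus little-endian length layout here is a simplified illustration, not the layout of any one real format:

```python
import struct
from io import BytesIO

def iter_chunks(stream):
    """Yield (identifier, payload) pairs from a simple chunk stream.

    Assumed layout (IFF/RIFF-like, simplified): a 4-byte ASCII
    identifier, a 4-byte little-endian length, then the payload.
    """
    while True:
        header = stream.read(8)
        if len(header) < 8:
            return
        ident, length = struct.unpack("<4sI", header)
        yield ident, stream.read(length)

# A reader that only understands "DATA" chunks simply skips the rest.
raw = (b"META" + struct.pack("<I", 2) + b"xy" +
       b"DATA" + struct.pack("<I", 3) + b"abc")
data = [payload for ident, payload in iter_chunks(BytesIO(raw))
        if ident == b"DATA"]
```

Because the unknown "META" chunk carries its own length, the reader can jump over it without understanding its contents.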
This concept has been used again and again by RIFF (Microsoft-IBM equivalent of IFF), PNG, JPEG storage, DER (Distinguished Encoding Rules) encoded streams and files (which were originally described in CCITT X.409:1984 and therefore predate IFF), and Structured Data Exchange Format (SDXF).
Indeed, any data format must somehow identify the significance of its component parts, and embedded boundary-markers are an obvious way to do so:
MIME headers do this with a colon-separated label at the start of each logical line. MIME headers cannot contain other MIME headers, though the data content of some headers has sub-parts that can be extracted by other conventions.
CSV and similar files often do this using a header record with field names, and with commas to mark the field boundaries. Like MIME, CSV has no provision for structures with more than one level.
XML and its kin can be loosely considered a kind of chunk-based format, since data elements are identified by markup that is akin to chunk identifiers. However, it has formal advantages such as schemas and validation, as well as the ability to represent more complex structures such as trees, DAGs, and graphs. If XML is considered a "chunk" format, then SGML and its predecessor IBM GML are among the earliest examples of such formats.
JSON is similar to XML without schemas, cross-references, or a definition for the meaning of repeated field-names, and is often convenient for programmers.
YAML is similar to JSON, but uses indentation to separate data chunks and aims to be more human-readable than JSON or XML.
Protocol Buffers are in turn similar to JSON, notably replacing boundary-markers in the data with field numbers, which are mapped to/from names by some external mechanism.
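The CSV-versus-JSON contrast above can be made concrete with Python's standard library; the field names `surname` and `address` are just illustrative. CSV names each field once in a header record, while JSON repeats the names on every record:

```python
import csv
import io
import json

rows = [{"surname": "Ada", "address": "Box 1"},
        {"surname": "Lin", "address": "Box 2"}]

# CSV: a single header record names the fields; commas mark boundaries.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["surname", "address"])
writer.writeheader()
writer.writerows(rows)
csv_text = out.getvalue()

# JSON: the field names are repeated on every record instead.
json_text = json.dumps(rows)

# Both encodings carry the same flat, one-level data.
back = list(csv.DictReader(io.StringIO(csv_text)))
```

Either serialization round-trips this flat table; only formats like XML or JSON with nesting can represent deeper structures.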
Directory-based formats
This is another extensible structure, which closely resembles a file system (OLE documents are actual filesystems): the file is composed of 'directory entries' that record the location of each piece of data within the file, along with its signature (and, in certain cases, its type). Good examples of this kind of structure are disk images, OLE documents, TIFF files, and libraries. ODT and DOCX files, being PKZIP-based, are chunked and also carry a directory.
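As a concrete sketch, Python's standard zipfile module reads a PKZIP archive's central directory, which is how PKZIP-based formats such as DOCX and ODT locate their member files:

```python
import io
import zipfile

# Build a small PKZIP archive in memory, then read it back.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("word/document.xml", "<doc/>")
    zf.writestr("meta.txt", "hello")

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()       # entry names come from the central directory
    data = zf.read("meta.txt")  # each directory entry records its data's offset
```

The directory lets a reader find any member without scanning the whole file, which is the defining advantage of directory-based formats.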
See also
Audio file format
Chemical file format
Comparison of executable file formats
Digital container format
Document file format
DROID file format identification utility
File (command), a file type identification utility
File conversion
Future proofing
Graphics file format summary
Image file formats
List of archive formats
List of file formats
List of file signatures, or "magic numbers"
List of filename extensions (alphabetical)
List of free file formats
List of motion and gesture file formats
Magic number (programming)
Object file
Video file format
Windows file types
References
External links
("The file formats you use have a direct impact on your ability to open those files at a later date and on the ability of other people to access those data")
Archos Generation 5

Archos Generation 5 is a series of portable media players introduced in 2007.
705 WiFi
The Archos 705 WiFi was released on November 16, 2007, in capacities of 80 GB and 160 GB, with the same overall design as the previous generation and an updated operating system and hardware compatibility. Unlike the 405 and 605 WiFi, the 705 WiFi has an 18-bit color depth instead of 24-bit truecolor.
605 WiFi
The Archos 605 WiFi was launched on September 1, 2007. While the player is a technical update of the previous generation, the 605 WiFi is offered in larger capacities, much like the 504. The 4 GB flash-based version includes an SDHC card slot. The player is also available in hard drive capacities of 20 GB, 30 GB, and 40 GB (UK only); these models have a thinner profile and smaller buttons. The button layout remains the same but adds a volume control. The battery is no longer removable, though an extended battery pack is available as an option, and the kickstand is moved to the edge of the player. The 605 WiFi ships with a portable version of Opera and supports Adobe Flash. An online content portal is linked for directly purchasing or renting media from various distributors.
A special edition of the 605 WiFi was released in Europe. This version has a 20GB capacity and is bundled with Charlie Chaplin movies and a custom engraving on the back. A Harry Potter edition was later released.
605 GPS
Archos released a variant of the 605 WiFi with an additional GPS feature. It was available in a 4 GB flash-based capacity and a 30 GB hard drive capacity.
405
Flash based, comes in 2 GB with SDHC expansion, and a 3.5" QVGA TFT screen in 4:3 format.
Available in a limited edition lilac colour.
A hard drive version of this player has been released in Europe in a capacity of 30GB.
105
The Archos 105 is a flash-based PMP with a OLED display that is capable of WMV playback. The player is currently available in a 2 GB capacity.
Features
DVR Station
The optional DVR Station plays back video at DVD-quality resolution through composite, S-Video, RGB, or YPbPr video outputs, with audio in stereo or 5.1 surround sound through the S/PDIF output. An external hard drive can be attached to the DVR Station via a USB hub to increase storage capacity. The DVR feature has an infrared emitter that can control many different brands of TVs, cable boxes, and satellite boxes. An online TV guide shows scheduled broadcasts, which can be set to be downloaded and viewed at a later time. The DVR Station also comes with a QWERTY remote control for surfing the web or using the TV program guide directly on a TV screen. (The online TV guide service has been discontinued in North America.)
Content Portal
The Archos Content Portals (ACP) allow the 605 WiFi and 705 WiFi to purchase or rent videos directly on the device over wireless internet. The content on the ACPs is provided by other companies, each with its own "portal". Currently only CinemaNow offers service for North American users. A deal with Paramount was also announced in April 2008.
Web Browser
The web browser for the 605 WiFi and 705 WiFi is an iteration of Opera for devices. Unlike on the 604 WiFi, it is available as an optional plug-in. With Generation 5, Flash 7 support was added. On November 28, 2007, along with game packs, seven widgets were added free as part of the web-browser plug-in. In April 2008, an update to Flash 9 was added, giving better compatibility with the latest Flash applications.
GPS
A GPS add-on was announced in April 2008 for the 605 and 705, with support for the Europe, North America, and China regions. The GPS is activated when a clamp accessory is attached to the player, giving the Archos 605 GPS navigation and real-time traffic reports.
WebTV & Radio
Archos Web TV: allows the user to aggregate and watch streams of web TV and video.
Archos Web Radio: allows the user to aggregate and listen to streams of web radio and audio.
Source Code Release
Archos has also released the source code for the current generation of players. However, hardware locks prevent any easy attempt at creating homebrew firmware without the use of an exploit.
GFT Exploit
A software-only exploit called GFT was discovered for several Archos players, including the 605 WiFi and 705 models, that enables the user to run Linux commands directly on the system; it can further be used to install an SSH server on the system, allowing remote root access. However, the 2.x versions of the firmware disable the ability to apply the GFT exploit. This is an undocumented fix on Archos' side and is not listed in the changelog for the release. Firmware 2.0 or later is automatically included on all newly shipping 605 units.
Qtopia
The 605 WiFi, along with, reportedly, the 705 WiFi and 604 WiFi, was successfully hacked to run the Linux platform Qtopia, with help from the same users who had done so on the older PMA400. Qtopia can only be installed on units that are compatible with the GFT exploit.
openPMA-NG
The exploit also triggered the development of openPMA-NG, based on pdaXrom. Updates show compatibility with common Linux applications such as GIMP and Mozilla Firefox.
See also
Comparison of portable media players
Portable media player
List of portable media players with Wi-Fi connectivity
References
External links
Manufacturer's Website
Archos 605 WiFi and DVR Station hands on review at ArsGeek
Archos 605 WiFi at WikiSpecs
Portable media players
Products introduced in 2007 |
Lego Mindstorms NXT

Lego Mindstorms NXT is a programmable robotics kit released by Lego in late July 2006.
It replaced the first-generation Lego Mindstorms kit, which was called the Robotics Invention System. The base kit ships in two versions: the Retail Version (set #8527) and the Education Base Set (set #9797). It comes with the NXT-G programming software, or optionally LabVIEW for Lego Mindstorms. A variety of unofficial languages exist, such as NXC, NBC, leJOS NXJ, and RobotC. The second generation of the set, the Lego Mindstorms NXT 2.0, was released on August 1, 2009, featuring a color sensor and other upgraded capabilities. The third generation, the EV3, was released in September 2013.
NXT Intelligent Brick
The main component in the kit is a brick-shaped computer called the NXT Intelligent Brick. It can take input from up to four sensors and control up to three motors via a modified version of RJ12 cables, very similar to, but incompatible with, RJ11 phone cords: the plastic pin that holds the cable in the socket is moved slightly to the right. The brick has a 100×64 pixel monochrome LCD and four buttons that can be used to navigate a user interface using hierarchical menus. It has a 32-bit ARM7TDMI-core Atmel AT91SAM7S256 microcontroller with 256 KB of flash memory and 64 KB of RAM, plus an 8-bit Atmel AVR ATmega48 microcontroller, and Bluetooth support. It also has a speaker and can play sound files at sampling rates up to 8 kHz. Power is supplied by six AA (1.5 V each) batteries in the consumer version of the kit, and by a Li-ion rechargeable battery and charger in the educational version.
The Intelligent Brick remains unchanged with NXT 2.0. A black version of the brick was made to celebrate the 10th anniversary of the Mindstorms System with no change to the internals.
Development kits
Lego has released the firmware for the NXT Intelligent Brick as open source, along with schematics for all hardware components.
Several developer kits are available that contain documentation for the NXT:
Software Developer Kit (SDK), includes information on host USB drivers, executable file format, and bytecode reference
Hardware Developer Kit (HDK), includes documentation and schematics for the NXT brick and sensors
Bluetooth Developer Kit (BDK), documents the protocols used for Bluetooth communications
Programming
Very simple programs can be created using the menu on the NXT Intelligent Brick. More complicated programs and sound files can be downloaded using a USB port or wirelessly using Bluetooth. Files can also be copied between two NXT bricks wirelessly, and some mobile phones can be used as a remote control. Up to three NXT bricks can communicate simultaneously via Bluetooth when user created programs are run.
The retail version of the kit includes software for writing programs that runs on Windows and Mac OS personal computers. The software is based on National Instruments LabVIEW and provides a visual programming language for writing simple programs and downloading them to the NXT Brick. This means that rather than requiring users to write lines of code, they can instead use flowchart-like "blocks" to design their program.
NXT-G
NXT-G v2.0 is a graphical programming environment that comes bundled with the NXT. With careful construction of blocks and wires to encapsulate complexity, NXT-G can be used for real-world programming. Parallel "sequence beams" are actually parallel threads, so this software is quite good for running a handful of parallel sense/respond loops (example: wait 60 seconds, play a "bonk" sound at low volume if battery is low, loop), or blending autonomous control with bluetooth or other "remote control". The language supports virtual instruments for all Lego branded and most 3rd party sensors/components. Version 2.0 contains new tutorial challenges, a remote control, custom graphics and sound designers, and new Lego color sensor support. Community support is significant, for example: http://www.brickshelf.com/cgi-bin/gallery.cgi?f=191310
C# with Microsoft Robotics Developer Studio
Free tools (Visual Studio Express in combination with the Robotics Developer Studio) enable programming the NXT using the C# language. Other supported languages include IronPython and VB.NET.
BricxCC, Next Byte Codes, Not eXactly C
Bricx Command Center (BricxCC) is the integrated development environment (IDE) used to write, compile, and edit NBC and NXC programs for the NXT. Also, as BricxCC was originally made for the RCX, programs for it can be written using NQC via BricxCC.
Different firmware versions can be flashed to the NXT using BricxCC.
BricxCC includes many utilities, such as NeXTExplorer (upload/download files, defragment the NXT, use the file hex viewer) and NeXTScreen (view what is on the NXT's LCD, and capture images and video).
Next Byte Codes (NBC) is a simple open source language with an assembly language syntax that can be used to program the NXT brick. BricxCC also has the capability to decompile standard .rxe NXT executables to NBC.
Not eXactly C (NXC) is a high-level open-source language, similar to C, built on the NBC compiler. It can also be used to program the NXT brick. NXC is essentially NQC for the NXT, and is one of the most widely used third-party programming languages for the NXT. In NXC, even creating video games for the NXT is possible; some users have even achieved grayscale rendering on the NXT screen.
Robolab
Robolab 2.9
Robolab is the graphical programming environment originally used on the RCX programmable brick. Version 2.9 has been updated so that it can be used to program the NXT brick. Lego has announced that it will stop officially supporting Robolab, but Robolab 2.9 is still available and there are still many user forums and other sources of help available.
RoboMind
RoboMind is educational software specially developed to teach students about logic, programming, and robotics. The strength of RoboMind is the compactness of the learning environment, which allows users to quickly develop and test scripts in a virtual environment. The scripts can then be transferred directly to a Lego Mindstorms NXT robot to see the result in real life. RoboMind scripts run on the standard firmware.
Enchanting
Enchanting brings NXT programming into the popular Scratch IDE, designed by the Lifelong Kindergarten Group at MIT to make programming intuitive even for young children. The resulting NXT programs have the compactness and clarity offered by that programming environment.
ROBOTC
ROBOTC is a programming-language based on C for VEX, the VEX Cortex, FIRST Tech Challenge, and Lego Mindstorms. ROBOTC runs a very optimized firmware which allows the NXT to run programs very quickly, and also compresses the files so that a large number of programs can fit into the NXT. Like other NXT languages, ROBOTC requires this firmware to be downloaded from the ROBOTC interface in order to run.
NXTGCC
NXTGCC is a GCC toolchain for programming the NXT firmware in C.
leJOS NXT
leJOS NXJ is a high level open source language based on Java that uses custom firmware developed by the leJOS team.
nxtOSEK
To be able to write in C/C++, nxtOSEK can be used, but that requires custom firmware too.
ICON
To write files on the NXT itself, ICON by Steve Hassenplug is an ideal resource.
MATLAB and Simulink
MATLAB is a high-level programming language for numerical computing, data acquisition, and analysis. It can be used to control Lego NXT robots over a Bluetooth serial port (serial port communication is part of the base functionality of MATLAB) or via a USB connection; for example using the RWTH – Mindstorms NXT Toolbox (free & open-source).
Simulink is a block diagram environment for modeling and simulating dynamic systems. Using Simulink, a user can design and simulate control algorithms and Lego systems, and subsequently automatically program the Lego NXT or EV3. Support for programming the Lego NXT or EV3 only requires Simulink and is available at no additional charge.
MATLAB and Simulink Support for Lego Mindstorms programming is freely available. More information is at Lego Mindstorms Support from MATLAB and Simulink
Lua
pbLua is a port of the Lua programming language, a general purpose scripting language, for Lego Mindstorms.
Ada
A port of GNAT is available for the NXT. It relies on a dedicated run-time kernel based on the Ravenscar profile, the same one used on the GOCE satellite; this permits the use of high-level Ada features to develop concurrent and real-time systems on the Mindstorms NXT.
URBI
URBI is yet another language and is a parallel and event-driven language, with interfaces to C++/Java and Matlab. It also has a component architecture (UObject) for distribution. Urbi is compatible with many robots, including Nao (cf Robocup), Bioloid or Aibo.
FLL NXT Navigation
FLL NXT Navigation is an open-source program to help with navigation on the FLL competition table. It uses NXT-G and .txt files to write programs. It is unclear whether its use is legal in FLL competitions.
Ruby-nxt
Ruby-nxt is a library to program the NXT for the Ruby programming language. Unlike the other languages for the NXT, the code is not compiled to a binary file. Instead the code is directly transmitted to the NXT via a Bluetooth connection.
Robotics.NXT
Robotics.NXT is a Haskell interface to the NXT over Bluetooth. It supports direct commands, messages, and many sensors (also unofficial ones). It also has support for simple message-based control of an NXT brick via a remotely executed program (basic NXC code included).
LibNXT
LibNXT is a utility library for talking to the Lego Mindstorms NXT intelligent brick at a relatively low level. LibNXT is targeted mainly at the platforms that the official Lego Mindstorms NXT software overlooks, namely Linux and other Unices. It will work on any POSIX-compliant operating system where libusb 0.1 is supported. Windows support is also possible with the Win32 port of libusb.
C_NXT
C_NXT is a library for controlling the Lego NXT, licensed under the GPLv2. The library allows users to control a Lego NXT via Bluetooth from within other C programs. The library provides low-level control and high-level abstraction. The library only runs on Linux.
PyNXC
PyNXC is a project which converts Python code to "Not Exactly C" (NXC) code, to download to Lego Mindstorms Robots.
NXT-Python
NXT-Python is a python module, which communicates with the NXT via USB or Bluetooth. It supports direct commands and several aftermarket sensors.
LEGO Mindstorms EV3 Software
The software which ships with the newer Mindstorms EV3 set can be used to program the NXT. At the moment, Bluetooth is not supported for the NXT, so programs must be downloaded via a USB cable.
Physical Etoys
Physical Etoys is a visual programming system for different electronic devices. It supports direct mode and compiled mode.
C/C++ Interpreter Ch
Ch is a C/C++ interpreter that runs C/C++ code to control a Lego NXT or EV3. No firmware upload/download is required, and no compilation is needed. C/C++ code running in Ch can control a Lego NXT, an EV3, or multiple NXT/EV3 bricks.
Sensors
The Lego Mindstorms NXT 1.0 base kit includes:
3 identical servo motors that have built-in reduction gear assemblies with internal optical rotary encoders that sense their rotations within one degree of accuracy.
The touch sensor detects whether it is currently pressed, has been bumped, or released. The orange Enter button and the gray right and left NXT buttons can be programmed to serve as touch sensors. In the NXT-G programming software, a value of 0 is given out when it is not pressed, and a value of 1 is given out if it is pressed down.
The light sensor detects the light level in one direction, and also includes an LED for illuminating an object. The light sensor can sense reflected light values (using the built-in red LED), or ambient light. In the NXT-G programming software the sensor senses light on a scale of 0 to 100, 100 being very bright and 0 being dark. If calibrated, the sensor can also be used as a distance sensor.
The sound sensor measures volume level on a scale of 0 to 100, 100 being very loud, 0 being completely silent.
The ultrasonic sensor can measure the distance from the sensor to an object that it is facing, and detect movement. It can show the distance in both centimeters and inches. The maximum distance it can measure is 233 cm, with a precision of 3 centimeters. The ultrasonic sensor works by sending out ultrasonic sound waves that bounce off an object ahead of it and return, and sensing how long this takes.
The Lego Mindstorms NXT 2.0 base kit includes two touch sensors, one color sensor (which detects several different colors), and an ultrasonic sensor.
These parts are not included in the Lego Mindstorms NXT base kit and may be bought separately:
Third-party companies also manufacture sensors, in addition to the compass, gyroscope, infrared tracker, RFID reader, and accelerometer sensors sold by Lego.
The temperature sensor can measure temperature in Celsius or Fahrenheit.
The sensors come assembled and programmed. In the software (see Programming above), people can decide what to do with the information that comes from the sensors, such as programming the robot to move forward until it touches something.
Lego also sells an adapter to the Vernier sensor product line. Vernier produces data collection devices and related software for use in education.
Connector
Sensors are connected to the NXT brick using a 6-position modular connector that features both analog and digital interfaces. The analog interface is backward-compatible (using an adapter) with the older Robotics Invention System. The digital interface is capable of both I2C and RS-485 communication.
NXT 2.0
Lego Mindstorms NXT 2.0 is the second set from LEGO's Lego Mindstorms series, launched on August 5, 2009 at the Lego Shop in the U.S. The set contains 619 pieces, including a new sensor that can detect colors. It is priced at approximately US$280, C$350, £230 or A$500. Lego Mindstorms NXT 2.0 has a successor, called the Lego Mindstorms EV3.
8547 Kit Features
Includes a sound editor for recording any sound and then programming the NXT Brick to play it.
Includes an image editor for downloading an image to the NXT Brick to appear on the screen.
Includes 619 pieces (including the NXT Brick)
NXT Intelligent Brick
32-bit Atmel AT91SAM7S256 main microcontroller (256 KB flash memory, 64 KB RAM)
8-bit Atmel ATmega48 microcontroller @ 4 MHz (4 KB flash memory, 512 Bytes RAM)
100×64 pixel LCD screen
Four RJ12 input ports (ports 1-4)
Three RJ12 output ports (ports A-C)
USB port
Bluetooth Class II V2.0
Loudspeaker – 8 kHz sound quality, 8-bit resolution, 2–16 kHz sample rate
Four push buttons, used to navigate menus and can be used in programs.
Powered by six AA batteries or the NXT rechargeable battery
Sensors
Parts can be ordered separately. In the original kit, the sensors included are the color sensor, two touch sensors, and an ultrasonic sensor:
Color sensor (9694), for detecting 6 different colors: blue, green, red, yellow, white, black
Light sensor (9844), for detecting levels of light. (Included in first version, but in 2.0, replaced by color sensor.)
Touch sensor (9843), a simple button that senses if something collided with it.
Ultrasonic sensor (9846), for measuring distances using inaudible sound waves.
Sound sensor (9845), for basic "hearing". Capable of measuring volume, but cannot record actual sounds.
Compass sensor (MS1034), for detecting direction. Has a built-in calibrator to reduce interference from other magnetic items. (Not included in basic kit, for advanced users.)
Accelerometer sensor (MS1040), for sensing which general direction it's moving in. Also can measure g-force. (Not included in basic kit, for advanced users.)
RFID sensor, for communication between multiple robots. (Not included in basic kit; for very advanced users.)
Rotation sensor (built into servo motors), for measuring how far it has turned. This is unique, because it measures based on the turn of the gears inside, rather than the motor itself. Useful for robots that will coast and act based on distance rolled.
Bluetooth communication (built into "Intelligent brick"), for communication with other devices. Can be used mid-program or for downloading new programs and data.
Actuators
Servo motor (9842)
The color sensor can also shine its light in red, green, or blue. (Normally it senses color by shining the lamp and reading the reflected light levels; the same lamp can be used for other purposes.)
Programming
Very simple programs can be created using the NXT Intelligent Brick itself. In order to create larger, more complex programs, programming software on a PC is required. The standard programming software is NXT-G, which is included in the package. Third-party programming software is also available, some of which is listed below:
NXT-G
NXT-G is the programming software included in the standard base kit. It is based on LabVIEW graphical programming. It features an interactive drag-and-drop environment.
LabVIEW Toolkit
NXT-G is powered by LabVIEW, an industry standard in programming. Created by National Instruments, LabVIEW uses data flow programming to create a virtual instrument. To allow for more advanced programming, in the graphical sense, National Instruments released a Toolkit for the NXT. Version 1.0 came out in December 2006. Since its release, several bugs have been found and new sensors have been created. While the toolkit does allow for the creation of new sensors, National Instruments has yet to formally release an update.
Lego::NXT
Lego::NXT provides an API between Perl and the NXT.
Ada
A port of GNAT is available for the NXT. It requires nxtOSEK to run. The port includes Ada bindings to the NXT hardware and nxtOSEK.
Next Byte Codes & Not eXactly C
Next Byte Codes (NBC) is a simple open-source language with an assembly language syntax that can be used to program the NXT brick.
Not eXactly C (NXC) is a high level open-source language, similar to C, built on top of the NBC compiler. It can also be used to program the NXT brick. NXC is basically NQC for the NXT. It is the most widely used third-party programming language.
ROBOTC
ROBOTC is an integrated development environment targeted towards students that is used to program and control Lego NXT, VEX, RCX, and Arduino robots using a programming language based on the C programming language.
RoboMind
RoboMind is an educational programming environment that offers a concise scripting language for programming a simulated robot. These internationalized scripts can, however, also directly be exported to Lego Mindstorms robots. It does not require custom firmware in order to run.
NXTGCC
NXTGCC is a GCC toolchain for programming the NXT firmware in C.
URBI
URBI is a parallel and event-driven language, with interfaces to C++/Java and MATLAB. It also has a component architecture (UObject) for distributed computation. Urbi is compatible with many robots, including Nao (cf Robocup), Bioloid or Aibo.
leJOS NXJ
leJOS NXJ is a high level open source language based on Java that uses custom firmware developed by the leJOS team.
nxtOSEK
To be able to write in C/C++, nxtOSEK can be used, but that requires custom firmware too.
MATLAB and Simulink
MATLAB is a high-level programming language for numerical computing, data acquisition and analysis. It can be used to control Lego NXT robots over a Bluetooth serial port (serial port communication is part of the base functionality of MATLAB) or via a USB connection; for example using the RWTH – Mindstorms NXT Toolbox (free & open-source).
Simulink is a MATLAB-based environment for modeling and simulating dynamic systems. Using Simulink, a user can design control algorithms, automatically generate C code for those algorithms, and download the compiled code onto the Lego NXT.
MATLAB and Simulink code for NXT programming is freely available.
Lua
pbLua is an implementation of the Lua programming language, a general purpose scripting language, for Lego Mindstorms.
FLL NXT Navigation
FLL NXT Navigation is an open-source program to help with navigation on the FLL competition table. It uses NXT-G and .txt files to write programs.
ruby-nxt
ruby-nxt is a library to program the NXT in the Ruby programming language. Unlike the other languages for the NXT, the code is not compiled to a binary file. Instead, the code is transmitted directly to the NXT via a Bluetooth connection. This method of execution is significantly slower than executing compiled code directly.
Robotics.NXT
Robotics.NXT is a Haskell interface to the NXT over Bluetooth. It supports direct commands, messages, and many sensors (also unofficial ones). It also has support for simple message-based control of an NXT brick via a remotely executed program (basic NXC code included).
See also
Braigo Braille Lego printer low-cost project
Lego Mindstorms EV3
Lego Mindstorms
Robotics Invention System
URBI
Robotics suite
Dexter Industries – Sensors for the Lego Mindstorms NXT
FIRST Lego League – A competition with the Lego Mindstorms NXT robot
RobotAppStore – Apps for Robots (including Lego Mindstorms NXT)
Robots
Notes
External links
lego.Edutech.com, Official Lego Education partner
external controller with open hardware beaglebone
Program NXT, help for programming your Lego Mindstorms NXT
HiTechnic.com, LEGO Certified Sensors for the Lego Mindstorms
mindsensors.com, Sensors for the Lego Mindstorms NXT
Trinfactor3.com, Enables use of 32 analog sensors with 1 NXT
robojoy-club, NXT robot and program for beginner
http://www.legomindstormsnxtstore.blogspot.com
Roberta, Educational Robotics
Lego Mindstorms Community and Projects
Lego Mindstorms NXT and Lego Mindstorms NXT 2.0 Projects
The NXT 2.0 Shooterbot in action
Lua (programming language)-scriptable hardware
Robot kits
Products introduced in 2006
2006 in robotics |
12751624 | https://en.wikipedia.org/wiki/Searchlight%20BBS | Searchlight BBS | Searchlight BBS is a bulletin board system (BBS) developed in 1985 by Frank LaRosa for the TRS-80. LaRosa formed a company, Searchlight Software, through which he marketed and sold Searchlight BBS. In 1987, LaRosa expanded the software and sold it as shareware written for the PC in Pascal (using Turbo Pascal). The features of Searchlight BBS included a full screen text editor, a remote DOS shell, and file transfer via the XMODEM protocol. Searchlight BBS rapidly grew in popularity, and appeared frequently in Boardwatch magazine and at BBS conventions across the United States. Eventually, Searchlight BBS supported FidoNet, ZMODEM, Internet e-mail and telnet connectivity.
In 1995 LaRosa began work on Spinnaker Web Server, to compete with Netscape and other web server software. Searchlight Software sold Searchlight BBS, along with Spinnaker Web Server, to TeleGrafix Communications in 1998.
References
External links
The Trashcan BBS (Still online in Christchurch, New Zealand)(telnet to bbs.thenet.gen.nz port 2324)
The Searchlight BBS Support Page
Frank LaRosa's Personal Homepage (Creator of Searchlight Software)
Telnet BBS Guide
The BBS Archives
BBS Documentary Video Collection (Internet Archive)
The TEXTFILES.COM Historical BBS List
Bulletin board system software |
47266 | https://en.wikipedia.org/wiki/UNICOS | UNICOS | UNICOS is a range of Unix and, later, Linux operating system (OS) variants developed by Cray for its supercomputers. UNICOS is the successor of the Cray Operating System (COS). It provides network clustering and source code compatibility layers for some other Unixes. UNICOS was originally introduced in 1985 with the Cray-2 system and later ported to other Cray models. The original UNICOS was based on UNIX System V Release 2, and had many Berkeley Software Distribution (BSD) features (e.g., computer networking and file system enhancements) added to it.
Development
CX-OS was the original name given to what is now UNICOS. This was a prototype system which ran on a Cray X-MP in 1984 before the Cray-2 port. It was used to demonstrate the feasibility of using Unix on a supercomputer system, before Cray-2 hardware was available.
The operating system revamp was part of a larger movement inside Cray Research to modernize their corporate software: including rewriting their most important Fortran compiler (cft to cft77) in a higher-level language (Pascal) with more modern optimizations and vectorizations.
As a migration path for existing COS customers wishing to transition to UNICOS, a Guest Operating System (GOS) capability was introduced into COS. The only guest OS that was ever supported was UNICOS. A COS batch job would be submitted to start up UNICOS, which would then run as a subsystem under COS, using a subset of the system's CPUs, memory, and peripheral devices. The UNICOS that ran under GOS was exactly the same as when it ran stand-alone: the difference was that the kernel would make certain low-level hardware requests through the COS GOS hook, rather than directly to the hardware.
One of the sites that ran very early versions of UNICOS was Bell Labs, where Unix pioneers including Dennis Ritchie ported parts of their Eighth Edition Unix (including STREAMS input/output (I/O)) to UNICOS. They also experimented with a guest facility within UNICOS, allowing the stand-alone version of the OS to host itself.
Releases
Cray released several different OSs under the name UNICOS, including:
UNICOS: the original Cray Unix, based on System V. Used on the Cray-1, Cray-2, X-MP, Y-MP, C90, etc.
UNICOS MAX: a Mach-based microkernel used on the T3D's processing elements, together with UNICOS on the host Y-MP or C90 system.
UNICOS/mk: a serverized version of UNICOS using the Chorus microkernel to make a distributed operating system. Used on the T3E. This was the last Cray OS really based on UNICOS sources, as the following products were based on different sources and simply used the "UNICOS" name.
UNICOS/mp: not derived from UNICOS, but based on IRIX 6.5. Used on the X1.
UNICOS/lc: not derived from UNICOS, but based on SUSE Linux. Used on the XT3, XT4 and XT5. UNICOS/lc 1.x comprises a combination of
the compute elements run the Catamount microkernel (which itself is based on Cougar)
the service elements run SUSE Linux
Cray Linux Environment (CLE): from release 2.1 onward, UNICOS/lc is now called Cray Linux Environment
the compute elements run Compute Node Linux (CNL) (which is a customized Linux kernel)
the service elements run SUSE Linux Enterprise Server
See also
Scientific Linux, a Linux distribution by Fermilab and CERN
Rocks Cluster Distribution, a Linux distribution for supercomputers
References
Cray software
Microkernel-based operating systems
Microkernels
Supercomputer operating systems
UNIX System V
Unix distributions
Linux distributions
1984 software |
70128011 | https://en.wikipedia.org/wiki/2022%20USC%20Trojans%20men%27s%20volleyball%20team | 2022 USC Trojans men's volleyball team | The 2022 USC Trojans men's volleyball team represents the University of Southern California in the 2022 NCAA Division I & II men's volleyball season. The Trojans, led by seventh-year head coach Jeff Nygaard, play their home games at Galen Center. The Trojans are members of the MPSF and were picked to finish fifth in the MPSF preseason poll.
Season highlights
Will be filled in as the season progresses.
Roster
Schedule
TV/Internet Streaming information:
All home games will be televised on Pac-12 Network or streamed on Pac-12+ USC. Most road games will also be streamed by the school's streaming service. The conference tournament will be streamed by FloVolleyball.
*-Indicates conference match.
Times listed are Pacific Time Zone.
Announcers for televised games
UC Santa Barbara: Max Kelton & Katie Spieler
UC Santa Barbara: Max Kelton & Katie Spieler
Princeton: Mark Beltran & Paul Duchesne
Erskine: Mark Beltran & Paul Duchesne
Penn State: Anne Marie Anderson
Ohio State: Denny Cline
UC Irvine: Brian Webber
UC Irvine: Rob Espero & Charlie Brande
UC San Diego: Brian Webber
Long Beach State: Matt Brown & Matt Prosser
UC Santa Barbara: Kevin Barnett
CSUN: Mark Beltran & Paul Duchesne
Pepperdine: Al Epstein
Pepperdine: Anne Marie Anderson
BYU: Mark Beltran & Paul Duchesne
BYU: Anne Marie Anderson
George Mason:
Vanguard:
Menlo:
Stanford:
Stanford:
UCLA:
UCLA:
Concordia Irvine:
Concordia Irvine:
Grand Canyon:
Grand Canyon:
MPSF Tournament:
Rankings
^The media did not release a preseason poll.
References
2022 in sports in California
2022 NCAA Division I & II men's volleyball season
USC |
27815578 | https://en.wikipedia.org/wiki/Minecraft | Minecraft | Minecraft is a sandbox video game developed by the Swedish video game developer Mojang Studios. The game was created by Markus "Notch" Persson in the Java programming language. Following several early private testing versions, it was first made public in May 2009 before fully releasing in November 2011, with Jens "Jeb" Bergensten then taking over development. Minecraft has since been ported to several other platforms and is the best-selling video game of all time, with over 238 million copies sold and nearly 140 million monthly active users.
In Minecraft, players explore a blocky, procedurally generated 3D world with virtually infinite terrain, and may discover and extract raw materials, craft tools and items, and build structures, earthworks and simple machines. Depending on game mode, players can fight computer-controlled mobs, as well as cooperate with or compete against other players in the same world. Game modes include a survival mode, in which players must acquire resources to build the world and maintain health, and a creative mode, where players have unlimited resources and access to flight. Players can modify the game to create new gameplay mechanics, items, and assets.
Early versions of the game received critical acclaim, winning several awards and being cited as one of the greatest video games of all time. Social media, parodies, adaptations, merchandise, and the annual Minecon conventions played large roles in popularizing the game. The game has also been used in educational environments to teach chemistry, computer-aided design, and computer science. In 2014, Mojang and the Minecraft intellectual property were purchased by Microsoft for US$2.5 billion. A number of spin-off games have also been produced, such as Minecraft: Story Mode, Minecraft Dungeons, and the mobile game Minecraft Earth.
Gameplay
Minecraft is a 3D sandbox game that has no required goals to accomplish, allowing players a large amount of freedom in choosing how to play the game. However, there is an achievement system, known as "advancements" in the Java Edition of the game, and "trophies" on the PlayStation ports. Gameplay is in the first-person perspective by default, but players have the option for third-person perspective. The game world is composed of rough 3D objects—mainly cubes and fluids, and commonly called "blocks"—representing various materials, such as dirt, stone, ores, tree trunks, water, and lava. The core gameplay revolves around picking up and placing these objects. These blocks are arranged in a 3D grid, while players can move freely around the world. Players can "mine" blocks and then place them elsewhere, enabling them to build things. Many commentators have described the game's physics system as unrealistic. The game also contains a material known as redstone, which can be used to make primitive mechanical devices, electrical circuits, and logic gates, allowing for the construction of many complex systems.
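The reason redstone can express logic gates can be illustrated outside the game. The following is a plain Python sketch (not the in-game simulation): it treats a redstone torch as a NOT element and two wires feeding the same point as OR, from which other gates follow by composition.

```python
# Illustrative only: a redstone torch inverts its input (NOT), and two wires
# meeting at one point act as OR. Other gates are compositions of these two.
def torch(a: bool) -> bool:           # NOT
    return not a

def merge(a: bool, b: bool) -> bool:  # OR
    return a or b

def nor_gate(a: bool, b: bool) -> bool:   # a torch driven by two merged wires
    return torch(merge(a, b))

def and_gate(a: bool, b: bool) -> bool:   # De Morgan: a AND b == NOR(NOT a, NOT b)
    return nor_gate(torch(a), torch(b))

# Verify the composed AND gate against Python's own operator
for a in (False, True):
    for b in (False, True):
        assert and_gate(a, b) == (a and b)
```

Since NOR is functionally complete, any circuit built from these two primitives can in principle compute any Boolean function, which is what makes complex in-game machines possible.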
The game world is virtually infinite and procedurally generated as players explore it, using a map seed that is obtained from the system clock at the time of world creation (or manually specified by the player). There are limits on vertical movement, but Minecraft allows an infinitely large game world to be generated on the horizontal plane. Due to technical problems when extremely distant locations are reached, however, there is a barrier preventing players from traversing to locations beyond 30,000,000 blocks from the center. The game achieves this by splitting the world data into smaller sections called "chunks" that are only created or loaded when players are nearby. The world is divided into biomes ranging from deserts to jungles to snowfields; the terrain includes plains, mountains, forests, caves, and various lava/water bodies. The in-game time system follows a day and night cycle, and one full cycle lasts 20 real-time minutes.
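The seeding scheme described above (a clock-derived world seed plus on-demand chunk generation) can be sketched as follows. The mixing constants and the flat heightmap are hypothetical stand-ins, not Minecraft's actual terrain algorithm; the point is only that one seed reproducibly determines every chunk.

```python
import random
import time

def make_world_seed(explicit=None):
    # By default the seed comes from the clock at world creation;
    # a player-specified seed reproduces the same world exactly.
    return explicit if explicit is not None else time.time_ns()

def chunk_heightmap(world_seed, cx, cz, size=16):
    # Mix the world seed with the chunk coordinates (constants are arbitrary)
    # so every chunk can be generated independently yet reproducibly.
    mixed = (world_seed * 6364136223846793005
             + cx * 341873128712 + cz * 132897987541) & 0xFFFFFFFFFFFFFFFF
    rng = random.Random(mixed)
    return [[rng.randint(60, 70) for _ in range(size)] for _ in range(size)]

seed = make_world_seed(12345)
# The same seed and coordinates always regenerate identical terrain...
assert chunk_heightmap(seed, 0, 0) == chunk_heightmap(seed, 0, 0)
# ...while neighbouring chunks get independent randomness.
assert chunk_heightmap(seed, 0, 0) != chunk_heightmap(seed, 1, 0)
```

Because chunks are pure functions of (seed, coordinates), the game never needs to store unexplored terrain: it can generate any chunk on demand when a player approaches.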
When starting a new world, players must choose one of five game modes, as well as one of four difficulties, ranging from peaceful to hard. Increasing the difficulty of the game causes the player to take more damage from mobs, as well as having other difficulty-specific effects. For example, the peaceful difficulty prevents hostile mobs from spawning, and the hard difficulty allows players to starve to death if their hunger bar is depleted. Once selected, the difficulty can be changed, but the game mode is locked and can only be changed with cheats.
New players have a randomly selected default character skin of either Steve or Alex, but the option to create custom skins was made available in 2010. Players encounter various non-player characters known as mobs, such as animals, villagers, and hostile creatures. Passive mobs, such as cows, pigs, and chickens, can be hunted for food and crafting materials. They spawn in the daytime, while hostile mobs—including large spiders, skeletons, and zombies—spawn during nighttime or in dark places such as caves. Some hostile mobs, such as zombies, skeletons and drowned (underwater versions of zombies), burn under the sun if they have no headgear. Other creatures unique to Minecraft include the creeper (an exploding creature that sneaks up on the player) and the enderman (a creature with the ability to teleport as well as pick up and place blocks). There are also variants of mobs that spawn in different conditions; for example, zombies have husk variants that spawn in deserts.
Minecraft has two alternative dimensions besides the overworld (the main world): the Nether and the End. The Nether is a hell-like dimension accessed via player-built portals; it contains many unique resources and can be used to travel great distances in the overworld, due to every block traveled in the Nether being equivalent to 8 blocks traveled in the overworld. The player can build an optional boss mob called the Wither out of materials found in the Nether. The End is a barren land consisting of many islands floating above a dark, endless void. A boss dragon called the Ender Dragon dwells on the main island. Killing the dragon opens access to an exit portal, which upon entering cues the game's ending credits and a poem written by Irish novelist Julian Gough. Players are then teleported back to their spawn point and may continue the game indefinitely.
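The 8:1 horizontal correspondence between the Nether and the overworld amounts to simple coordinate arithmetic, sketched below (the function names are illustrative, not from the game's code):

```python
NETHER_SCALE = 8  # one block travelled in the Nether equals eight in the overworld

def overworld_to_nether(x: int, z: int):
    # Floor division keeps the mapping consistent for negative coordinates too.
    return (x // NETHER_SCALE, z // NETHER_SCALE)

def nether_to_overworld(x: int, z: int):
    return (x * NETHER_SCALE, z * NETHER_SCALE)

# Travelling 1,000 blocks in the Nether corresponds to 8,000 overworld blocks.
assert nether_to_overworld(1000, 0) == (8000, 0)
assert overworld_to_nether(-8000, 4096) == (-1000, 512)
```

This scaling is why players build paired portals: a short Nether tunnel links overworld destinations that would otherwise be thousands of blocks apart.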
Game modes
Survival mode
In survival mode, players have to gather natural resources such as wood and stone found in the environment in order to craft certain blocks and items. Depending on the difficulty, monsters spawn in darker areas outside a certain radius of the character, requiring players to build a shelter at night. The mode also has a health bar which is depleted by attacks from mobs, falls, drowning, falling into lava, suffocation, starvation, and other events. Players also have a hunger bar, which must be periodically refilled by eating food in-game, except in peaceful difficulty. If the hunger bar is depleted, automatic healing will stop and eventually health will deplete. Health replenishes when players have a nearly full hunger bar or continuously on peaceful difficulty.
Players can craft a wide variety of items in Minecraft. Craftable items include armor, which mitigates damage from attacks; weapons (such as swords or axes), which allows monsters and animals to be killed more easily; and tools, which break certain types of blocks more quickly. Some items have multiple tiers depending on the material used to craft them, with higher-tier items being more effective and durable. Players can construct furnaces, which can cook food, process ores, and convert materials into other materials. Players may also exchange goods with a villager (NPC) through a trading system, which involves trading emeralds for different goods and vice versa.
The game has an inventory system, allowing players to carry a limited number of items. Upon dying, items in the players' inventories are dropped unless the game is reconfigured not to do so. Players then re-spawn at their spawn point, which by default is where players first spawn in the game, and can be reset by sleeping in a bed or using a respawn anchor. Dropped items can be recovered if players can reach them before they disappear or despawn after 5 minutes. Players may acquire experience points by killing mobs and other players, mining, smelting ores, breeding animals, and cooking food. Experience can then be spent on enchanting tools, armor and weapons. Enchanted items are generally more powerful, last longer, or have other special effects.
Hardcore mode
Hardcore mode is a survival mode variant that is locked to the hardest setting and has permadeath. If a player dies in a hardcore world, they are no longer allowed to interact with it, so they can either be put into spectator mode and explore the world or delete it entirely. This game mode can only be accessed within the Java Edition of Minecraft.
Creative mode
In creative mode, players have access to nearly all resources and items in the game through the inventory menu, and can place or remove them instantly. Players can toggle the ability to fly freely around the game world at will, and their characters do not take any damage and are not affected by hunger. The game mode helps players focus on building and creating projects of any size without disturbance.
Adventure mode
Adventure mode was designed specifically so that players could experience user-crafted custom maps and adventures. Gameplay is similar to survival mode but with various restrictions, which can be applied to the game world by the creator of the map. This forces players to obtain the required items and experience adventures in the way that the map maker intended. Another addition designed for custom maps is the command block; this block allows map makers to expand interactions with players through scripted server commands.
Spectator mode
Spectator mode allows players to fly through blocks and watch gameplay without directly interacting. Players do not have an inventory, but can teleport to other players and view from the perspective of another player or creature. This game mode can only be accessed within Java Edition and Console Legacy Editions.
Multiplayer
Multiplayer in Minecraft enables multiple players to interact and communicate with each other on a single world. It is available through direct game-to-game multiplayer, LAN play, local split screen (console-only), and servers (player-hosted and business-hosted). Players can run their own servers, use a hosting provider, or connect directly to another player's game via Xbox Live. Single-player worlds have local area network support, allowing players to join a world on locally interconnected computers without a server setup. Minecraft multiplayer servers are guided by server operators, who have access to server commands such as setting the time of day and teleporting players. Operators can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having their own unique rules and customs. The largest and most popular server is Hypixel, which has been visited by over 14 million unique players. Player versus player combat (PvP) can be enabled to allow fighting between players. Many servers have custom plugins that allow actions that are not normally possible.
Minecraft Realms
In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use IP addresses. Minecraft: Java Edition Realms server owners can invite up to twenty people to play on their server, with up to ten players online at a time. Minecraft Realms server owners can invite up to 3000 people to play on their server, with up to ten players online at one time. The Minecraft: Java Edition Realms servers do not support user-made plugins, but players can play custom Minecraft maps. Minecraft Realms servers support user-made add-ons, resource packs, behavior packs, and custom Minecraft maps. At Electronic Entertainment Expo 2016, support for cross-platform play between Windows 10, iOS, and Android platforms was added through Realms starting in June 2016, with Xbox One and Nintendo Switch support to come later in 2017, and support for virtual reality devices. On 31 July 2017, Mojang released the beta version of the update allowing cross-platform play. Nintendo Switch support for Realms was released in July 2018.
Customization
The modding community consists of fans, users and third-party programmers. Using a variety of application program interfaces that have arisen over time, they have produced a wide variety of downloadable content for Minecraft, such as modifications, texture packs and custom maps. Modifications of the Minecraft code, called mods, add a variety of gameplay changes, ranging from new blocks, new items, new mobs to entire arrays of mechanisms to craft. The modding community is responsible for a substantial supply of mods from ones that enhance gameplay, such as minimaps, waypoints, and durability counters, to ones that add to the game elements from other video games and media. While a variety of mod frameworks were independently developed by reverse engineering the code, Mojang has also enhanced vanilla Minecraft with official frameworks for modification, allowing the production of community-created resource packs, which alter certain game elements including textures and sounds. Players can also create their own "maps" (custom world save files) which often contain specific rules, challenges, puzzles and quests, and share them for others to play. Mojang added an adventure mode in August 2012 and "command blocks" in October 2012, which were created specially for custom maps in Java Edition. Data packs, introduced in version 1.13 of the Java Edition, allow further customization, including the ability to add new advancements, dimensions, functions, loot tables, predicates, recipes, structures, tags, world generation settings, and biomes.
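As a rough illustration of the data pack mechanism described above, the snippet below assembles the two kinds of JSON files a pack is built from. The exact field names and file paths are version-dependent and should be checked against current documentation; treat this shape as an assumption, not an authoritative schema.

```python
import json

# pack.mcmeta identifies the pack; "pack_format" tracks the game version targeted.
pack_mcmeta = {"pack": {"pack_format": 10, "description": "Example data pack"}}

# A shaped crafting recipe, which would live under data/<namespace>/recipes/.
# (Key names follow commonly documented Java Edition recipe files, shown here
# as an illustration rather than an authoritative schema.)
shaped_recipe = {
    "type": "minecraft:crafting_shaped",
    "pattern": ["###", " | ", " | "],  # three planks over two sticks
    "key": {
        "#": {"item": "minecraft:oak_planks"},
        "|": {"item": "minecraft:stick"},
    },
    "result": {"item": "minecraft:wooden_pickaxe", "count": 1},
}

print(json.dumps(shaped_recipe, indent=2))
```

Because data packs are plain data files loaded per-world, they can change recipes, loot tables, and advancements without modifying the game's code, unlike mods.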
The Xbox 360 Edition supports downloadable content, which is available to purchase via the Xbox Games Store; these content packs usually contain additional character skins. It later received support for texture packs in its twelfth title update while introducing "mash-up packs", which combines texture packs with skin packs and changes to the game's sounds, music and user interface. The first mash-up pack (and by extension, the first texture pack) for the Xbox 360 Edition was released on 4 September 2013, and was themed after the Mass Effect franchise. Unlike Java Edition, however, the Xbox 360 Edition does not support player-made mods or custom maps. A cross-promotional resource pack based on the Super Mario franchise by Nintendo was released for the Wii U Edition worldwide on 17 May 2016. A mash-up pack based on Fallout was announced for release on the Wii U Edition. In April 2018, malware was discovered in several downloadable user-made Minecraft skins for use with the Java Edition of the game. Avast stated that nearly 50,000 accounts were infected, and when activated, the malware would attempt to reformat the user's hard drive. Mojang promptly patched the issue, and released a statement stating that "the code would not be run or read by the game itself", and would only run when the image containing the skin itself was opened.
In June 2017, Mojang released an update known as the "Discovery Update" to the Bedrock version of the game. The update includes a new map, a new game mode, the "Marketplace", a catalogue of user-generated content that gives Minecraft creators "another way to make a living from the game", and more.
Development
Before coming up with Minecraft, Markus "Notch" Persson was a game developer with King through March 2009, at the time serving mostly browser games, during which he learnt a number of different programming languages. He would prototype his own games during his off-hours at home, often based on inspiration he found from other games, and participated frequently on the TIGSource forums for independent developers. One of these personal projects was called "RubyDung", a base-building game inspired by Dwarf Fortress, but as an isometric three dimensional game like RollerCoaster Tycoon. He had already made a 3D texture mapper for another zombie game prototype he had started to try to emulate the style of Grand Theft Auto: Chinatown Wars. Among the features in "RubyDung" he explored was a first-person view similar to Dungeon Keeper but at the time, felt the graphics were too pixelated and omitted this mode. Around March 2009, Persson left King and joined jAlbum, but otherwise kept working on his prototypes.
Infiniminer, a block-based open-ended mining game first released in April 2009, sparked Persson's inspiration for how to take "RubyDung" forward. Infiniminer heavily influenced the visual style of gameplay, including bringing back the first-person mode, the "blocky" visual style and the block-building fundamentals. However, unlike Infiniminer, Persson wanted Minecraft to have RPG elements.
The original edition of Minecraft, now known as the Java Edition, was first developed in May 2009. Persson released a test video on YouTube of an early version of Minecraft. The base program of Minecraft was completed by Persson over a weekend in that month and a private testing was released on TigIRC on 16 May 2009. The game was first released to the public on 17 May 2009 as a developmental release on TIGSource forums. Persson updated the game based on feedback from the forums. This version later become known as the Classic version. Further developmental phases dubbed as Survival Test, Indev and Infdev were released in 2009 and 2010.
The first major update, dubbed Alpha, was released on 30 June 2010. Although Persson maintained a day job with Jalbum.net at first, he later quit in order to work on Minecraft full-time as sales of the alpha version of the game expanded. Persson continued to update the game with releases distributed to users automatically. These updates included new items, new blocks, new mobs, survival mode, and changes to the game's behavior (e.g. how water flows). To back the development of Minecraft, Persson set up a video game company, Mojang, with the money earned from the game. Mojang co-founders included Jakob Porser, one of Persson's coworkers from King, and Carl Manneh, jAlbum's CEO.
On 11 December 2010, Persson announced that Minecraft was entering its beta testing phase on 20 December 2010. He further stated that bug fixes and all updates leading up to and including the release would still be free. Over the course of the development, Mojang hired several new employees to work on the project.
Mojang moved the game out of beta and released the full version on 18 November 2011. On 1 December 2011, Jens "Jeb" Bergensten took full creative control over Minecraft, replacing Persson as lead designer. On 28 February 2012, Mojang announced that they had hired the developers of the popular "Bukkit" developer API for Minecraft, to improve Minecraft support of server modifications. This acquisition also included Mojang apparently taking full ownership of the CraftBukkit server mod which enables the use of Bukkit, although the validity of this claim was questioned due to its status as an open-source project with many contributors, licensed under the GNU General Public License and Lesser General Public License.
On 15 September 2014, Microsoft announced a $2.5 billion deal to buy Mojang, along with the ownership of the Minecraft intellectual property. The deal was suggested by Persson when he posted a tweet asking a corporation to buy his share of the game after receiving criticism for enforcing terms in the game's end user license agreement (EULA), which had been present in the EULA in the prior three years. According to Persson, Mojang CEO Carl Manneh received a call from a Microsoft executive shortly after the tweet, asking if Persson was serious about a deal. Mojang was also approached by other companies including Activision Blizzard and Electronic Arts. The deal with Microsoft was arbitrated on 6 November 2014, and led to Persson becoming one of Forbes "World's Billionaires".
Since the first full release of Minecraft, dubbed the "Adventure Update", the game has been continuously updated with many major updates, available for free to users who have already purchased the game. The most recent major update was "Caves & Cliffs Part II", which revamped both cave and mountain world generation, and was released on 30 November 2021. Another planned update, "The Wild Update", is set to be released in 2022.
The original version of the game was renamed to Minecraft: Java Edition on 18 September 2017 to separate it from Bedrock Edition, which was renamed to just Minecraft by the Better Together Update.
The Bedrock Edition has also been regularly updated, with these updates now matching the themes of Java Edition updates. Other versions of the game such as the various console editions and Pocket Edition were either merged into Bedrock and/or discontinued and as such have not received further updates.
On 16 April 2020, a beta version of Minecraft implementing physically based rendering, ray tracing and DLSS was released by Nvidia on RTX-enabled GPUs. The final version was released on 8 December 2020.
Minecraft: Pocket Edition
In August 2011, Minecraft: Pocket Edition was released for the Xperia Play on the Android Market as an early alpha version. It was then released for several other compatible devices on 8 October 2011. An iOS version of Minecraft was released on 17 November 2011. A port was made available for Windows Phones shortly after Microsoft acquired Mojang. The port concentrates on the creative building and the primitive survival aspect of the game, and does not contain all the features of the PC release. On his Twitter account, Jens Bergensten said that the Pocket Edition of Minecraft is written in C++ and not Java, due to iOS not being able to support Java.
On 10 December 2014, in observance of Mojang's acquisition by Microsoft, a port of Pocket Edition was released for Windows Phone 8.1. On 18 January 2017, Microsoft announced that it would no longer maintain the Windows Phone versions of Pocket Edition. On 19 December 2016, the full version of Minecraft: Pocket Edition was released on iOS, Android and Windows Phone.
Pocket Edition had been replaced by Minecraft: Bedrock Edition in 2017 enabling cross-platform play with Xbox One and Nintendo Switch editions.
Legacy console editions
An Xbox 360 version of the game, developed by 4J Studios, was released on 9 May 2012. On 22 March 2012, it was announced that Minecraft would be the flagship game in a new Xbox Live promotion called Arcade NEXT. The game differs from the home computer versions in a number of ways, including a newly designed crafting system, the control interface, in-game tutorials, split-screen multiplayer, and the ability to play with friends via Xbox Live. The worlds in the Xbox 360 version are also not "infinite", and are essentially barricaded by invisible walls. The Xbox 360 version was originally similar in content to older PC versions, but was gradually updated to bring it closer to the current PC version prior to its discontinuation. An Xbox One version featuring larger worlds among other enhancements was released on 5 September 2014.
Versions of the game for the PlayStation 3 and PlayStation 4 were released on 17 December 2013 and 4 September 2014 respectively. The PlayStation 4 version was announced as a launch title, though it was eventually delayed. A version for PlayStation Vita was also released in October 2014. Like the Xbox versions, the PlayStation versions were developed by 4J Studios.
On 17 December 2015, Minecraft: Wii U Edition was released. The Wii U version received a physical release on 17 June 2016 in North America, in Japan on 23 June 2016, and in Europe on 30 June 2016. A Nintendo Switch version of the game was released on the Nintendo eShop on 11 May 2017, along with a physical retail version set for a later date. During a Nintendo Direct presentation on 13 September 2017, Nintendo announced that Minecraft: New Nintendo 3DS Edition would be available for download immediately after the livestream, and a physical copy available on a later date. The game is only compatible with the "New" versions of the 3DS and 2DS systems, and does not work with the original 3DS, 3DS XL, or 2DS models.
On 20 September 2017, the Better Together Update was released on the Xbox One, Windows 10, VR, and mobile versions of the game, which used the Pocket Edition engine to enable cross-platform play between each of these versions. This version of the game eventually became known as the Bedrock Edition. Shortly after, the Bedrock Edition was also ported to the Nintendo Switch.
On 18 December 2018, the PlayStation 3, PlayStation Vita, Xbox 360, and Wii U versions of Minecraft received their final update and would later become known as Legacy Console Editions.
The PlayStation 4 version of Minecraft was updated in December 2019 and became part of the Bedrock edition, which enabled cross-platform play for users with a free Xbox Live account.
Minecraft: Education Edition
Minecraft: Education Edition is an educational version of the base game, designed specifically for use in educational establishments such as schools, and built on the Bedrock codebase. It is available on Windows 10, macOS, iPadOS and Chrome OS. It includes a Chemistry Resource Pack, free lesson plans on the Minecraft: Education Edition website, and two free companion applications: Code Connection and Classroom Mode.
An initial beta test was carried out between 9 June and 1 November 2016. The full game was then released on Windows 10 and macOS on 1 November 2016. On 20 August 2018, Mojang Studios announced that it would bring Education Edition to iPadOS in Autumn 2018. It was released to the App Store on 6 September 2018. On 27 March 2019, it was announced that the Education Edition would be operated by JD.com in China. On 26 June 2020, an Education Edition Public Beta was made available to Google Play Store compatible Chromebooks. The full game was released to the Google Play Store for Chromebooks on 7 August 2020.
Minecraft China
On 20 May 2016, Minecraft China was announced as a localized edition for China, where it was released under a licensing agreement between NetEase and Mojang. The PC edition was released for public testing on 8 August 2017. The iOS version was released on 15 September 2017, and the Android version was released on 12 October 2017. The PC edition is based on the original Java Edition, while the iOS and Android mobile version is based on the Bedrock Edition. The edition is free-to-play, and had over 300 million players by November 2019.
Other PC versions
Apart from Minecraft: Java Edition, there are other versions of Minecraft for PC, including Minecraft for Windows 10, Minecraft Classic, Minecraft 4K, and a version for the Raspberry Pi.
Minecraft for Windows 10
Minecraft for Windows 10 is exclusive to Microsoft's Windows 10 operating system. The beta release for this version launched on the Windows Store on 29 July 2015.
After nearly one and a half years in beta, Microsoft released version 1.0 of Minecraft for Windows 10 on 19 December 2016. Called the "Ender Update", this release brought new features to this version of Minecraft, such as world templates and add-on packs.
This version has the ability to play with Xbox Live friends, and to play local multiplayer with owners of Minecraft on other Bedrock platforms. Other features include the ability to use multiple control schemes such as a gamepad, keyboard, or touchscreen (for Microsoft Surface and other touchscreen-enabled devices). Virtual reality support has been implemented, as well as the ability to record and take screenshots in-game via the Windows built-in GameDVR.
Minecraft 4K
Minecraft 4K is a simplified version of Minecraft similar to the Classic version that was developed for the Java 4K game programming contest "in way less than 4 kilobytes". The map itself is finite—composed of 64×64×64 blocks—and the same world is generated every time. Players are restricted to placing or destroying blocks, which consist of grass, dirt, stone, wood, leaves, and brick.
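The "same world every time" behaviour comes from deterministic generation: with a fixed seed, the finite map is reproduced identically on every run. This can be sketched as follows (our own Python illustration of the idea, not the contest entry's actual Java code):

```python
# Illustration of deterministic world generation: a fixed seed produces
# the same finite heightmap on every run, mirroring how Minecraft 4K
# regenerates the identical 64x64x64 world each time it starts.
import random

SIZE = 64  # the 4K map is 64 blocks on each side

def generate_heightmap(seed=0):
    """Return a SIZE x SIZE grid of terrain heights from a fixed seed."""
    rng = random.Random(seed)  # seeded RNG -> fully deterministic output
    return [[rng.randint(20, 40) for _ in range(SIZE)] for _ in range(SIZE)]

# Two independent generations with the same seed yield identical terrain.
a = generate_heightmap()
b = generate_heightmap()
print(a == b)  # True
```

A different seed would give a different (but equally repeatable) world; Minecraft 4K simply hard-codes one.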
Raspberry Pi
A version of Minecraft for the Raspberry Pi was officially revealed at Minecon 2012. The Pi Edition is based on Pocket Edition Alpha v0.6.1, with the added ability to use text commands to edit the game world. Players can open the game code and use the Python programming language to manipulate things in the game world. It also includes a scripting API to modify the game, and server software for multiplayer. The game was leaked on 20 December 2012, but was quickly pulled. It was officially released on 11 February 2013. Mojang stopped providing updates to Minecraft: Raspberry Pi Edition in 2016. It is preinstalled on Raspberry Pi OS and can be downloaded for free from the official Minecraft website.
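A script using the Pi Edition's Python API might look like the sketch below. It assumes the `mcpi` library that ships with the Pi Edition (which connects to a running game on port 4711 by default); the `generate_cube` helper is our own illustration, and the actual game calls are commented out since they require a live Pi Edition instance:

```python
# Sketch of scripting the Pi Edition world with the bundled mcpi library.
# generate_cube is a pure helper (our own illustration); the mcpi calls
# below it are commented out because they need a running game instance.

def generate_cube(origin, size):
    """Return the (x, y, z) coordinates of a solid size**3 block cube."""
    ox, oy, oz = origin
    return [(ox + dx, oy + dy, oz + dz)
            for dx in range(size)
            for dy in range(size)
            for dz in range(size)]

# With Pi Edition running, the same coordinates can be placed as stone:
# from mcpi.minecraft import Minecraft
# mc = Minecraft.create()            # connects to localhost:4711 by default
# mc.postToChat("Building a cube")   # message appears in the game chat
# for x, y, z in generate_cube((0, 0, 0), 3):
#     mc.setBlock(x, y, z, 1)        # block id 1 is stone

coords = generate_cube((0, 0, 0), 3)
print(len(coords))  # 27
```

This kind of small, immediate world manipulation is what made the Pi Edition popular as a teaching tool for beginning programmers.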
Music
Minecraft music and sound effects were produced by German musician Daniel Rosenfeld, better known as C418. The background music in Minecraft is instrumental ambient music. On 4 March 2011, Rosenfeld released a soundtrack titled Minecraft – Volume Alpha; it includes most of the tracks featured in Minecraft, as well as other music not featured in the game. Kirk Hamilton of Kotaku chose the music in Minecraft as one of the best video game soundtracks of 2011. On 9 November 2013, Rosenfeld released the second official soundtrack, titled Minecraft – Volume Beta, which includes the music that was added in later versions of the game. A physical release of Volume Alpha, consisting of CDs, black vinyl, and limited-edition transparent green vinyl LPs, was issued by indie electronic label Ghostly International on 21 August 2015. In addition to Rosenfeld's work, other composers have contributed tracks to the game since release, including Samuel Åberg, Gareth Coker, Lena Raine, and Kumi Tanioka.
Variants
For the tenth anniversary of the game's release, Mojang remade a version of Minecraft Classic in JavaScript and made it available to play online. It functions much the same as creative mode, allowing players to build and destroy any and all parts of the world either alone or in a multiplayer server. Environmental hazards such as lava do not damage players, and some blocks function differently since their behavior was later changed during development.
Around 2011, prior to Minecraft's full release, there had been collaboration between Mojang and The Lego Group to make a Lego brick-based Minecraft game to be called Brickcraft. This would have modified the base Minecraft game to use Lego bricks, which meant adapting the basic 1x1 block to account for larger pieces typically used in Lego sets. Persson had worked on the preliminary version of this game, which he had named "Project Rex Kwon Do" based on the joke from Napoleon Dynamite. Lego had greenlit the project to go forward, and while Mojang had put two developers on the game for six months, they later opted to cancel the project, as Mojang felt that the Lego Group were too demanding on what they could do, according to Mojang's Daniel Kaplan. The Lego Group had considered buying out Mojang to complete the game, but at this point Microsoft made its offer to buy the company for over $2 billion. According to the Lego Group's Ronny Scherer, the company was not yet sure of the potential success of Minecraft at this point and backed off from acquisition after Microsoft brought this offer to Mojang.
Virtual reality
Early on, Persson planned to support the Oculus Rift with a port of Minecraft. However, after Facebook acquired Oculus in 2014, he abruptly canceled the plans, noting "Facebook creeps me out." A community-made modification known as Minecraft VR was developed in 2016 to provide virtual reality support to Minecraft: Java Edition, oriented towards Oculus Rift hardware. A fork of the Minecraft VR modification known as Vivecraft ported the mod to OpenVR, and is oriented towards supporting HTC Vive hardware. On 15 August 2016, Microsoft launched official Oculus Rift support for Minecraft on Windows 10. Upon its release, the Minecraft VR mod was discontinued by its developer due to trademark complaints issued by Microsoft, and the makers of the Minecraft VR modification endorsed Vivecraft in its place, citing its Rift support and improvements over the original mod. Also available is a Gear VR version, titled Minecraft: Gear VR Edition. Windows Mixed Reality support was added in 2017. On 7 September 2020, Mojang Studios announced that the PlayStation 4 version of the game would be getting PlayStation VR support in the same month. The only officially supported VR versions of Minecraft are the PlayStation 4 version, Minecraft: Gear VR Edition and Minecraft for Windows 10 for Oculus Rift and Windows Mixed Reality headsets.
Reception
Early versions of Minecraft received critical acclaim, praising the creative freedom it grants players in-game, as well as the ease of enabling emergent gameplay. Critics have praised Minecraft's complex crafting system, commenting that it is an important aspect of the game's open-ended gameplay. Most publications were impressed by the game's "blocky" graphics, with IGN describing them as "instantly memorable". Reviewers also liked the game's adventure elements, noting that the game creates a good balance between exploring and building. The game's multiplayer feature has been generally received favorably, with IGN commenting that "adventuring is always better with friends". Jaz McDougall of PC Gamer said Minecraft is "intuitively interesting and contagiously fun, with an unparalleled scope for creativity and memorable experiences". It has been regarded as having introduced millions of children to the digital world, insofar as its basic game mechanics are logically analogous to computer commands.
IGN was disappointed about the troublesome steps needed to set up multiplayer servers, calling it a "hassle". Critics also said that visual glitches occur periodically. Despite its release out of beta in 2011, GameSpot said the game had an "unfinished feel", adding that some game elements seem "incomplete or thrown together in haste".
A review of the alpha version, by Scott Munro of the Daily Record, called it "already something special" and urged readers to buy it. Jim Rossignol of Rock, Paper, Shotgun also recommended the alpha of the game, calling it "a kind of generative 8-bit Lego Stalker". On 17 September 2010, gaming webcomic Penny Arcade began a series of comics and news posts about the addictiveness of the game. The Xbox 360 version was generally received positively by critics, but did not receive as much praise as the PC version. Although reviewers were disappointed by the lack of features such as mod support and content from the PC version, they acclaimed the port's addition of a tutorial and in-game tips and crafting recipes, saying that they make the game more user-friendly.
Minecraft: Pocket Edition initially received mixed reviews from critics. Although reviewers appreciated the game's intuitive controls, they were disappointed by the lack of content. The inability to collect resources and craft items, as well as the limited types of blocks and lack of hostile mobs, were especially criticized. After updates added more content, Pocket Edition started receiving more positive reviews. Reviewers complimented the controls and the graphics, but still noted a lack of content.
Sales
Minecraft surpassed a million purchases less than a month after entering its beta phase in early 2011. At the same time, the game had no publisher backing and has never been commercially advertised except through word of mouth, and various unpaid references in popular media such as the Penny Arcade webcomic. By April 2011, Persson estimated that Minecraft had made €23 million (US$33 million) in revenue, with 800,000 sales of the alpha version of the game, and over 1 million sales of the beta version. In November 2011, prior to the game's full release, Minecraft beta surpassed 16 million registered users and 4 million purchases. By March 2012, Minecraft had become the 6th best-selling PC game of all time. The game went on to sell 17 million copies on PC, becoming the best-selling PC game of all time, and approximately 60 million copies across all platforms, making it the best-selling video game of all time. On 25 February 2014, the game reached 100 million registered users. By May 2019, 180 million copies had been sold across all platforms, making it the single best-selling video game of all time. The free-to-play Minecraft China version had over 300 million players by November 2019. By April 2021, Minecraft had sold more than 238 million copies worldwide.
The Xbox 360 version of Minecraft became profitable within the first day of the game's release in 2012, when the game broke the Xbox Live sales records with 400,000 players online. Within a week of being on the Xbox Live Marketplace, Minecraft sold upwards of a million copies. GameSpot announced in December 2012 that Minecraft had sold over 4.48 million copies since the game debuted on Xbox Live Arcade in May 2012. In 2012, Minecraft was the most purchased title on Xbox Live Arcade; it was also the fourth most played title on Xbox Live based on average unique users per day. The Xbox 360 version went on to sell 12 million copies. In addition, Minecraft: Pocket Edition has reached a figure of 21 million in sales. The PlayStation 3 version sold one million copies in five weeks. The release of the game's PlayStation Vita version boosted Minecraft sales by 79%, outselling both PS3 and PS4 debut releases and becoming the largest Minecraft launch on a PlayStation console. The PS Vita version sold 100,000 digital copies in Japan within the first two months of release, according to an announcement by SCE Japan Asia. By January 2015, 500,000 digital copies of Minecraft were sold in Japan across all PlayStation platforms, with a surge in primary school children purchasing the PS Vita version. Minecraft helped improve Microsoft's total first-party revenue by $63 million for the 2015 second quarter.
The game, including all of its versions, had over 112 million monthly active players by September 2019. On its 11th anniversary in May 2020, the company announced that Minecraft had reached over 200 million copies sold across platforms with over 126 million monthly active players. By April 2021, the number of active monthly users had climbed to 140 million.
Awards
In July 2010, PC Gamer listed Minecraft as the fourth-best game to play at work. In December of that year, Good Game selected Minecraft as their choice for Best Downloadable Game of 2010, Gamasutra named it the eighth best game of the year as well as the eighth best indie game of the year, and Rock, Paper, Shotgun named it the "game of the year". Indie DB awarded the game the 2010 Indie of the Year award as chosen by voters, in addition to two out of five Editor's Choice awards for Most Innovative and Best Singleplayer Indie. It was also awarded Game of the Year by PC Gamer UK. The game was nominated for the Seumas McNally Grand Prize, Technical Excellence, and Excellence in Design awards at the March 2011 Independent Games Festival and won the Grand Prize and the community-voted Audience Award. At the Game Developers Choice Awards 2011, Minecraft won awards in the categories for Best Debut Game, Best Downloadable Game and Innovation Award, winning every award for which it was nominated. It also won GameCity's video game arts award. On 5 May 2011, Minecraft was selected as one of the 80 games that would be displayed at the Smithsonian American Art Museum as part of The Art of Video Games exhibit that opened on 16 March 2012. At the 2011 Spike Video Game Awards, Minecraft won the award for Best Independent Game and was nominated in the Best PC Game category. In 2012, at the British Academy Video Games Awards, Minecraft was nominated in the GAME Award of 2011 category and Persson received The Special Award. In 2012, Minecraft XBLA was awarded a Golden Joystick Award in the Best Downloadable Game category, and a TIGA Games Industry Award in the Best Arcade Game category. In 2013, it was nominated as the family game of the year at the British Academy Video Games Awards. Minecraft Console Edition won the award for TIGA Game Of The Year in 2014. In 2015, the game placed 6th on USgamer's The 15 Best Games Since 2000 list.
In 2016, Minecraft placed 6th on Time's The 50 Best Video Games of All Time list.
Minecraft was nominated for the 2013 Kids' Choice Awards for Favorite App, but lost to Temple Run. It was nominated for the 2014 Kids' Choice Awards for Favorite Video Game, but lost to Just Dance 2014. The game later won the award for the Most Addicting Game at the 2015 Kids' Choice Awards. In addition, the Java Edition was nominated for "Favorite Video Game" at the 2018 Kids' Choice Awards, while the game itself won the "Still Playing" award at the 2019 Golden Joystick Awards, as well as the "Favorite Video Game" award at the 2020 Kids' Choice Awards.
Cultural impact
In September 2019, The Guardian classified Minecraft as the best video game of (the first two decades of) the 21st century, and in November 2019 Polygon called the game the "most important game of the decade" in its 2010s "decade in review". In December 2019, Forbes gave Minecraft a special mention in a list of the best video games of the 2010s, stating that the game is "without a doubt one of the most important games of the last ten years." In June 2020, Minecraft was inducted into the World Video Game Hall of Fame.
Minecraft is recognized as one of the first successful games to use an early access model to draw in sales prior to its full release version to help fund development. As Minecraft helped to bolster indie game development in the early 2010s, it also helped to popularize the use of the early access model in indie game development.
Social media sites such as YouTube, Facebook, and Reddit played a significant role in popularizing Minecraft. Research conducted by the University of Pennsylvania's Annenberg School of Communication showed that one-third of Minecraft players learned about the game via Internet videos. In 2010, Minecraft-related videos began to gain influence on YouTube, often made by commentators. The videos usually contain screen-capture footage of the game and voice-overs. Common coverage in the videos includes creations made by players, walkthroughs of various tasks, and parodies of works in popular culture. By May 2012, over four million Minecraft-related YouTube videos had been uploaded. Some popular commentators have received employment at Machinima, a gaming video company that owns a highly watched entertainment channel on YouTube. The Yogscast is a British company that regularly produces Minecraft videos; their YouTube channel has attained billions of views, and their panel at MineCon 2011 had the highest attendance. Other well-known YouTube personalities include Jordan Maron, who has created many Minecraft parodies, including "Minecraft Style", a parody of the internationally successful single "Gangnam Style" by South Korean rapper Psy. On 14 December 2021, YouTube announced that Minecraft-related videos had received over one trillion views since the game's inception in 2009.
"Herobrine" is an urban legend associated with Minecraft, who first appeared as a single image on 4chan's /v/ board. According to rumors, Herobrine appears in players' worlds and builds strange constructions. However, Mojang has confirmed that Herobrine has never existed in Minecraft, and there are no plans to add Herobrine.
Minecraft has been referenced by other video games, such as Torchlight II, Borderlands 2, Choplifter HD, Super Meat Boy, The Elder Scrolls V: Skyrim, The Binding of Isaac, The Stanley Parable, FTL: Faster Than Light, and Super Smash Bros. Ultimate, the lattermost of which features a downloadable character and stage based on Minecraft. It was also referenced by electronic music artist deadmau5 in his performances. A simulation of the game was featured in Lady Gaga's "G.U.Y." music video. The game is also referenced heavily in "Informative Murder Porn", the second episode of the seventeenth season of the animated television series South Park, as well as in "Luca$", the seventeenth episode of the 25th season of the animated sitcom The Simpsons. The song "Minecraft is for Everyone" by Starbomb was likewise inspired by Minecraft.
Due to the rapid development of Minecraft, many individual versions of the game have been lost to time. A community group named Omniarchive aims to archive these lost versions and have successfully found many of them.
Applications
The possible applications of Minecraft have been discussed extensively, especially in the fields of computer-aided design and education. In a panel at MineCon 2011, a Swedish developer discussed the possibility of using the game to redesign public buildings and parks, stating that rendering using Minecraft was much more user-friendly for the community, making it easier to envision the functionality of new buildings and parks. In 2012, a member of the Human Dynamics group at the MIT Media Lab, Cody Sumter, said: "Notch hasn't just built a game. He's tricked 40 million people into learning to use a CAD program." Various software has been developed to allow virtual designs to be printed using professional 3D printers or personal printers such as MakerBot and RepRap.
In September 2012, Mojang began the Block By Block project in cooperation with UN Habitat to create real-world environments in Minecraft. The project allows young people who live in those environments to participate in designing the changes they would like to see. Using Minecraft, the community has helped reconstruct the areas of concern, and citizens are invited to enter the Minecraft servers and modify their own neighborhood. Carl Manneh, Mojang's managing director, called the game "the perfect tool to facilitate this process", adding "The three-year partnership will support UN-Habitat's Sustainable Urban Development Network to upgrade 300 public spaces by 2016." Mojang signed Minecraft building community, FyreUK, to help render the environments into Minecraft. The first pilot project began in Kibera, one of Nairobi's informal settlements, and is in the planning phase. The Block By Block project is based on an earlier initiative started in October 2011, Mina Kvarter (My Block), which gave young people in Swedish communities a tool to visualize how they wanted to change their part of town. According to Manneh, the project was a helpful way to visualize urban planning ideas without necessarily having training in architecture. The ideas presented by the citizens were a template for political decisions.
In 2013, Stuart Duncan, known online as AutismFather, started a server for autistic children and their families, called Autcraft. The server was created because public servers had many bullies and trolls who upset the autistic children. It was constantly monitored to help players and prevent bullying. The server had a whitelist that only allowed approved players, of which there were 8,000 worldwide in 2017. The server had a unique ranking system based on the attributes of the player, offering titles such as "Player of the Week" and "Caught Being Awesome". The server was called "one of the best places on the Internet" and was the subject of a research paper.
In April 2014, the Danish Geodata Agency generated all of Denmark at full scale in Minecraft based on their own geodata. This is possible because Denmark is one of the flattest countries, with the highest point at (ranking as the country with the 30th-smallest elevation span), where the limit in default Minecraft is around above in-game sea level.
Taking advantage of the game's accessibility where other websites are censored, the non-governmental organization Reporters Without Borders have used an open Minecraft server to create the Uncensored Library, a repository within the game of journalism by authors from countries (including Egypt, Mexico, Russia, Saudi Arabia and Vietnam) who have been censored and arrested, such as Jamal Khashoggi. The neoclassical virtual building was created over about 250 hours by an international team of 24 people.
Despite its unpredictable nature, Minecraft has also become a popular game for speedrunning, in which players time themselves from being dropped into a new world generated by a random seed to reaching "The End" and defeating the boss known as the Ender Dragon. While some speedrunners seek the fastest time possible, relying on the luck of the seed to optimize conditions, others look to repeat this process consistently so as to maintain a comparatively fast average completion time across all runs.
Education
Minecraft has also been used in educational settings. In 2011, an educational organization named MinecraftEdu was formed with the goal of introducing Minecraft into schools. The group works with Mojang to make the game affordable and accessible to schools. The version of Minecraft through MinecraftEdu includes unique features to allow teachers to monitor the students' progress within the virtual world, such as receiving screenshots from students to show completion of a lesson. In September 2012, MinecraftEdu said that approximately 250,000 students around the world had access to Minecraft through the company. A wide variety of educational activities involving the game have been developed to teach students various subjects, including history, language arts and science. For example, one teacher built a world consisting of various historical landmarks for students to learn about and explore. Another teacher created a large-scale representation of an animal cell within Minecraft that students could explore to learn how the cell's functions work. Great Ormond Street Hospital has been recreated in Minecraft, and it has been proposed that patients could use it to virtually explore the hospital before they actually visit. Minecraft may also prove to be an innovation in computer-aided design (CAD), as it offers an outlet for collaboration in design and could have an impact on the industry.
With the introduction of redstone blocks to represent electrical circuits, users have been able to build functional virtual computers within Minecraft. Such virtual creations include a working hard drive, an 8-bit virtual computer, and emulators for the Atari 2600 (including one by YouTube personality SethBling) and Game Boy Advance. In at least one instance, a mod has been created to use this feature to teach younger players how to program within a language set by the virtual computer within a Minecraft world.
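These builds are possible because a redstone torch switches off when the block it is attached to is powered, so it behaves as a NOR gate — and NOR is functionally complete, meaning any logic circuit (and hence a computer) can be assembled from it. A rough sketch of that idea (our own illustration, not code from any of the projects mentioned above):

```python
# A redstone torch is lit only when none of the inputs powering its
# supporting block are on -- i.e. it computes NOR. Because NOR is
# functionally complete, NOT, OR, and AND (and everything built from
# them) can be derived from torches alone. Illustration only.

def torch(*inputs):
    """A redstone torch: lit only when no input is powered (NOR)."""
    return not any(inputs)

def NOT(a):
    return torch(a)                   # a single-input torch inverts

def OR(a, b):
    return torch(torch(a, b))         # inverting a NOR yields OR

def AND(a, b):
    return torch(torch(a), torch(b))  # NOR of inverted inputs (De Morgan)

print(AND(True, True), AND(True, False), OR(False, True), NOT(False))
# True False True True
```

Chaining thousands of such gates in-game is what yields adders, memory cells, and ultimately the virtual computers and emulators described above.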
In September 2014, the British Museum in London announced plans to recreate its building along with all exhibits in Minecraft in conjunction with members of the public.
Microsoft and non-profit Code.org teamed up to offer Minecraft-based games, puzzles, and tutorials aimed at helping teach children how to program; by March 2018, Microsoft and Code.org reported that more than 85 million children had used their tutorials.
Clones
After the release of Minecraft, other video games were released with various similarities to Minecraft, and some were described as being "clones". Examples include Ace of Spades, CastleMiner, CraftWorld, FortressCraft, Terraria, and Total Miner. David Frampton, designer of The Blockheads, reported that one failure of his 2D game was the "low resolution pixel art" that too closely resembled the art in Minecraft, which resulted in "some resistance" from fans. A homebrew adaptation of the alpha version of Minecraft for the Nintendo DS, titled DScraft, has been released; it has been noted for its similarity to the original game considering the technical limitations of the system.
In response to Microsoft's acquisition of Mojang and their Minecraft IP, various developers announced further clone titles developed specifically for Nintendo's consoles, as they were the only major platforms to not officially receive Minecraft at the time. These clone titles include UCraft (Nexis Games), Cube Life: Island Survival (Cypronia), Discovery (Noowanda), Battleminer (Wobbly Tooth Games), Cube Creator 3D (Big John Games), and Stone Shire (Finger Gun Games). The fears of fans proved unfounded, however, as official Minecraft releases on Nintendo consoles eventually resumed.
Markus Persson made another similar game, Minicraft, for a Ludum Dare competition in 2011.
Minecon
Minecon is an official fan convention dedicated to Minecraft, held annually. The first was held in November 2011 at the Mandalay Bay Hotel and Casino in Las Vegas. The event included the official launch of Minecraft; keynote speeches, including one by Persson; building and costume contests; Minecraft-themed breakout classes; exhibits by leading gaming and Minecraft-related companies; commemorative merchandise; and autograph and picture times with Mojang employees and well-known contributors from the Minecraft community.
Notes
References
Further reading
External links
Minecraft Wiki
2011 video games
Android (operating system) games
Construction and management simulation games
Early access video games
Independent Games Festival winners
Indie video games
IOS games
Java platform games
Linux games
MacOS games
Microsoft games
Multiplayer and single-player video games
New Nintendo 3DS games
Nintendo 3DS eShop games
Nintendo Network games
Nintendo Switch games
Open-world video games
PlayStation 3 games
PlayStation 4 games
PlayStation Network games
PlayStation Vita games
Survival video games
Video games developed in Sweden
Video games with cross-platform play
Video games with user-generated gameplay content
Video games with voxel graphics
Video games using procedural generation
Video games scored by Gareth Coker
Video games scored by Lena Raine
Video games scored by Kumi Tanioka
Virtual reality games
Virtual world communities
Wii U eShop games
Wii U games
Windows games
Windows Phone games
Xbox 360 games
Xbox 360 Live Arcade games
Xbox One games
Convio

Convio was a software company based in Austin, Texas, in the USA, with offices in Washington, DC and Emeryville, CA. Convio provided internet marketing and business management applications tailored specifically for non-profit organizations, and virtually all of its customers were charities, educational establishments, and political advocacy groups. Convio was acquired by Blackbaud in May 2012 for $325 million.
Early history
Convio was founded in November 1999 by Vinay Bhagat and Dave Crooke, using venture capital funding led by Austin Ventures. The inspiration for the company was the inefficient pen and paper administration of telethons then used by PBS and NPR stations to raise funds from the public. In contrast to many of its then competitors, who focused merely on facilitating donations via credit card transactions on the internet, the vision for Convio was to empower the non-profit sector to make use of the internet by building a commoditized suite of software tools to allow charities and other non-profit organizations to cultivate relationships with their supporters and other constituents via the Internet, and with the goal that these tools could be used directly by the organizations' communications professionals, with the simplicity of desktop word processing software, rather than only being usable by webmasters and IT administrators. The original "placeholder" name of Convio was ShowSupport.com, to evoke the idea of people showing support for causes they believe in.
The name "Convio" (pronounced con-VEE-oh) is false Latin for "with vision" and was chosen to be abstract, short, memorable, and because the internet domain name convio.com was not previously registered. The company logo is intended to represent both a human eye for the concept of vision, and an enthusiastic caryatid with upraised arms for the concept of people giving their support.
Because of the high complexity of operating Internet server systems, and the relatively low levels of IT staffing that non-profit organizations can afford compared to commercial companies, Convio made the then-bold decision to be one of the first companies to operate its software products solely on its own infrastructure, a business model now known as "software as a service" (SaaS). Customers of SaaS companies do not receive copies of the software to run on their own computers, but instead access the software over the Internet using a web browser.
In January 2007, Convio acquired GetActive Software, then the second largest eCRM provider for non-profit organizations in the USA. GetActive Software was founded by Bill Pease, Sheeraz Haji, Tom Krackeler, Ken Leiserson and other former staff members at the Environmental Defense Fund who had been doing work within that organization to enable internet activism via the then nascent World Wide Web. They realized that many non-profit organizations could benefit from the same technology, and set up a spin-off commercial software company to further develop and distribute it, with Environmental Defense as a shareholder. The original "placeholder" name of GetActive was Locus Pocus, a pun on the concept of using address mapping technology to connect a visitor to a web site with the political representatives for their district.
Convio had its IPO on April 29, 2010 on the NASDAQ.
Convio was acquired by Blackbaud in May 2012 for $325 million.
Controversy
From 2005 to 2007, Convio was the target of a boycott campaign led by the well-known left-wing political bloggers John Aravosis and Markos Moulitsas on their respective blogs Americablog and Daily Kos, because of the company's stance of serving clients with views from across the political spectrum.
At least one of the blogs may have softened its stance, with the Daily Kos blog linking to political advocacy campaigns powered by Convio's software.
Acquisition
On January 17, 2012, Convio announced that it had entered into an agreement to be acquired by Blackbaud, Inc. (NASDAQ: BLKB) at a price of $16.00 per share, a 49% premium over its market price.
On May 7, 2012, the acquisition was completed, and Convio became fully owned by Blackbaud.
References
External links
Companies based in Austin, Texas
CRM software companies
Companies formerly listed on the Nasdaq |
168680 | https://en.wikipedia.org/wiki/Teradata | Teradata | Teradata Corporation is an American software company that provides database and analytics-related software, products, and services. The company was formed in 1979 in Brentwood, California, as a collaboration between researchers at Caltech and Citibank's advanced technology group.
Overview
Teradata is an enterprise software company that develops and sells database analytics software subscriptions. The company provides three main services: business analytics, cloud products, and consulting. It operates in North and Latin America, Europe, the Middle East, Africa and Asia.
Teradata is headquartered in San Diego, California, and has additional major U.S. locations in Atlanta and San Francisco, where its data center research and development is housed. It is publicly traded on the New York Stock Exchange (NYSE) under the stock symbol TDC. Steve McMillan has served as the company's president and chief executive officer since 2020. The company reported $1.836 billion in revenue, with a net income of $129 million, and 8,535 employees globally, as of February 9, 2020.
History
The concept of Teradata grew from research at California Institute of Technology and from the discussions of Citibank's advanced technology group in the 1970s. In 1979, the company was incorporated in Brentwood, California by Jack E. Shemer, Philip M. Neches, Walter E. Muir, Jerold R. Modes, William P. Worth, Carroll Reed and David Hartke. Teradata released its DBC/1012 specialized database computer in 1984. In 1990, the company acquired Sharebase, originally named Britton Lee. In September 1991, AT&T Corporation acquired NCR Corporation, which announced the acquisition of Teradata for about $250 million in December. Teradata built the first system, over 1 terabyte, for Wal-Mart in 1992.
NCR acquired Strategic Technologies & Systems in 1999, and appointed Stephen Brobst as chief technology officer of Teradata Solutions Group. In 2000, NCR acquired Ceres Integrated Solutions and its customer relationship management software for $90 million, as well as Stirling Douglas Group and its demand chain management software. Teradata acquired financial management software from DecisionPoint in 2005. In January 2007, NCR announced Teradata would become an independent public company, led by Michael F. Koehler. The new company's shares started trading in October.
In April 2016, a product line called IntelliFlex was announced. Victor L. Lund became the chief executive on May 5, 2016.
On May 7, 2020, Teradata announced the appointment of Steve McMillan as president and chief executive officer, effective June 8, 2020.
Acquisitions and divestitures
Teradata has acquired several companies since becoming an independent public company. In March 2008, Teradata acquired professional services company Claraview, which previously had spun out software provider Clarabridge. Teradata acquired column-oriented DBMS vendor Kickfire in August 2010, followed by the marketing software company Aprimo for about $550 million in December. In March 2011, the company acquired Aster Data Systems for about $263 million. Teradata acquired email direct marketing company eCircle in May 2012, which was merged into the Aprimo business.
In 2014, Teradata acquired the assets of Revelytix, a provider of information management products, for a reported $50 million. In September, Teradata acquired Hadoop service firm Think Big Analytics. In December, Teradata acquired RainStor, a company specializing in online data archiving on Hadoop. Teradata acquired Appoxxee, a mobile marketing software as a service provider, for about $20 million in January 2015, followed by the Netherlands-based digital marketing company FLXone in September.
In July 2016, the marketing applications division, using the Aprimo brand, was sold to private equity firm Marlin Equity Partners for about $90 million; under CEO John Stammen, Aprimo moved its headquarters to Chicago and absorbed Revenew Inc., which Marlin had also acquired.
Teradata acquired Big Data Partnership, a service company based in the UK, on July 21, 2016. In July 2017, Teradata acquired StackIQ, maker of the Stacki software.
Technology and products
Teradata offers three main services to its customers: cloud and hardware-based data warehousing, business analytics, and consulting services. In September 2016, the company launched Teradata Everywhere, which allows users to submit queries against public and private databases. The service uses massively parallel processing across both its physical data warehouse and cloud storage, including managed environments such as Amazon Web Services, Microsoft Azure, VMware, and Teradata's Managed Cloud and IntelliFlex. Teradata offers customers both hybrid cloud and multi-cloud storage. In March 2017, Teradata introduced Teradata IntelliCloud, a secure managed cloud for data and analytic software as a service. IntelliCloud is compatible with Teradata's data warehouse platform, IntelliFlex. The Teradata Analytics Platform was unveiled in 2017.
Big data
Teradata began to use the term "big data" in 2010. CTO Stephen Brobst attributed the rise of big data to "new media sources, such as social media." The increase in semi-structured and unstructured data gathered from online interactions prompted Teradata to form the "Petabyte club" in 2011 for its heaviest big data users.
The rise of big data resulted in many traditional data warehousing companies updating their products and technology. For Teradata, big data prompted the acquisition of Aster Data Systems in 2011 for the company's MapReduce capabilities and ability to store and analyze semi-structured data.
Vantage
In October 2018 Teradata started calling the product line Vantage in their advertising. Vantage was planned as various analytics engines on a core relational database, including the Aster graph database, and a machine learning engine. Spark and TensorFlow engines were announced as planned but never released.
Vantage can be deployed across public clouds, on-premises, and commodity infrastructure. Vantage provides storage and analysis for multi-structured data formats.
References
External links
Companies listed on the New York Stock Exchange
Data warehousing products
Companies based in San Diego
Software companies based in California
NCR Corporation
Big data companies
Computer companies established in 1979
Software companies established in 1979
American companies established in 1979
1979 establishments in California
Corporate spin-offs
Computer companies of the United States
Software companies of the United States
2007 initial public offerings |
8522456 | https://en.wikipedia.org/wiki/Tianhua%20GX-1C | Tianhua GX-1C | The Sinomanic Tianhua GX-1C is a specially tailored subnotebook for primary and secondary school students in the People's Republic of China. It uses the Loongson I (Longxin) CPU. The device is designed for use as an educational aid and to introduce young students to computers.
History
Sichuan-based Sinomanic Co., Ltd., launched four models of low-cost personal computers in 2006. Sinomanic is the second maker of Loongson-based personal computers in mainland China. The firm has announced a sales target of 500,000 to 1 million units for their Tianhua GX-1C model. The GX-1C uses a Loongson I (Longxin) CPU similar to that used in the commercially distributed Longmeng Municator from a Chinese startup named Yellow Sheep River. Loongson (Longxin) translates as "Dragon Chip".
Sinomanic has created four different models for distribution to specific markets. The Tianhua GX-1 and Tianhua GX-1C are marketed for education. The Tian Yan GX-2 is a rural computer marketed for farmers; it is designed to be used with a television instead of a standard computer monitor, much like the Commodore 64 and Amiga, and its operating system is tailored for use with a television display. The Tianlong GX-3 is a more robust machine marketed for business. The Tiansheng GX-4, with slightly less memory and processor speeds varying between 400 MHz and 600 MHz, is marketed for multimedia users. All desktop units support VGA and television video output.
Sinomanic model comparison table
Prices of desktop models do not include displays.
Technology
Hardware GX-1C
The reference hardware specifications as of 28 October 2006 are:
CPU: Loongson 1 (Godson) GS32I, a 32-bit RISC CPU with 32 KB cache
Clock speed: 400 MHz
Display: 8.4-inch LCD, 1280 x 1024, TFT 24-bit color (Xiancun)
SDRAM: 128MB PC100 MCom
Hard disk: IDE notebook 40GB
IDE controller: 32-bit PCI IT8212, two IDE channels, supports four IDE devices
RAID controller: supports PIO Mode 0–4, DMA mode 0–2, Ultra DMA Mode 0–6, embedded CPU RAID function
Ethernet: 10/100M
Modem port: ADSL
Host interface: integrated USB 1.1
Audio: AC'97 2.2, 48 kHz maximum sampling frequency (supports voice communication); stereo 2-channel; jacks for external stereo speakers and microphones, line-out, and mic-in
Speakers: stereo built in
Keyboard: integrated
Power source: unspecified notebook power supply
Software
According to a translated FAQ on the Sinomanic website and an article on Sanhaostreet.com, Sinomanic technicians were having trouble getting Loongson's proprietary RISC instruction sets to work with Debian Linux and Windows CE. However, the use of the MIPS architecture allowed them to correct many of their compatibility issues.
Versions of Debian Linux and Windows CE were to be available for the units that shipped initially. But because these operating systems could not be optimized for Loongson's unique RISC instruction set, Sinomanic continued development on its own second-generation microkernel operating system, codenamed Future Alpha. Future Alpha has apparently been customized for compatibility with the 32-bit Loongson I (Longxin) processor, and Sinomanic also tested an updated version of Future Alpha for the 64-bit Loongson II microprocessor. The company said that the 64-bit version of its Future Alpha operating system had successfully passed testing on January 19, 2007.
In a press release dated 3 March 2007, Sinomanic demonstrated a custom version of Debian Linux running on the Tian Yan GX-2 their rural computer. The unit appeared to be using GNOME as its graphical user interface. Software applications shown were: the Mozilla Firefox browser, an unnamed text editor, Pidgin instant messenger, a PDF reader, and the Evolution email client.
See also
Comparison of netbooks a comparison including similar models
One Laptop per Child, OLPC
Digital Textbook a South Korean Project that intends to distribute tablet notebooks to elementary school students.
Lemote also called Dragon Dream is a low-cost computer designed and made in China
VIA OpenBook, a project functionally similar to the OLPC
Simputer is an earlier project to construct cheap handheld computers in India
VIA pc-1 Initiative a project of VIA Technologies to help bridge the digital divide.
Edubuntu: A free Linux distribution designed specifically for use in schools and home classrooms
References
Laptops
Linux-based devices
Science and technology in the People's Republic of China |
33515297 | https://en.wikipedia.org/wiki/Duqu | Duqu | Duqu is a collection of computer malware discovered on 1 September 2011, thought to be related to the Stuxnet worm and to have been created by Unit 8200. Duqu has exploited Microsoft Windows's zero-day vulnerability. The Laboratory of Cryptography and System Security (CrySyS Lab) of the Budapest University of Technology and Economics in Hungary discovered the threat, analysed the malware, and wrote a 60-page report naming the threat Duqu. Duqu got its name from the prefix "~DQ" it gives to the names of files it creates.
Nomenclature
The term Duqu is used in a variety of ways:
Duqu malware is a variety of software components that together provide services to the attackers. Currently this includes information-stealing capabilities and, in the background, kernel drivers and injection tools. Part of this malware is written in an unknown high-level programming language, dubbed the "Duqu framework". It is not C++, Python, Ada, Lua, or many other checked languages. However, it is suggested that Duqu may have been written in C with a custom object-oriented framework and compiled in Microsoft Visual Studio 2008.
Duqu flaw is the flaw in Microsoft Windows that is used in malicious files to execute malware components of Duqu. Currently one flaw is known, a TrueType-font related problem in .
Operation Duqu is the campaign in which Duqu has been used for as-yet-unknown goals. The operation might be related to Operation Stuxnet.
Relationship to Stuxnet
Symantec, building on the report by the CrySyS team managed by Dr Thibault Gainche, continued the analysis of the threat, which it called "nearly identical to Stuxnet, but with a completely different purpose", and published a detailed technical paper on it with a cut-down version of the original lab report as an appendix. Symantec believes that Duqu was created by the same authors as Stuxnet, or that the authors had access to the source code of Stuxnet. The worm, like Stuxnet, has a valid but abused digital signature, and collects information to prepare for future attacks. Mikko Hyppönen, Chief Research Officer for F-Secure, said that Duqu's kernel driver, , was so similar to Stuxnet's that F-Secure's back-end system thought it was Stuxnet. Hyppönen further said that the key used to make Duqu's own digital signature (observed in only one case) was stolen from C-Media, located in Taipei, Taiwan. The certificates were due to expire on 2 August 2012 but were revoked on 14 October 2011, according to Symantec.
Another source, Dell SecureWorks, reports that Duqu may not be related to Stuxnet. However, there is considerable and growing evidence that Duqu is closely related to Stuxnet.
Experts compared the similarities and found three points of interest:
The installer exploits zero-day Windows kernel vulnerabilities.
Components are signed with stolen digital keys.
Duqu and Stuxnet are both highly targeted and related to the nuclear program of Iran.
Microsoft Word zero-day exploit
Like Stuxnet, Duqu attacks Microsoft Windows systems using a zero-day vulnerability. The first-known installer (AKA dropper) file recovered and disclosed by CrySyS Lab uses a Microsoft Word document that exploits the Win32k TrueType font parsing engine and allows execution. The Duqu dropper relates to font embedding, and thus relates to the workaround to restrict access to , which is a TrueType font parsing engine if the patch released by Microsoft in December 2011 is not yet installed.
Microsoft identifier for the threat is MS11-087 (first advisory issued on 13 November 2011).
Purpose
Duqu looks for information that could be useful in attacking industrial control systems. Its purpose is not to be destructive; the known components only try to gather information. However, based on the modular structure of Duqu, a special payload could be used to attack any type of computer system by any means, and thus cyber-physical attacks based on Duqu might be possible. Use on personal computer systems has, however, been found to delete all recent information entered on the system, and in some cases the total contents of the computer's hard drive.
Internal communications of Duqu have been analysed by Symantec, but the actual method by which it replicates inside an attacked network is not yet fully known. According to McAfee, one of Duqu's actions is to steal digital certificates (and corresponding private keys, as used in public-key cryptography) from attacked computers to help future viruses appear as secure software. Duqu uses a 54×54 pixel JPEG file and encrypted dummy files as containers to smuggle data to its command and control center. Security experts are still analyzing the code to determine what information the communications contain. Initial research indicates that the original malware sample automatically removes itself after 36 days (the malware stores this setting in configuration files), which limits its detection.
Key points are:
Executables developed after Stuxnet using the Stuxnet source code that have been discovered.
The executables are designed to capture information such as keystrokes and system information.
Current analysis shows no code related to industrial control systems, exploits, or self-replication.
The executables have been found in a limited number of organizations, including those involved in the manufacturing of industrial control systems.
The exfiltrated data may be used to enable a future Stuxnet-like attack or might already have been used as basis for the Stuxnet attack.
Command and control servers
Some of the command and control servers of Duqu have been analysed. It seems that the people running the attack had a predilection for CentOS 5.x servers, leading some researchers to believe that they had a zero-day exploit for it. Servers are scattered across many countries, including Germany, Belgium, the Philippines, India and China. Kaspersky has published multiple blog posts on the command and control servers.
See also
Cyber electronic warfare
Cyber security standards
Cyberwarfare in the United States
Cyberweapon
Flame (malware)
List of cyber attack threat trends
Mahdi (malware)
Moonlight Maze
Operation High Roller
Operation Merlin
Proactive Cyber Defence
Stars virus
Titan Rain
United States Cyber Command
Unit 8200
References
Rootkits
Privilege escalation exploits
Cryptographic attacks
Exploit-based worms
Cyberwarfare
2011 in computing
Cyberwarfare in Iran
Cyberattacks on energy sector
Hacking in the 2010s |
48839312 | https://en.wikipedia.org/wiki/Chipspeech | Chipspeech | Chipspeech is a vocal synthesizer software which was created by Plogue with the goal of recreating 1980s synthesizers.
About
The software is used for creating vocals for use within music. Chipspeech is designed to produce vintage-style vocals from the synthesizers used by the music industry in the 1980s, with a cut-off date of 1989 technology. The vocals, therefore, are not meant to sound realistic and are better suited to sound experimentation. It works as a text-to-speech system: users type the lyrics in and receive instant playback, a capability beyond the original sound chips the software's vocals are based on. The software is as simple to use as Vocaloid. Though English and Japanese come as standard, other languages can be created by direct entry of syllables. Though human-like vocals can be achieved, the results are always machine-like rather than man-like. It is capable of different synthesis methods and re-samplers. In addition, version 1.032 of the software added a new "Speak and Spell" program, creating the circuit bending feature.
Chipspeech itself was created as a result of research for Chipsounds by Plogue in the 2000s. David Viens would often collect sound chips even when there was no need for them. This obsession eventually led to the creation of the Chipspeech software, after he spent years hacking, protoboard making, probing, and reverse engineering the speech chips. He noted that the software's main goal was to be a singing emulator and not a text-to-speech program. The source data of each vocal is 8 kHz or 10 kHz. Despite all their effort, the project came to a halt until Hubert Lamontagne joined Plogue with knowledge of phonetics and digital signal processing; Hubert took interest in creating a vintage-sounding synthesizer and designed it to work beyond being a sound library.
It originally came with 7 "characters" upon purchase; more vocals have been added since and continue to be added. These characters come with their own backstories and are each based on a sound synthesizer. Recreation of these voices was done with permission from their respective license holders. Plogue itself gained rights to the speech data from three TI-99/4A games (Alpiner, Parsec and Moon Mine) and the internal vocabulary of the TI Speech Device. The process of gaining rights for the vocals took over 10 years, as the company did not want to disrespect the copyright holders, even when met with issues such as a license holder having gone bankrupt. And while the technology was easy to emulate, the data needed for the emulation was not.
In January 2016, Plogue announced that Hubert Lamontagne had found a way to improve quality. On 9 February, Version 1.066 was released. This fixed bugs with Deeklatt and Otto Mozer. Voice improvements to Dandy 704 and Bert Gotrax were scheduled for the next release and were updated in 1.072. Some vocals such as Dandy 704 are restricted by how far they can be improved. In addition, Chipspeech will be receiving the ability to talk as well as sing in its next major update. Chipspeech also was exported to Japan during June 2016.
Version 1.5 was released on 16 September 2016, adding talk capabilities, a growl adjustment, and two new vocals, "Rotten.ST" and "CiderTalk'84", based on 16-bit era vocals.
In 2017, the Voder and Software Automatic Mouth were announced as coming to the software.
Official Albums
An official album was created featuring the software. The album is titled "chipspeech AUTOMATE SONGS .01" and includes a cover of the song Stakker Humanoid using Otto Mozer, whose vocal is an emulation of the same synthesizer used for the samples taken from the arcade game Berzerk.
Characters
The vocals are split between a number of characters; in addition, Daisy from Alter/Ego can be imported into the software:
Bert Gotrax: This is a vocal based on the Votrax SC-01 device. Bert Gotrax is one mischievous little brat. Like a post-modern Pinocchio story gone wrong, he has escaped from his creator’s workshop before the execution of a patch to fix his foul mouth. He now roams in the streets and back alleys, a skilled parkour athlete and wanted graffiti artist.
Lady Parsec: She is based on the TI-99/4A plug-in speech synthesizer module. Lady Parsec also has an HD vocal called, Lady Parsec HD. Lady Parsec is the omnipotent mother-of-all space traffic controllers, she’s the Benevolent Dictator of Her own matriarchal galactic queendom. Her soothing voice can be heard anywhere at Her will in any of Her spacecraft. She’s watching over you and She’ll direct you with a hint of witty sarcasm. She has two vocals compared to the other characters; "Lady Parsec" and "Lady Parsec HD".
Otto Mozer: Based on the TSI S14001A. Otto Mozer is a mad scientist who has roboticized himself in order to achieve his plans for world domination. He moves around in his levitating exopod. Since his face is permanently connected to a breathing apparatus, he has built himself a voice generator to communicate. He left out all vocal intonations, since he deemed them to be unnecessary to his purposes.
Dandy 704: based on the IBM 704 computer. Dandy 704 is a 19th-century gentleman who decided to escape death by having his brain mummified and transferred to an internal vat. His body is steam-powered and entirely mechanical except for his voice box (which is in dire need of a repair). He is a world class explorer, brash, charismatic and loud-mouthed. He’s also an incredible romantic womanizer and will offer to marry anyone (despite his lack of a space carriage). Do not believe his fantastic stories. They are not true. The real ones are much crazier (and would incriminate him).
Dee Klatt: Based on Dectalk. Dee Klatt is a wise and mild-mannered android. Long ago, they were unjustly accused and hunted across the galaxy, and became a master in disguise out of necessity, changing into a child, a woman, a young or an old man. Nobody now remembers their true form.
Spencer AL2: Based on the SP0256-AL2 chip. Spencer AL2 is a self-aware pure AI. He creates his appearance and voice by channeling and bending energy as waves. Be careful! If you upset him, his anger has the power of an EMP bomb.
Terminal 99: also based on the TI-99/4A plug-in speech synthesizer module. Terminal 99 is an extremely old TI 99/4A computer, decked out with tons of mysterious extra hardware expansions including the famous TI voice module internally retrofitted, buzzing and whirling. It runs a chat program that was developed to win a Turing test contest. The terminal easily won, but the team who developed the chat program and the jury who tested it have gone completely insane and now worship Terminal 99 like a god. Legend has it that the computer has absorbed their souls…
VOSIM: based on a Standard DAC. He was the additional 8th vocal that was released on May 27, 2015. VOSIM was an early prototype sociable android companion. The project was scrapped because his voice was not intelligible enough. He spent many years wandering the electronic wastelands alone, until the day he decided he could try to be someone, get friends, and not let his weaknesses deter his desire to sing.
CiderTalk'84: Based on the original MacInTalk 1.0. Dr. CiderTalk’84 is no ordinary doctor… while normal doctors have a PhD in medicine, economics, physics or other similar fields, CiderTalk has a PhD in everything. He does know everything, and will constantly remind you of it. Every time something goes well, rest assured he will make sure you know that it was because of him. While everybody agrees that he is a genius, nobody can cite an example of something that CiderTalk actually did. Long lasting rumors explaining his powerful charisma would entail secret nano tech that invades everyone (and everything) around him.
Rotten.ST: based on Atari ST's STSPEECH.TOS. Rotten.ST has been in and out of British prisons since starting the cyberpunk movement in the early '80s. Although nobody really knows exactly what he was arrested for, we can only imagine it was for good reasons. Rotten.ST has a pathological relationship with any form of authority, justice, law or order. You can be sure you will find him loudly decrying the system, clamoring against police brutality, trying to start a riot (often succeeding). He is always the first one in the paddy wagon. When things get rough, his signature blunt weapon is a large microphone stand, which he will gladly swing towards anyone in uniform. His intentionally buzzy and electronic voice and his looks, which consist of showing off as many electronic parts as possible, are all about subverting social norms.
SAM: Based on Software Automatic Mouth's synthesizer technology.
Voder: Based on the Bell Labs Voder.
Reception
Reception to the software was mostly positive. It won 3 Computer Music awards; Editor's Choice, Performance and Innovation. The software was described as a polished product at their MusicRadar review and noted as "tons of fun to use".
AskAudio, in their "Voice of the Machines" review, focused on the fact that with the rise of autotuning software a human is always required, whereas Chipspeech allowed a nostalgic approach to vocal synthesis, with its resulting vocals coming purely from a computer. It listed the positives of the software as "incredibly unique, fairly easy to use, sounds excellent, affordable" but noted that its main weakness was how the software strained the CPU.
CDM, who had been given exclusive early access to the software, also highlighted how "boring" modern synthesizers had become and focused on the "fun" that the software provided. One of its highlighted merits of the software was how rare some historical chips it aimed to recreate had become.
In August 2016, Chipspeech topped the virtual instrument top 25 rankings at Sonicwire, owned by Crypton Future Media, beating their Vocaloids products such as Hatsune Miku which normally dominated their rankings.
Further reading
Chipspeech Diary part 1
Chipspeech Diary part 2
References
External links
Electronic musical instruments
Singing software synthesizers |
17044655 | https://en.wikipedia.org/wiki/Strategic%20enrollment%20management | Strategic enrollment management | Strategic Enrollment Management [SEM] is a crucial element of planning for new growth at a university or college as it concerns both academic program growth and facilities needs. Emerging as a response to fluctuations in student markets and increasing pressure on recruitment strategies in higher education, SEM focuses on achieving student success throughout their entire life cycle with an institution while increasing enrollment numbers and stabilizing institutional revenues. SEM strategies accomplish the fulfillment of an institution's mission and student experience goals by strategically planning enrollments through recruiting, retaining and graduating specific cohorts of students followed by targeted practices to build a lifelong affinity with the institution among alums. In addition to a focus on student achievement, SEM also fundamentally understands the student as holding the role of a learner in addition to a customer and citizen of the global community.
Originating at Boston College in the 1970s as a reaction to fluctuating student enrollment markets and increased pressure on recruitment strategies, SEM was created and developed into a critical pillar of the institutional planning process. Although originating as an American concept and practice, the same need to respond to demographic shifts and increasing competitiveness among institutions can be seen in other nations with substantial footholds in higher education, such as Canada. The critical issues Canadian post-secondary institutions face are similar enough in nature to those at American institutions that applications can be borrowed across the border.
The functional aspects that a SEM operation considers and works to advance and optimize can include:
Characteristics of the institution and the world around it
Institutional mission and priorities
Optimal enrollments (number, quality, diversity)
Student recruitment
Student fees and Financial aid
Transition
Retention
Graduation Rates
Institutional marketing
Career counseling and development
Academic advising
Curricular and program development
Methods of program delivery
Quality of campus life and facilities
Evaluation of assessment outcomes of institutional initiatives
History
The evolution of Strategic Enrollment Management (SEM) resulted from the work of a number of people and organizations since schools became concerned with this area in the early 1970s. Boston College (through the work of Jack Maguire in 1976) and Northwestern University (through the work of William Ihlanfeldt) began to use research and specific communication strategies to increase enrollment at their schools. The idea of using research data to target communication and marketing efforts produced positive enrollment numbers and drew several entrepreneurs into the field of managing enrollments. Jack Maguire subsequently created and named the first enrollment management model for recruitment and retention of students.
In 1975, Stuart Weiner and Drs. Ron and Dori Ingersoll formed one of the earliest teams that addressed enrollment issues from the point of view of the total enrollment effort. Gradually, the Ingersolls and others made enrollment efforts more effective by strategically addressing schools, data, academic offerings, and student services—and included retention in the overall effort.
In the late 1970s, the practice of Strategic Enrollment Management (SEM) was born. Since that time, organizations such as Noel-Levitz, Williams and Associates, and the American Association of Collegiate Registrars and Admissions Officers (AACRAO) have continued to refine the concept.
It was not until 1990, however, that AACRAO established the term "Strategic Enrollment Management" and started the first annual SEM conference, focused specifically on pressing issues and effective practices in Strategic Enrollment Management. Beginning in 2009, AACRAO developed the first SEM Award of Excellence to recognize outstanding achievement and visionary leadership in Strategic Enrollment Management.
Dr. Bob Bontrager, Sr. Director of AACRAO Consulting and SEM Initiatives edited some of the first books on SEM:
SEM and Institutional Success: Integrating Enrollment, Finance and Student Access (2008)
Applying SEM at the Community College (2009)
In 2012, Dr. Ron Ingersoll and Dr. Dori Ingersoll, with Dr. Bob Bontrager, co-edited the book Strategic Enrollment Management: Transforming Higher Education. This SEM compendium was published for the higher education profession by AACRAO. EMAS Pro then initiated the industry’s first monthly Strategic Enrollment Management webinar series, as a companion to the Strategic Enrollment Management: Transforming Higher Education book. The Ingersolls serve as primary SEMinar session co-presenters.
In recent years, AACRAO has published additional books on SEM that include:
SEM in Canada: Promoting Student and Institutional Success in Canadian Colleges and Universities (2011)
Strategic Enrollment Management: Transforming Higher Education (2012)
Handbook of Strategic Enrollment Management (2014)
SEM Core Concepts: Building Blocks for Institutional and Student Success (2017)
Software
As enrollment moved from a focus on marketing to including the whole institution, the need grew for software that offered better ways to communicate and work with students and parents. In the mid-1980s, the Ingersoll Group and Tom Williams developed the first software to effectively manage the process for students from inquiry to enrollment. This was The Enrollment Management Action System (EMAS™).
Noel-Levitz had developed Dialogue, a Telecounseling software designed for higher education. When Noel-Levitz merged with Williams Crockett, the telecounseling package was merged into EMAS to create EMASPlus—a software system that addressed recruitment.
In 1998, Education Systems Inc. purchased EMAS products to add to software they developed for work with financial aid. At this point, the emphasis was still primarily on marketing and communication efforts. Education Systems, Inc. (doing business as EMAS™ Pro) expanded the original higher-education CRM software into a resource to address the total commitment of schools to manage their enrollment from a strategic point of view. Since then, a number of vendors serving Higher Education have emerged with CRM systems such as Ellucian, Unifyed.com, Jenzabar, and Target X.
Student CRM by Data Harvesting is also an increasingly popular student recruitment solution for universities and colleges.
Common misconceptions
According to Bontrager and Kerlin, common misconceptions, which are sometimes barriers to implementing or advancing strategic enrollment management within an institution, are that strategic enrollment management is:
a quick fix
solely an organizational structure
an enhanced admissions and marketing operation
a financial drain on the institution
an administrative function separate from the academic plan and mission of the institution
SEM Structures
SEM operations can take a variety of forms and structures at colleges and universities that prioritize SEM as part of their planning process, ranging from committees made up of key stakeholders from across the institution to stand-alone functional units with a senior leader and staff responsible for SEM priorities. When determining which SEM format will be most effective for a given institution, a number of key considerations can be taken into account:
Residential mix of the institution – campuses with a greater share of students residing in or around campus housing tend to devote more funds to student programming, campus life initiatives, orientation, and health and safety.
Mandate of the SEM operation – the functional nature of SEM priorities is typically distinct from that of student services units, so when SEM is championed by a senior student services official there is considerable potential for efficiencies and unity in a common purpose of serving students holistically.
Funding of SEM initiatives – whether or not there is a reliance on government or tuition funding or other means of financial support can determine the direction of SEM operations.
Reporting relationships – the direct or indirect relationship of the senior administrator leading SEM initiatives and the President of the institution.
Personnel qualifications – having competent and capable employees in the existing complement of staff in order to respond to the unique demands of SEM initiatives.
Notes
Western Carolina University Office of Institutional Research and Planning
Inside Higher Education Enrollment Managers Struggle With Image
Thomas Williams, “Enrollment Strategies to Serve Tomorrow’s Students,” AGB Priorities, 21, spring 2003
South East Missouri State University Strategic Enrollment Management
Bob Bontrager, C. Kerlin, "Strategic Enrollment Management: Core Concepts and Strategies." November 2004. Orlando, FL: American Association of Collegiate Registrars and Admissions Officers
References
Educational administration |
52376843 | https://en.wikipedia.org/wiki/Tramar%20Sutherland | Tramar Sutherland | Tramar Sutherland (born 23 March 1989) is a Canadian professional basketball player for the Hamilton Honey Badgers of the Canadian Elite Basketball League (CEBL).
College career
After playing high school basketball at Father Henry Carr Catholic School in Toronto, Ontario, Sutherland began his college career at South Plains College. He was a two-year starter and was named to the All-Defensive Team twice. Sutherland transferred to the University of Arkansas at Little Rock to play with the Trojans, and he graduated from the school in 2012. With the Trojans, Sutherland advanced to the NCAA Men's Division I Basketball Championship. In 2012, Sutherland was part of the UALR Trojans team that posted the best regular-season record in the West Division.
Professional career
In 2014, Sutherland started his professional career in the National Basketball League of Canada with the Moncton Miracles. In his second year as a professional, he played with the Niagara River Lions and was awarded the league's Iron Man player award for the season. Sutherland then became a member of the Kitchener Waterloo Titans of the National Basketball League of Canada. In the 2018–19 season, he averaged 10 points, 4.2 rebounds, and 1.2 assists per game and was named to the All-Canadian Third Team. He was one of four returning players for the KW Titans in 2019.
References
1989 births
Living people
Basketball people from Ontario
Canadian expatriate basketball people in the United States
Canadian men's basketball players
KW Titans players
Little Rock Trojans men's basketball players
Moncton Miracles players
Niagara River Lions players
South Plains Texans basketball players
Basketball players from Toronto
Shooting guards |
24782516 | https://en.wikipedia.org/wiki/Computational%20musicology | Computational musicology | Computational musicology is an interdisciplinary research area between musicology and computer science. Computational musicology includes any disciplines that use computers in order to study music. It includes sub-disciplines such as mathematical music theory, computer music, systematic musicology, music information retrieval, digital musicology, sound and music computing, and music informatics. As this area of research is defined by the tools that it uses and its subject matter, research in computational musicology intersects with both the humanities and the sciences. The use of computers to study and analyze music generally began in the 1960s, although musicians had been using computers to assist them in composing music since the 1950s. Today, computational musicology encompasses a wide range of research topics dealing with the multiple ways music can be represented.
History
The history of computational musicology generally began in the middle of the 20th century.
Generally, the field is considered to be an extension of a much longer history of intellectual inquiry in music that overlaps with science, mathematics, technology, and archiving.
1960s
Early approaches to computational musicology began in the early 1960s and were fully developed by 1966. At this point in time, data entry was done primarily with paper tape or punch cards and was computationally limited. Because of the high cost of this research, funded projects often tended to ask global questions and look for global solutions. One of the earliest symbolic representation schemes was the Digital Alternate Representations of Music, or DARMS. The project was supported by Columbia University and the Ford Foundation between 1964 and 1976, and was one of the first large-scale projects to develop an encoding scheme that incorporated completeness, objectivity, and encoder-directedness. Other work at this time at Princeton University, chiefly driven by Arthur Mendel and implemented by Michael Kassler and Eric Regener, advanced the Intermediary Musical Language (IML) and Music Information Retrieval (MIR) languages, which later fell out of use in the late 1970s. The 1960s also marked a period of bibliographic initiatives such as the Répertoire International de Littérature Musicale (RILM), created by Barry Brook in 1967.
1970s
Unlike the global research interests of the 1960s, goals in computational musicology in the 1970s were driven by accomplishing specific tasks. This task-driven motivation led to the development of MUSTRAN for music analysis, led by Jerome Wenker and Dorothy Gross at Indiana University. Similar projects, such as SCORE (SCORE-MS) at Stanford University, were developed primarily for printing purposes.
1980s
The 1980s were the first decade to move away from centralized computing toward personal computing. This transfer of resources led to growth in the field as a whole. John Walter Hill began developing a commercial program called Savy PC, meant to help musicologists analyze lyrical content in music. Hill's work uncovered patterns in the conversion of sacred and secular texts in which only the first lines of the texts were changed. In keeping with the global questions that dominated the 1960s, Helmuth Schaffrath began his Essen Folk Collection, encoded in the Essen Associative Code (ESAC), which has since been converted to Humdrum notation. Using software developed at the time, Sandra Pinegar examined 13th-century music theory manuscripts in her doctoral work at Columbia University to gain evidence on the dating and authorship of texts. The 1980s also introduced MIDI notation.
Methods
Computational musicology can generally be divided into three main branches, corresponding to the three ways music can be represented by a computer: sheet music data, symbolic data, and audio data. Sheet music data refers to the human-readable, graphical representation of music via symbols. Examples of this branch of research include digitizing scores ranging from 15th-century neumatic notation to contemporary Western music notation. Like sheet music data, symbolic data refers to musical notation in a digital format, but symbolic data is not human-readable and is encoded so that it can be parsed by a computer. Examples of this type of encoding include piano roll, kern, and MIDI representations. Lastly, audio data refers to recordings of the acoustic wave: the sound that results from changes in the oscillations of air pressure. Examples of this type of encoding include MP3 or WAV files.
Sheet Music Data
Sheet music is meant to be read by the musician or performer. Generally, the term refers to the standardized nomenclature used by a culture to document their musical notation. In addition to music literacy, musical notation also demands choices from the performer. For example, the notation of Hindustani ragas will begin with an alap that does not demand a strict adherence to a beat or pulse, but is left up to the discretion of the performer. The sheet music notation captures the sequence of gestures the performer is encouraged to make within a musical culture, but is by no means fixed to those performance choices.
Symbolic Data
Symbolic data refers to musical encoding that can be parsed by a computer. Unlike sheet music data, any digital data format may be regarded as symbolic, because the system representing it is generated from a finite series of symbols. Symbolic data typically does not encode the performative choices required of a performer. Two of the most common software choices for analyzing symbolic data are David Huron's Humdrum Toolkit and Michael Scott Cuthbert and Christopher Ariza's music21.
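As a simple illustration of symbolic data (written for this article, not code from the Humdrum Toolkit or music21), the Python sketch below treats MIDI note numbers as plain integers and converts them to readable pitch names, following the common convention that middle C is note number 60:

```python
# Symbolic music data: MIDI note numbers are plain integers a program
# can parse directly, unlike the graphical symbols of sheet music.
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi_to_pitch(note):
    octave = note // 12 - 1       # MIDI convention: note 0 is C-1
    return f"{NAMES[note % 12]}{octave}"

melody = [60, 62, 64, 65, 67]     # a C-major fragment, encoded symbolically
print([midi_to_pitch(n) for n in melody])   # ['C4', 'D4', 'E4', 'F4', 'G4']
```

Because the encoding is a finite series of symbols, transformations such as transposition reduce to simple integer arithmetic on the note numbers.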
Audio Data
Audio data is generally conceptualized as existing on a continuum of features ranging from lower to higher level audio features. Low-level audio features refer to loudness, spectral flux, and cepstrum. Mid-level audio features refer to pitch, onsets, and beats. Examples of high-level audio features include style, artist, mood, and key.
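As an illustration, a low-level feature such as loudness can be computed directly from raw samples. The Python sketch below is a simplified, hypothetical example that measures the root-mean-square (RMS) loudness of a synthesized sine tone rather than a decoded MP3 or WAV file:

```python
# Low-level audio feature extraction: RMS loudness from raw samples.
import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

sample_rate = 8000
tone = [math.sin(2 * math.pi * 440 * t / sample_rate)   # 440 Hz sine wave
        for t in range(sample_rate)]                     # one second of audio
silence = [0.0] * sample_rate

print(round(rms(tone), 3))   # 0.707, the RMS of a unit-amplitude sine
print(rms(silence))          # 0.0 -- silence carries no energy
```

Higher-level features such as key or mood are typically inferred from many such low- and mid-level measurements rather than computed in one step.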
Applications
Music databases
One of the earliest applications in computational musicology was the creation and use of musical databases. Input, usage and analysis of large amounts of data can be very troublesome using manual methods while usage of computers can make such tasks considerably easier.
Analysis of music
Different computer programs have been developed to analyze musical data. Data formats vary from standard notation to raw audio. Analysis of formats that are based on storing all properties of each note, for example MIDI, were used originally and are still among the most common methods. Significant advances in analysis of raw audio data have been made only recently.
Artificial production of music
Different algorithms can be used to both create complete compositions and improvise music. One of the methods by which a program can learn improvisation is analysis of choices a human player makes while improvising. Artificial neural networks are used extensively in such applications.
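A minimal sketch of this idea is a first-order Markov chain that picks each note based only on the previous one; the transition table below is invented for illustration, although in principle such a table could be learned from a human player's recorded choices:

```python
# Toy algorithmic improvisation with a first-order Markov chain.
import random

# Hypothetical transition table: which notes may follow each note.
transitions = {
    "C": ["D", "E", "G"],
    "D": ["C", "E"],
    "E": ["D", "F", "G"],
    "F": ["E", "G"],
    "G": ["C", "E", "F"],
}

def improvise(start, length, seed=0):
    rng = random.Random(seed)      # fixed seed for a repeatable demo
    note, melody = start, [start]
    for _ in range(length - 1):
        note = rng.choice(transitions[note])
        melody.append(note)
    return melody

print(improvise("C", 8))   # an 8-note phrase; every step follows the table
```

Neural-network approaches, mentioned above, model far longer contexts than the single previous note used here.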
Historical change and music
One developing sociomusicological theory in computational musicology is the "Discursive Hypothesis" proposed by Kristoffer Jensen and David G. Hebert, which suggests that "because both music and language are cultural discourses (which may reflect social reality in similarly limited ways), a relationship may be identifiable between the trajectories of significant features of musical sound and linguistic discourse regarding social data." According to this perspective, analyses of "big data" may improve our understandings of how particular features of music and society are interrelated and change similarly across time, as significant correlations are increasingly identified within the musico-linguistic spectrum of human auditory communication.
Non-western music
Strategies from computational musicology are recently being applied for analysis of music in various parts of the world. For example, professors affiliated with the Birla Institute of Technology in India have produced studies of harmonic and melodic tendencies (in the raga structure) of Hindustani classical music.
Research
The Répertoire International des Sources Musicales (RISM) database is one of the world's largest music databases, containing over 700,000 references to musical manuscripts. Anyone can use its search engine to find compositions.
The Centre for History and Analysis of Recorded Music (CHARM) has developed the Mazurka Project, which offers "downloadable recordings . . . analytical software and training materials, and a variety of resources relating to the history of recording."
Computational musicology in popular culture
Research from computational musicology occasionally becomes the focus of popular culture and major news outlets. Examples include reporting in The New Yorker on how musicologists Nicholas Cook and Craig Sapp, while working at the Centre for the History and Analysis of Recorded Music (CHARM) at the University of London, discovered fraudulent recordings by pianist Joyce Hatto. On the 334th birthday of Johann Sebastian Bach, Google celebrated the occasion with a Google Doodle that allowed individuals to enter their own score into the interface and then have a machine learning model called Coconet harmonize the melody.
See also
Algorithmic composition
Computer models of musical creativity
Music cognition
Cognitive musicology
Musicology
Artificial neural network
MIDI
JFugue
References
External links
Computational Musicology: A Survey on Methodologies and Applications
Towards the compleat musicologist?
Transforming Musicology: An AHRC Digital Transformations project
Musicology
Computational fields of study |
11008139 | https://en.wikipedia.org/wiki/Email%20encryption | Email encryption | Email encryption is encryption of email messages to protect the content from being read by entities other than the intended recipients. Email encryption may also include authentication.
Email is prone to the disclosure of information. Most emails are encrypted during transmission, but they are stored in clear text, making them readable by third parties such as email providers. By default, popular email services such as Gmail and Outlook do not enable end-to-end encryption. By means of some available tools, persons other than the designated recipients can read the email contents.
Email encryption can rely on public-key cryptography, in which users can each publish a public key that others can use to encrypt messages to them, while keeping secret a private key they can use to decrypt such messages or to digitally encrypt and sign messages they send.
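The idea can be illustrated with a toy RSA-style example in Python; the primes below are deliberately tiny and insecure, whereas real email encryption systems use keys of 2048 bits or more:

```python
# Toy illustration of the public-key idea behind email encryption.
# The key pair is far too small to be secure; it only shows the roles
# of the public key (encrypt) and the private key (decrypt).

def make_toy_keypair():
    p, q = 61, 53                 # two small demonstration primes
    n = p * q                     # modulus, shared by both keys
    phi = (p - 1) * (q - 1)
    e = 17                        # public exponent
    d = pow(e, -1, phi)           # private exponent: e*d = 1 (mod phi)
    return (e, n), (d, n)

def encrypt(m, public_key):
    e, n = public_key
    return pow(m, e, n)           # c = m^e mod n

def decrypt(c, private_key):
    d, n = private_key
    return pow(c, d, n)           # m = c^d mod n

public, private = make_toy_keypair()
c = encrypt(42, public)           # anyone with the public key can encrypt
assert decrypt(c, private) == 42  # only the private-key holder can decrypt
```

A sender publishes the public pair and keeps the private pair secret; signing works in the reverse direction, with the private key producing a signature that anyone can verify with the public key.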
Encryption protocols
With the original design of email protocol, the communication between email servers was in plain text, which posed a huge security risk. Over the years, various mechanisms have been proposed to encrypt the communication between email servers. Encryption may occur at the transport level (aka "hop by hop") or end-to-end. Transport layer encryption is often easier to set up and use; end-to-end encryption provides stronger defenses, but can be more difficult to set up and use.
Transport-level encryption
One of the most commonly used email encryption extensions is STARTTLS. It is a TLS (SSL) layer over the plaintext communication, allowing email servers to upgrade their plaintext communication to encrypted communication. Assuming that the email servers on both the sender and the recipient side support encrypted communication, an eavesdropper snooping on the communication between the mail servers cannot use a sniffer to see the email contents. Similar STARTTLS extensions exist for the communication between an email client and the email server (see IMAP4 and POP3, as stated by RFC 2595). STARTTLS may be used regardless of whether the email's contents are encrypted using another protocol.
The encrypted message is revealed to, and can be altered by, intermediate email relays. In other words, the encryption takes place between individual SMTP relays, not between the sender and the recipient. This has both good and bad consequences. A key positive trait of transport layer encryption is that users do not need to do or change anything; the encryption automatically occurs when they send email. In addition, since receiving organizations can decrypt the email without cooperation of the end user, receiving organizations can run virus scanners and spam filters before delivering the email to the recipient. However, it also means that the receiving organization and anyone who breaks into that organization's email system (unless further steps are taken) can easily read or modify the email. If the receiving organization is considered a threat, then end-to-end encryption is necessary.
The Electronic Frontier Foundation encourages the use of STARTTLS, and has launched the 'STARTTLS Everywhere' initiative to "make it simple and easy for everyone to help ensure their communications (over email) aren’t vulnerable to mass surveillance." Support for STARTTLS has become quite common; Google reports that on Gmail, 90% of incoming email and 90% of outgoing email was encrypted using STARTTLS by July 24, 2018.
Mandatory certificate verification is historically not viable for Internet mail delivery without additional information, because many certificates are not verifiable and few want email delivery to fail in that case. As a result, most email that is delivered over TLS uses only opportunistic encryption. DANE is a proposed standard that makes an incremental transition to verified encryption for Internet mail delivery possible. The STARTTLS Everywhere project uses an alternative approach: they support a “preload list” of email servers that have promised to support STARTTLS, which can help detect and prevent downgrade attacks.
End-to-end encryption
In end-to-end encryption, the data is encrypted and decrypted only at the end points. In other words, an email sent with end-to-end encryption would be encrypted at the source, unreadable to service providers like Gmail in transit, and then decrypted at its endpoint. Crucially, the email would only be decrypted for the end user on their computer and would remain in encrypted, unreadable form to an email service like Gmail, which wouldn't have the keys available to decrypt it. Some email services integrate end-to-end encryption automatically.
Notable protocols for end-to-end email encryption include:
Bitmessage
GNU Privacy Guard (GPG)
Pretty Good Privacy (PGP)
S/MIME
OpenPGP is a data encryption standard that allows end-users to encrypt the email contents. There are various software and email-client plugins that allow users to encrypt the message using the recipient's public key before sending it. At its core, OpenPGP uses a Public Key Cryptography scheme where each email address is associated with a public/private key pair.
OpenPGP provides a way for end users to encrypt email without any support from the server and be sure that only the intended recipient can read it. However, there are usability issues with OpenPGP — it requires users to set up public/private key pairs and make the public keys available widely. Also, it protects only the content of the email, and not metadata — an untrusted party can still observe who sent an email to whom. A general downside of end-to-end encryption schemes—where the server does not have decryption keys—is that it makes server-side search almost impossible, thus impacting usability.
The content of an email can also be end-to-end encrypted by putting it in an encrypted file (using any kind of file encryption tool) and sending that encrypted file as an email attachment.
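A minimal sketch of this approach, using Python's standard email library with hypothetical addresses and placeholder ciphertext bytes, looks like:

```python
# Wrapping an already-encrypted file as an ordinary email attachment.
# The encryption itself happens beforehand with a separate tool; the
# bytes below merely stand in for that ciphertext.
from email.message import EmailMessage

ciphertext = b"\x8f\x1a\x00\x42"  # placeholder for a file encrypted elsewhere

msg = EmailMessage()
msg["From"] = "alice@example.org"      # hypothetical addresses
msg["To"] = "bob@example.org"
msg["Subject"] = "Report (encrypted)"
msg.set_content("The attached file is encrypted; decrypt it with the agreed key.")
msg.add_attachment(ciphertext,
                   maintype="application",
                   subtype="octet-stream",
                   filename="report.enc")
```

The message can then be handed to any mail transport; the provider sees only the opaque attachment bytes, while the headers and body text remain readable.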
Demonstrations
The Signed and Encrypted Email Over The Internet demonstration has shown that organizations can collaborate effectively using secure email. Previous barriers to adoption were overcome, including the use of a PKI bridge to provide a scalable public key infrastructure (PKI) and the use of network security guards checking encrypted content passing in and out of corporate network boundaries to avoid encryption being used to hide malware introduction and information leakage.
Setting up and using email encryption
Transport layer encryption using STARTTLS must be set up by the receiving organization. This is typically straightforward: a valid certificate must be obtained and STARTTLS must be enabled on the receiving organization's email server. To prevent downgrade attacks, organizations can submit their domain to the 'STARTTLS Policy List'.
Most full-featured email clients provide native support for S/MIME secure email (digital signing and message encryption using certificates). Other encryption options include PGP and GNU Privacy Guard (GnuPG). Free and commercial software (desktop application, webmail and add-ons) are available as well.
While PGP can protect messages, it can also be hard to use in the correct way. Researchers at Carnegie Mellon University published a paper in 1999 showing that most people couldn't figure out how to sign and encrypt messages using the current version of PGP. Eight years later, another group of Carnegie Mellon researchers published a follow-up paper saying that, although a newer version of PGP made it easy to decrypt messages, most people still struggled with encrypting and signing messages, finding and verifying other people's public encryption keys, and sharing their own keys.
Because encryption can be difficult for users, security and compliance managers at companies and government agencies automate the process for employees and executives by using encryption appliances and services that automate encryption. Instead of relying on voluntary co-operation, automated encryption, based on defined policies, takes the decision and the process out of the users' hands. Emails are routed through a gateway appliance that has been configured to ensure compliance with regulatory and security policies. Emails that require it are automatically encrypted and sent.
If the recipient works at an organization that uses the same encryption gateway appliance, emails are automatically decrypted, making the process transparent to the user. Recipients who are not behind an encryption gateway then need to take an extra step, either procuring the public key, or logging into an online portal to retrieve the message.
Encrypted email providers
Since 2000, the number of available encrypted email providers has increased significantly. Notable providers include:
Hushmail
Mailfence
ProtonMail
Tutanota
See also
Email authentication
Email privacy
End-to-end encryption
HTTPS
Key (cryptography)
Mailbox provider
Secure Messaging
References
Internet privacy
Email authentication
Public key infrastructure |
31821857 | https://en.wikipedia.org/wiki/Joyce%20Currie%20Little | Joyce Currie Little | Joyce Currie Little is a computer scientist, engineer, and educator. She is a professor and former chairperson in the Department of Computer and Information Sciences at Towson University in Towson, Maryland.
Background and education
She received a B.S. in Mathematics Education from the Northeast Louisiana University in 1957, an M.S. in Applied Mathematics from the San Diego State University in 1963, and a PhD in Educational Administration for Computing Services from the University of Maryland, College Park, in 1984.
Career and achievements
While in graduate school in San Diego, California, Joyce Currie Little worked in the aerospace industry as a computational test engineer. From 1957 to 1960, she developed programs to analyze data from models being tested in a wind tunnel for Convair Aircraft Corporation in San Diego.
After completing her M.S., Little moved to Maryland and accepted a position at Goucher College teaching statistics and managing a computer center. She also began work on her Ph.D. at the University of Maryland, College Park. In 1967, she became the chairperson of the Computer and Information Systems Department at the Community College of Baltimore. She moved to Towson University in Towson, Maryland, in 1981, where she was named Chairperson of the Department of Computer & Information Sciences in 1984. She has been an active member of the Association for Computing Machinery (ACM) for many years, and received their Distinguished Service Award in 1992.
She received the SIGCSE Award for Lifetime Service to the Computer Science Education Community for her contributions to computing in two-year colleges, certification, and professional development.
Research interests
Dr. Little's research interests include metrics and assurance for quality in software engineering and social impact and cyber-ethics for workforce education. She has also been a strong advocate for the role of women in computing. Her current activities include a project on the evaluation of computer ethics courses in the Computer Science major at Towson University, and a project on the social impact of certification on the industry.
Memberships
Fellow, Association for Computing Machinery
Association for Information Technology Professionals
Fellow, American Association for the Advancement of Science
Institute of Electrical and Electronics Engineers
International Society for Technology in Education
Maryland Association for Educational Uses of Computers, Inc.
References
Further reading
Gurer, Denise, "Pioneering Women in Computer Science." ACM SIGCSE Bulletin, Volume 34, Issue 2. ACM Press, 2002.
Little, Joyce Currie, "The Role of Women in the History of Computing." Proceedings, Women and Technology: Historical, Societal, and Professional Perspectives. IEEE International Symposium on Technology and Society, New Brunswick, NJ, July 1999, pp. 202–205.
American women computer scientists
American computer scientists
Towson University faculty
Living people
Computer science educators
Year of birth missing (living people)
21st-century American women |
38452252 | https://en.wikipedia.org/wiki/Quadrilateral%20Cowboy | Quadrilateral Cowboy | Quadrilateral Cowboy is a first-person puzzle-adventure video game by independent developer Blendo Games. The game was released on July 25, 2016, for Microsoft Windows, and on October 1, 2016, for macOS and Linux.
Gameplay
In Quadrilateral Cowboy, the player takes the role of a computer hacker in the 1980s, armed with a "top-of-the-line hacking deck outfitted with a 56.6k modem and a staggering 256k RAM". According to Brendon Chung, the sole developer behind Blendo Games, Quadrilateral Cowboy takes place in the same universe as his previous games, Gravity Bone and Thirty Flights of Loving, and shares the same "blocky" aesthetics.
The game is played from the first-person perspective. The player acts as the hacker overseeing one or more adept agents on missions to infiltrate buildings and steal documents. Each agent has a different set of abilities; in addition to the player's hacker with their Deck, another character has a saw to break through doors, while another can climb and move quickly through levels. The player sees what the agents see through virtual reality goggles that relay their view to the hacker character. When the agents encounter locked doors, cameras, and other security features, the player, as the hacker, must create a program – typing this as code on a physical keyboard – that manipulates the security features and allows the agents to sneak by without setting off alarms. An example shown in an early preview is using the code cam8.off(3) to disable "Camera 8" for 3 seconds; disabling the camera any longer would raise a security alert. As such, the player must write the program with appropriate timing to get through the various obstacles. Over the course of the game, the player gains access to other gadgets, such as a rover that scouts and collects objects; these too can be programmed by the player to aid in the heist.
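The timed-override mechanic can be sketched in Python as follows; this is an illustrative approximation, not the game's own scripting language, and the three-second limit simply mirrors the preview example:

```python
# Sketch of a timed security override: a device like "cam8" can be
# switched off for a short window, and exceeding the window (or asking
# for too long an outage) raises an alarm.

class Device:
    MAX_OUTAGE = 3.0             # assumed limit, mirroring the cam8.off(3) example

    def __init__(self, name):
        self.name = name
        self.off_until = 0.0     # time until which the device is disabled

    def off(self, now, duration):
        if duration > self.MAX_OUTAGE:
            raise RuntimeError("alarm: suspicious outage length")
        self.off_until = now + duration

    def is_watching(self, now):
        return now >= self.off_until

cam8 = Device("cam8")
cam8.off(now=0.0, duration=3.0)   # analogous to the game's cam8.off(3)
print(cam8.is_watching(2.5))      # False -- safe to sneak past
print(cam8.is_watching(3.5))      # True -- the window has closed
```

The puzzle for the player is scheduling the agents' movements so that every obstacle is passed strictly inside its disabled window.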
The heists are planned out in a virtual reality simulator within the game, the Heist Planner. This allows the player, as the hacker, to adjust the control and timing of the agents and intrusions within the system. This area will also provide a primitive in-game tutorial system for the player to learn the hacking mechanics, with instructions shown as sticky notes attached to various objects or on signs held up by characters.
Development
Chung explained that his philosophy for making games is that he likes "to experiment with different genres and try different things", and considers Cowboy to be a "very different direction" from his last game, Thirty Flights of Loving. The concept of the game was born out of Chung's experience with computing in the 1980s, which he considered in sharp contrast with modern computing today; Chung stated "There's something satisfying, something tactile, about punching commands into the computer, slamming the enter key, and mastering this new language." Further, he considered how the computer hacker stereotype is portrayed by Hollywood, and expanded the setting and story in view of that. Chung noted that the representation of hacking in other video games has typically been very simplified, such as color-matching minigames, and wanted to develop a game that explored hacking in more detail. Cowboy was also inspired by Chung's father, a mechanic who would often need to disassemble equipment to figure out how the various parts interacted; Chung wanted the player to take a similar approach in having to analyze the security systems' layouts within the game. Chung mentions Neuromancer, Ghost in the Shell, Snow Crash, Thief, Rainbow Six, and Uplink as influences for the game.
Unlike Blendo's previous games that used a modified id Tech 2 engine, Chung used the id Tech 4 (Doom 3) engine for Cowboy, which provided "a lot more modern functionality" than the earlier engine. Chung had brought the game to the 2012 PAX Prime convention in Seattle; though those that had tried the game were initially concerned about having to learn programming, Chung found that they grasped the language easily and were able to complete the game's puzzles without additional help. Several of these players made comparisons of the game to William Gibson's Neuromancer novel and other cyberpunk themes.
Shortly after the game's release, Blendo released the source code for Quadrilateral Cowboy under the GNU General Public License.
Reception
Quadrilateral Cowboy won the Grand Jury Award at the 2013 IndieCade Festival. It won the Seumas McNally Grand Prize and the Excellence in Design award at the 2017 Independent Games Festival awards, and was named an honorable mention for the Innovation Award at the 2017 Game Developers Choice Awards. It was nominated for the Matthew Crump Cultural Innovation Award at the 2017 SXSW Gaming Awards.
References
External links
Source Code on GitHub
2016 video games
Cyberpunk video games
Hacking video games
Id Tech games
Linux games
MacOS games
Puzzle video games
Seumas McNally Grand Prize winners
Single-player video games
Video games developed in the United States
Windows games
Open-source video games
Commercial video games with freely available source code |
32765003 | https://en.wikipedia.org/wiki/Truecaller | Truecaller | TrueCaller is a smartphone application that offers caller identification, call blocking, flash messaging, call recording (on Android up to version 8), and Internet-based chat and voice calls. It requires users to provide a standard cellular mobile number to register with the service. The app is available for Android and iOS.
History
TrueCaller is developed by True Software Scandinavia AB, a privately held company headquartered in Stockholm, Sweden, founded by Alan Mamedi and Nami Zarringhalam in 2009; most of its employees are based in India.
It was initially launched on Symbian and Windows Mobile on 1 July 2009. It was released for Android and Apple iPhone on 23 September 2009, for BlackBerry on 27 February 2012, for Windows Phone on 1 March 2012, and for Nokia Series 40 on 3 September 2012.
As of September 2012, TrueCaller had five million users performing 120 million searches of the telephone number database every month. As of 22 January 2013, TrueCaller reached 10 million users. As of January 2017, TrueCaller had reached 250 million users worldwide. As of 4 February 2020, it crossed 200 million monthly active users globally, of which 150 million were from India.
On 18 September 2012, TechCrunch announced that OpenOcean, a venture capital fund led by former MySQL and Nokia executives (including Michael Widenius, founder of MySQL), was investing US$1.3 million in TrueCaller to push the service's global reach. TrueCaller said that it intended to use the new funding to expand its footprint in "key markets"—specifically North America, Asia and the Middle East.
In February 2014, TrueCaller received funding from Sequoia Capital, alongside existing investor OpenOcean, TrueCaller chairman Stefan Lennhammer, and an unnamed private investor. It also announced a partnership with Yelp to use Yelp's API data to help identify business numbers when they call a smartphone. In October of the same year, it received further funding from Niklas Zennström's Atomico investment firm and from Kleiner Perkins Caufield & Byers.
On 7 July 2015, TrueCaller launched its SMS app called TrueMessenger exclusively in India. TrueMessenger enables users to identify the sender of SMS messages. This launch was aimed at increasing the company's user base in India which are the bulk of its active users. TrueMessenger was integrated into the TrueCaller app in April 2017.
In December 2019, TrueCaller announced plans to go public in an IPO in 2022. TrueCaller also launched a Covid Hospital Directory in response to the increasing number of COVID-19 infections in India; through this directory, Indian users can find the telephone numbers and addresses of COVID-19 hospitals.
Security and privacy issues
On 17 July 2013, TrueCaller servers were allegedly hacked into by the Syrian Electronic Army. E Hacking News reported the group identified 7 sensitive databases it claimed to have exfiltrated, primarily due to an unmaintained WordPress installation on the servers. Claims made regarding the size of the databases were inconsistent. On 18 July 2013, TrueCaller issued a statement on its blog stating that their website was indeed hacked, but claiming that the attack did not disclose any passwords or credit card information.
In November 2019, India-based security researcher Ehraz Ahmed discovered a security flaw that exposed user data as well as system and location information. TrueCaller confirmed this information and the bug was immediately fixed.
References
External links
Mobile software
Android (operating system) software
BlackBerry software
IOS software
Social networking services
Swedish brands
Windows Phone software
Caller ID |
43352385 | https://en.wikipedia.org/wiki/Adrian%20Kaehler | Adrian Kaehler | Adrian Kaehler is an American scientist, engineer, entrepreneur, inventor and author. He is best known for his work on the OpenCV Computer Vision library, as well as two books on that library.
Early life
Adrian Kaehler was born in 1973. At the age of 14, he enrolled at UC Santa Cruz, studying mathematics, computer science, and physics, graduating at 18 with a Bachelor of Arts degree in physics. He received his Ph.D. at Columbia University in 1998 under professor Norman Christ for his work in lattice gauge theory and on the QCDSP supercomputer project.
QCDSP supercomputer
From 1994 through 1998, Kaehler worked on the QCDSP supercomputer project, one of the first teraflop-scale supercomputers ever built. For this work, Kaehler, along with Norman Christ, Robert Mawhinney, and Pavlos Vranas, was awarded the Gordon Bell Prize in 1998.
2005 DARPA Grand Challenge
In the 2005 DARPA Grand Challenge, Kaehler was on Stanford's winning team with Sebastian Thrun, Mike Montemerlo, Gary Bradski and others. Kaehler designed the computer vision system that contributed to winning the race. Since 2012, the winning vehicle, called "Stanley", has been on display in the Smithsonian Institution in Washington, DC.
Learning OpenCV
Originally published in 2008, Kaehler's book Learning OpenCV (O'Reilly) serves as an introduction to the library and its use. The book continues to be heavily used by both professionals and students. An updated version of the book, which covers OpenCV 3, was published by O'Reilly Media in 2016.
Magic Leap
Kaehler was Vice President of Special Projects at Magic Leap, Inc., a startup company that raised over US$1.4 billion in venture funding from 2014 to 2016. Kaehler left the company in 2016.
Notable publications
Kaehler has publications and patents in a variety of fields:
2016 Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library with Gary Bradski, O'Reilly Media.
2008 Learning OpenCV: Computer vision with the OpenCV library with Gary Bradski, O'Reilly Media.
2006 Stanley: The robot that won the DARPA Grand Challenge, with Sebastian Thrun, Mike Montemerlo, Hendrik Dahlkamp, David Stavens, Andrei Aron, James Diebel, Philip Fong, John Gale, Morgan Halpenny, Gabriel Hoffmann, Kenny Lau, Celia Oakley, Mark Palatucci, Vaughan Pratt, Pascal Stang, Sven Strohband, Cedric Dupont, Lars‐Erik Jendrossek, Christian Koelen, Charles Markey, Carlo Rummel, Joe van Niekerk, Eric Jensen, Philippe Alessandrini, Bob Davies, Scott Ettinger, Gary Bradski, Ara Nefian, Pamela Mahoney. Journal of Field Robotics.
2006 Self-supervised Monocular Road Detection in Desert Terrain. With Hendrik Dahlkamp, David Stavens, Sebastian Thrun, and Gary Bradski.
2005 Learning-based computer vision with intel's open source computer vision library. with Gary Bradski and Vadim Pisarevski.
1999 Status of the QCD Project Dong Chen, Ping Chen, Norman H. Christ, George Tamminga Fleming, Alan Gara, Chulwoo Jung, Adrian L. Kaehler, Yu-bing Luo, Catalin I. Malureanu, Robert D. Mawhinney, John Parsons, Cheng-Zhong Sui, Pavlos M. Vranos, Yuri Zhestkov (Columbia U.), Robert G. Edwards, Anthony D. Kennedy (Florida State U.), Sten Hansen (Fermilab), Gregory W. Kilcup (Ohio State U.), Nucl. Phys. Proc. Suppl. 73, 898.
1998 Toward the Chiral Limit of QCD: Quenched and Dynamical Domain Wall Fermions, Ping Chen, Norman H. Christ, George Tamminga Fleming, Adrian Kaehler, Catalin Malureanu, Robert Mawhinney, Gabriele Siegert, Cheng-zhong Sui, Yuri Zhestkov (Columbia U.), Pavlos M. Vranas (Illinois U., Urbana), in Vancouver 1998, High energy physics, vol. 2, 1802-1808.
References
External links
O'Reilly author page
Partial list of patents
Smithsonian, current home of Stanley
Year of birth missing (living people)
Living people
American computer scientists
Columbia University alumni
University of California, Santa Cruz alumni |
11506957 | https://en.wikipedia.org/wiki/ILWIS | ILWIS | Integrated Land and Water Information System (ILWIS) is a geographic information system (GIS) and remote sensing software for both vector and raster processing. Its features include digitizing, editing, analysis and display of data, and production of quality maps. ILWIS was initially developed and distributed by ITC Enschede (International Institute for Geo-Information Science and Earth Observation) in the Netherlands for use by its researchers and students. Since 1 July 2007, it has been released as free software under the terms of the GPL-2.0-only license.
ILWIS has been used by students, teachers and researchers for more than two decades and is regarded as one of the most user-friendly integrated vector and raster software packages available. It offers powerful raster analysis modules, a high-precision and flexible vector and point digitizing module, a variety of practical tools, and a wide range of user guides and training modules available for download. The current version is ILWIS 3.8.6.
Similar to the GRASS GIS in many respects, ILWIS is currently available natively only on Microsoft Windows. However, a Linux Wine manual has been released.
History
In late 1984, ITC was awarded a grant from the Dutch Ministry of Foreign Affairs, which led to developing a geographic information system (GIS) which could be used as a tool for land use planning and watershed management studies. By the end of 1988, a DOS version 1.0 of the Integrated Land and Water Information System (ILWIS) was released. Two years later, ILWIS was made commercial with ITC establishing a worldwide distributors network. ILWIS 2.0 for Windows was released at the end of 1996, and ILWIS 3.0 by mid-2001. On 1 January 2004, ILWIS 3.2 was released as a shareware (one-month trial offer). Since July 1, 2007, ILWIS has been distributed as an open source software under the GPL-2.0-only license.
Release history
Features
ILWIS uses GIS techniques that integrate image processing capabilities, a tabular database and conventional GIS characteristics.
The major features include:
Integrated raster and vector design
On-screen digitizing
Comprehensive set of image processing and remote sensing tools, including an extensive set of filters, resampling, aggregation, classification, etc.
Orthophoto, image georeferencing, transformation and mosaicing
Advanced modeling and spatial data analysis
3D visualization with interactive zooming, rotation and panning. "Height" information can be added from multiple types of sources and is not limited to DEM information.
Animations of spatial temporal data stacks with the possibility of synchronization between different animations.
Rich map projection and geographic coordinate system library. Optionally custom coordinate systems and on the fly modifications can be added.
Geostatistical analyses, with Kriging for improved interpolation
Import and export using the GDAL/OGR library
Advanced data management
Stereoscopy tools - To create a stereo pair from two aerial photographs
Transparencies at many levels (whole maps, selections, individual elements or properties) to combine different data sources in a comprehensive way.
Various interactive diagramming options: Profile, Cross section visualization, Hovmoller diagrams
Interactive value-dependent presentation of maps (stretching, representation)
Hydrologic Flow Operations
Surface energy balance operations through the SEBS module
GARtrip import - Map Import allows the import of GARtrip Text files with GPS data
Spatial Multiple Criteria Evaluation (SMCE)
Space time Cube. Interactive visualization of multiple attribute spatial temporal data.
DEM operations including iso line generation
Variable Threshold Computation, to help preparing a threshold map for drainage network extraction
Horton Statistics, to calculate the number of streams, the average stream length, the average area of catchments for Strahler stream orders
Georeference editors
Roadmap
The next major version of ILWIS will be based on the ILWIS NG framework (in development). This framework aims to be a connection hub between various heterogeneous data and processing sources (local and remote), integrating them in a consistent way and presenting them in a unified form to end users (at both the programming and user-interface level). The framework will be cross-platform (ILWIS is currently limited to Windows) and will be deployable on mobile devices.
See also
List of GIS software
References
External links
ITC ILWIS site
ILWIS users' website
Free GIS software
Free software programmed in C++
Windows-only free software
Remote sensing software |
1838840 | https://en.wikipedia.org/wiki/Radeon%20R200%20series | Radeon R200 series | The R200 is the second generation of GPUs used in Radeon graphics cards and developed by ATI Technologies. This GPU features 3D acceleration based upon Microsoft Direct3D 8.1 and OpenGL 1.3, a major improvement in features and performance compared to the preceding Radeon R100 design. The GPU also includes 2D GUI acceleration, video acceleration, and multiple display outputs. "R200" refers to the development codename of the initially released GPU of the generation. It is the basis for a variety of other succeeding products.
Architecture
R200's 3D hardware consists of 4 pixel pipelines, each with 2 texture sampling units. It has 2 vertex shader units and a legacy Direct3D 7 TCL unit, marketed together as Charisma Engine II. It is ATI's first GPU with programmable pixel and vertex processors, a capability marketed as Pixel Tapestry II and compliant with Direct3D 8.1. R200 has advanced memory bandwidth saving and overdraw reduction hardware called HyperZ II that consists of occlusion culling (hierarchical Z), fast z-buffer clear, and z-buffer compression. The GPU is capable of dual display output (HydraVision) and is equipped with a video decoding engine (Video Immersion II) with adaptive hardware deinterlacing, temporal filtering, motion compensation, and iDCT.
R200 introduced pixel shader version 1.4 (PS1.4), a significant enhancement to prior PS1.x specifications. Notable instructions include "phase", "texcrd", and "texld". The phase instruction allows a shader program to operate on two separate "phases" (2 passes through the hardware), effectively doubling the maximum number of texture addressing and arithmetic instructions, and potentially allowing the number of passes required for an effect to be reduced. This allows not only more complicated effects, but can also provide a speed boost by utilizing the hardware more efficiently. The "texcrd" instruction moves the texture coordinate values of a texture into the destination register, while the "texld" instruction will load the texture at the coordinates specified in the source register to the destination register.
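The instructions named above can be illustrated with a short ps.1.4 fragment. This is an illustrative shader-assembly sketch of the two-phase structure, not code taken from any shipped title:

```
ps.1.4
texld  r0, t0       ; phase 1: sample a texture at interpolated coordinates
texcrd r1.rgb, t1   ; move texture coordinates into a register as data
phase               ; begin phase 2, doubling the available instruction slots
texld  r2, r0       ; dependent read: phase-1 result r0 supplies the coordinates
mul    r0, r0, r2   ; arithmetic combining the two samples
```

The dependent read after the phase marker is the kind of operation that would otherwise force an extra rendering pass on PS1.1-1.3 hardware.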
Compared to R100's 2x3 pixel pipeline architecture, R200's 4x2 design is more robust despite losing one texture unit per pipeline. Each pipeline can now address a total of 6 texture layers per pass. The chip achieves this by using a method known as 'loop-back'. Increasing the number of textures accessed per pass reduces the number of times the card is forced into multi-pass rendering.
The texture filtering capabilities of R200 are also improved over its predecessor. For anisotropic filtering, Radeon 8500 uses a technique similar to that used in R100, but improved with trilinear filtering and some other refinements. However, it is still highly angle-dependent and the driver sometimes forces bilinear filtering for speed. NVIDIA's GeForce4 Ti series offered a more accurate anisotropic implementation, but with a greater performance impact.
R200 has ATI's first implementation of a hardware-accelerated tessellation engine (a.k.a. higher order surfaces), called Truform, which can automatically increase the geometric complexity of 3D models. The technology requires developer support and is not practical for all scenarios. It can undesirably round out models. As a result of very limited adoption, ATI dropped TruForm support from its future hardware.
Performance
Radeon 8500's biggest initial disappointment was its early driver releases. At launch, the card's performance was below expectations and it had numerous software flaws that caused problems with games. The chip's anti-aliasing support was only functional in Direct3D and was very slow. To dampen excitement for the 8500, competitor Nvidia released their Detonator4 driver package on the same day as most websites previewed the Radeon 8500. Nvidia's drivers were of better quality, and they also further boosted the GeForce3's performance.
Several hardware review sites noted anomalies in actual game tests with the Radeon 8500. For example, ATI was detecting the executable "Quake3.exe" and forcing the texture filtering quality to a much lower level than normally produced by the card, presumably in order to improve performance. HardOCP was the first hardware review web site to bring the issue to the community, and proved its existence by renaming all instances of "Quake" in the executable to "Quack."
However, even with the Detonator4 drivers, the Radeon 8500 was able to outperform the GeForce3 (which the 8500 was intended to compete against) and in some circumstances its faster revision, the Ti500, the higher clocked derivative Nvidia had rolled out in response to the R200 project. Later, driver updates helped to further close the performance gap between the 8500 and the Ti500, while the 8500 was also significantly less expensive and offered additional multimedia features such as dual-monitor support. Though the GeForce3 Ti200 did become the first DirectX 8.0 card to offer 128 MB of video memory, instead of the common 64 MB norm for high-end cards of the time, it turned out that the GeForce3's limitations prevented it from taking full advantage of it, while the Radeon 8500 was able to more successfully exploit that potential.
In early 2002, to compete with the cheaper GeForce3 Ti200 and GeForce4 MX 460, ATI launched the slower-clocked 8500 LE (later re-released as the 9100) which became popular with OEMs and enthusiasts due to its lower price, and overclockability to 8500 levels. Though the GeForce4 Ti4600 took the performance crown, it was a top line solution that was priced almost double that of the Radeon 8500 (MSRP of $350–399 versus US$199), so it didn't offer direct competition. With the delayed release of the potentially competitive GeForce4 Ti4200, plus ATI's initiative in rolling out 128 MB versions of the 8500/LE kept the R200 line popular among the mid-high performance niche market. The greater features of the All-In-Wonder (AIW) Radeon 8500 DV and the AIW Radeon 8500 128 MB proved superior to Nvidia's Personal Cinema equivalents which used the faster GeForce 3 Ti500 and GeForce4 Ti4200.
Over the years the dominant market position of GeForce 3/4 meant that not many games targeted the superior DX8.1 PS 1.4 feature level of the R200, but those that did could see significant performance gains over DX8, as certain operations could be processed in one instead of multiple passes. In these cases the Radeon 8500 may even compete with the newer GeForce4 series running a DX8 codepath. An example for such a game with multiple codepaths is Half-Life 2.
Radeon 8500 came with support for TruForm, an early implementation of Tessellation.
Implementations
Radeon 8500/8500 LE/9100
ATI's first R200-based card was the Radeon 8500, launched in October 2001. In early 2002, ATI launched the Radeon 8500 LE (re-released later as the Radeon 9100), an identical chip with a lower clock speed and slower memory. Whereas the full 8500 was clocked at 275 MHz core and 275 MHz RAM, the 8500LE was clocked more conservatively at 250 MHz for the core and 200 or 250 MHz for the RAM. Both video cards were first released in 64 MB DDR SDRAM configurations; the later 128 MB Radeon 8500 boards received a small performance boost resulting from a memory interleave mode.
November 2001 saw the release of the All-In-Wonder Radeon 8500 DV, with 64 MB and a slower clock speed like the 8500 LE. In 2002, three 128 MB cards were rolled out: the Radeon 8500, 8500 LE, and the All-In-Wonder Radeon 8500 128 MB, which was clocked at full 8500 speeds but had fewer video-related features than the AIW 8500 DV. ATI claimed that the lower clock speed of the 8500 DV was due to the FireWire interface.
In late 2002, the Radeon 9100 was announced to satisfy strong market demand for products based on the R200 architecture.
Radeon 8500 XT (canceled)
An updated chip, the Radeon 8500 XT (R250) was planned for a mid-2002 release, to compete against the GeForce4 Ti line, particularly the top line Ti4600 (which retailed for an MSRP of $350–399 USD). Prerelease information touted a 300 MHz core and RAM clock speed for the "R250" chip.
A Radeon 8500 running at 300 MHz clock speeds would hardly have defeated the GeForce4 Ti4600, let alone a newer card from NVIDIA. At best it could have been a better-performing mid-range solution than the lower-complexity Radeon 9000 (RV250, see below), but it would also have cost more to produce and would have been poorly suited to the Radeon 9000's dual laptop/desktop roles due to die size and power draw. Notably, overclockers found that the Radeon 8500 and Radeon 9000 could not reliably overclock to 300 MHz without additional voltage; the R250, with its greater complexity and equivalent manufacturing technology, would undoubtedly have had similar issues, resulting in poor chip yields and thus higher costs.
ATI, perhaps mindful of what had happened to 3dfx when they took focus off their "Rampage" processor, abandoned the R250 refresh in favor of finishing off their next-generation DirectX 9.0 card which was released as the Radeon 9700. This proved to be a wise move, as it enabled ATI to take the lead in development for the first time instead of trailing NVIDIA. The new Radeon 9700 flagship, with its next-generation architecture giving it unprecedented features and performance, would have been superior to any R250 refresh, and it easily took the performance crown from the Ti4600.
Radeon 9000
The Radeon 9000 (RV250) was launched alongside the Radeon 9700. The 9000 succeeded the Radeon 7500 (RV200) in the mainstream market segment, with the latter being moved to the budget segment. This chip was a significant redesign of R200 to reduce cost and power usage. Among the hardware removed are one of the two texture units per pipeline, the "TruForm" function, Hierarchical-Z, the DirectX 7 TCL unit, and one of the two vertex shaders. In games, the Radeon 9000 performs similarly to the GeForce4 MX 440. Its main advantage over the MX 440 was that it had a full DirectX 8.1 vertex and pixel shader implementation. While the 9000 was not quite as fast as the 8500LE or the Nvidia GeForce3 Ti200, the 8500LE and Ti200 were to be discontinued, though the former was reintroduced due to strong market demand.
Radeon 9200
A later revision of the 9000 was the Radeon 9200 (RV280) released April 16, 2003, which aside from supporting AGP 8X, was identical. There was also a cheaper version, the 9200SE, which had a 20% lower clock speed and only had a 64-bit memory bus. Another board, called the Radeon 9250 was launched in July 2004, being simply a slightly lower-clocked RV280.
ATI had re-branded its products in 2001, intending the 7xxx series to indicate DirectX 7.0 capabilities, 8xxx for DirectX 8.1, and so on. However, in naming the Radeon 9000/9200, which only had DirectX 8.1 rendering features, ATI advertised them as "DirectX 9.0 compatible" while the truly DirectX 9.0-spec Radeon 9700 was "DirectX 9.0 compliant".
Laptop versions
The Mobility Radeon 9000 was launched in early summer 2002 and was the first DirectX 8 laptop chip. It outperformed the DirectX 7-based nVidia GeForce 2 Go and was more feature-rich than the GeForce 4 Go.
A Mobility Radeon 9200 later followed as well, derived from the desktop 9200. The Mobility Radeon 9200 was also used in many Apple laptops, including the Apple iBook G4.
Models
Drivers
Unix-related operating systems
The open source drivers from X.org/Mesa support almost all features provided by the R200 hardware. They are shipped by default on most BSDs and Linux systems. Newer ATI Catalyst drivers do not offer support for any R500 or older architecture product.
The PowerPC-based Mac mini and iBook G4, which run on Mac OS X, were supplied with Radeon 9200 GPUs; the final Power Mac G4 "Mirrored Drive Door" systems had the 9000 and 9000 Pro cards available as a BTO option.
Windows drivers
This series of Radeon graphics cards is supported by AMD under Microsoft Windows operating systems including Windows XP (except x64), Windows 2000, Windows Me, and Windows 98. Other operating systems may have support in the form of a generic driver that lacks complete support for the hardware. Driver development for the R200 line ended with the Catalyst 6.11 drivers for Windows XP.
Classic Mac OS
The Radeon 9250 was the final ATI card to officially support Mac OS 9.
AmigaOS
The R200 series of Radeon graphics cards is supported by the Amiga operating system, Release 4 and higher. 2D graphics are fully supported by all cards in the family, with 3D acceleration support for the 9000, 9200, and 9250-series of cards.
MorphOS
The R200 series of Radeon graphics cards is supported by MorphOS
See also
ATI Technologies
Comparison of ATI Graphics Processing Units
List of AMD graphics processing units
References
Sources
"ATI Radeon 8500 64 MB Review (Part 1)" by Dave Baumann, Beyond3D.Com, March 29, 2002, retrieved January 14, 2006
"ATI Radeon 8500 64 MB Review (Part 2)" by Dave Baumann, Beyond3D.Com, April 4, 2002, retrieved January 14, 2006
"ATI RADEON 9100 Based Graphics Cards Review: Gigabyte and PowerColor Solutions" by Tim Tscheblockov, X-Bit Labs, February 5, 2003, retrieved January 9, 2006
"ATI's Radeon 8500 & 7500: A Preview" by Anand Lal Shimpi, Anandtech, August 14, 2001, retrieved January 9, 2006
"ATI's Radeon 8500: She's got potential" by Anand Lal Shimpi, Anandtech, October 17, 2001, retrieved January 9, 2006
"ATI R200 Chip Details" by Beyond3D, retrieved August 30, 2010
"ATI RV250 Chip Details" by Beyond3D, retrieved August 30, 2010
"ATI RV280 Chip Details" by Beyond3D, retrieved August 30, 2010
External links
techPowerUp! GPU Database
ATI Technologies products
Computer-related introductions in 2001
Graphics cards |
20527175 | https://en.wikipedia.org/wiki/ADMB | ADMB | ADMB or AD Model Builder is a free and open source software suite for non-linear statistical modeling. It was created by David Fournier and is now developed by the ADMB Project, a creation of the non-profit ADMB Foundation. The "AD" in AD Model Builder refers to the automatic differentiation capabilities that come from the AUTODIF Library, a C++ language extension also created by David Fournier, which implements reverse mode automatic differentiation. A related software package, ADMB-RE, provides additional support for modeling random effects.
Features and use
Markov chain Monte Carlo methods are integrated into the ADMB software, making it useful for Bayesian modeling. In addition to Bayesian hierarchical models, ADMB provides support for modeling random effects in a frequentist framework using Laplace approximation and importance sampling.
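The flavor of an ADMB model can be seen in a minimal template (TPL) file for a linear regression. This is a sketch in the style of the introductory examples in ADMB's documentation; the data names N, x, y are illustrative:

```
DATA_SECTION
  init_int N                  // number of observations
  init_vector x(1,N)          // predictor values read from the data file
  init_vector y(1,N)          // observed responses
PARAMETER_SECTION
  init_number a               // intercept, estimated by minimizing f
  init_number b               // slope
  objective_function_value f  // ADMB minimizes this quantity
PROCEDURE_SECTION
  f = norm2(y - (a + b*x));   // residual sum of squares
```

ADMB compiles such a template to C++, links it against AUTODIF, and minimizes the objective function using exact derivatives obtained by automatic differentiation.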
ADMB is widely used by scientists in academic institutions, government agencies, and international commissions, most commonly for ecological modeling. In particular, many fisheries stock assessment models have been built using this software. ADMB is freely available under the New BSD License, with versions available for Windows, Linux, Mac OS X, and OpenSolaris operating systems. Source code for ADMB was made publicly available in March 2009.
History and background
Implementation
Work by David Fournier in the 1970s on development of highly parameterized integrated statistical models in fisheries motivated the development of the AUTODIF Library, and ultimately ADMB. The likelihood equations in these models are typically non-linear, and estimates of the parameters are obtained by numerical methods.
Early in Fournier's work, it became clear that general numerical solutions to these likelihood problems could only be reliably achieved using function minimization algorithms that incorporate accurate information about the gradients of the likelihood surface. Computing the gradients (i.e. partial derivatives of the likelihood with respect to all model variables) must also be done with the same accuracy as the likelihood computation itself. Fournier developed a protocol for writing code to compute the required derivatives based on the chain rule of differential calculus. This protocol is very similar to the suite of methods that came to be known as "reverse mode automatic differentiation".
The statistical models using these methods typically included eight constituent code segments:
the objective function;
adjoint code to compute the partial derivatives of the objective function with respect to the parameters to be estimated;
dedicated memory to contain intermediate data for derivative computations, known as the "gradient stack", and the software to manage it;
a function minimizer;
an algorithm to check that the derivatives are correct with respect to finite difference approximations;
an algorithm to insert model parameters into a vector that can be manipulated by the function minimizer and the corresponding derivative code;
an algorithm to return the parameter values to the likelihood computation and the corresponding derivative code; and
an algorithm to compute the second partial derivatives of the objective function with respect to the parameters to be estimated, the Hessian matrix.
Model developers are usually only interested in the first of these constituents. Any programming tools that can reduce the overhead of developing and maintaining the other seven will greatly increase their productivity.
Bjarne Stroustrup began development of C++ in the 1970s at Bell Labs as an enhancement to the C programming language. C++ spread widely, and by 1989, C++ compilers were available for personal computers. The polymorphism of C++ makes it possible to envisage a programming system in which all mathematical operators and functions can be overloaded to automatically compute the derivative contributions of every differentiable numerical operation in any computer program.
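The operator-overloading idea can be sketched in a few lines. The following is an illustrative Python toy, not AUTODIF's actual C++ API: each arithmetic operation records its inputs and local derivatives, and a reverse sweep accumulates gradients by the chain rule.

```python
class Var:
    """A value that records the operations applied to it so gradients can
    be propagated backwards (reverse-mode AD). Illustrative toy only."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # pairs of (input Var, local derivative)
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self, seed=1.0):
        # One reverse sweep: push seed * local derivative to each input.
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = Var(2.0)
f = x * y + x          # df/dx = y + 1 = 3, df/dy = x = 3
f.backward()
print(x.grad, y.grad)  # 3.0 3.0
```

A production implementation (like AUTODIF's "gradient stack") records operations on a tape and replays it once in reverse; the naive recursion above re-traverses shared subexpressions and is only suitable for small examples.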
Otter Research
Fournier formed Otter Research Ltd. in 1989, and by 1990 the AUTODIF Library included special classes for derivative computation and the requisite overloaded functions for all C++ operators and all functions in the standard C++ math library. The AUTODIF Library automatically computes the derivatives of the objective function with the same accuracy as the objective function itself and thereby frees the developer from the onerous task of writing and maintaining derivative code for statistical models. Equally important from the standpoint of model development, the AUTODIF Library includes a "gradient stack", a quasi-Newton function minimizer, a derivative checker, and container classes for vectors and matrices.
The first application of the AUTODIF Library was published in 1992.
The AUTODIF Library does not, however, completely liberate the developer from writing all of the model constituents listed above. In 1993, Fournier further abstracted the writing of statistical models by creating ADMB, a special "template" language that simplifies model specification by providing the tools to transform models written using the templates into AUTODIF Library applications. ADMB produces code to manage the exchange of model parameters between the model and the function minimizer, and automatically computes the Hessian matrix and inverts it to provide an estimate of the covariance of the estimated parameters. ADMB thus completes the liberation of the model developer from all of the tedious overhead of managing non-linear optimization, thereby freeing the developer to focus on the more interesting aspects of the statistical model.
By the mid-1990s, ADMB had earned acceptance by researchers working on all aspects of resource management. Population models based on ADMB are used to monitor a range of both endangered species and commercially valuable fish populations, including whales, dolphins, sea lions, penguins, albatross, abalone, lobsters, tunas, marlins, sharks, rays, anchovy, and pollock. ADMB has also been used to reconstruct movements of many species of animals tracked with electronic tags.
In 2002, Fournier teamed up with Hans Skaug to introduce random effects into ADMB. This development included automatic computation of second and third derivatives and the use of forward mode automatic differentiation followed by two sweeps of reverse mode AD in certain cases.
ADMB Project
In 2007, a group of ADMB users that included John Sibert, Mark Maunder and Anders Nielsen became concerned about ADMB's long-term development and maintenance. An agreement was reached with Otter Research to sell the copyright to ADMB for the purpose of making ADMB an open-source project and distributing it without charge. The non-profit ADMB Foundation was created to coordinate development and promote use of ADMB.
The ADMB Foundation drafted a proposal to the Gordon and Betty Moore Foundation for the funds to purchase ADMB from Otter Research. The Moore Foundation provided a grant to the National Center for Ecological Analysis and Synthesis at the University of California at Santa Barbara in late 2007 so that the Regents of the University of California could purchase the rights to ADMB.
The purchase was completed in mid-2008, and the complete ADMB libraries were posted on the ADMB Project website in December 2008. By May 2009, more than 3000 downloads of the libraries had occurred. The source code was made available in December 2009. In mid-2010, ADMB was supported on all common operating systems (Windows, Linux, MacOS and Sun/SPARC), for all common C++ compilers (GCC, Visual Studio, Borland), and for both 32- and 64-bit architectures.
ADMB Foundation efforts during the first two years of the ADMB Project focused on automating the building of ADMB for different platforms, streamlining installation, and creating a user-friendly working environment. Planned technical developments include parallelization of internal computations, implementation of hybrid MCMC, and improvement of large sparse matrix computations for use in random effects models.
See also
List of statistical packages
List of numerical analysis software
Comparison of numerical analysis software
References
External links
For downloads of installers, manuals and source code: The ADMB Project
To support the ADMB Project: The ADMB Foundation
Original developer of ADMB: Otter Research Ltd
Array programming languages
Cross-platform free software
Free statistical software
Numerical analysis software for Linux
Numerical programming languages
Statistical programming languages |
36812097 | https://en.wikipedia.org/wiki/Profanity%20%28instant%20messaging%20client%29 | Profanity (instant messaging client) |
Profanity is a text mode instant messaging interface that supports the XMPP protocol. It supports Linux, macOS, Windows (via Cygwin or WSL), FreeBSD, and Android (via Termux).
Packages are available in Debian, Ubuntu and Arch Linux distributions.
Features include multi-user chat, desktop notifications, and Off-the-Record (OTR) and OMEMO message encryption.
References
External links
Linux Format Issue 164 Hotpicks
Free XMPP clients
Free instant messaging clients
Instant messaging clients for Linux |
15546132 | https://en.wikipedia.org/wiki/PowerVM%20Lx86 | PowerVM Lx86 | PowerVM Lx86 was a binary translation layer for IBM's System p servers. It enabled 32-bit x86 Linux binaries to run unmodified on the Power ISA-based hardware. IBM used this feature to migrate x86 Linux servers to the PowerVM virtualized environment; it was supported on all POWER5 and POWER6 hardware as well as BladeCenter JS21 and JS22 systems.
In contrast to regular emulators, only the instructions are translated, not the entire system, making the process fast and flexible. The Lx86 software senses that it is executing x86 code and translates it to PowerPC code at execution time; these instructions are then cached, ensuring that the translation process only has to take place once, further reducing the drop in performance usually associated with emulation. Lx86 does not support applications that access hardware directly, such as kernel modules. Earlier versions of Lx86 did not run code that makes use of SSE instructions, though as of version 1.3.2 the SSE and SSE2 instruction sets were supported.
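The translate-once-then-cache scheme described above can be modelled in a few lines. This is an illustrative Python toy, not real x86-to-PowerPC binary translation: the "translation" step is a stand-in, and the cache is keyed by the block's address so repeated execution skips the expensive step.

```python
# Toy model of a binary-translation cache: translate a basic block the
# first time it runs, then reuse the cached result on every later run.
translation_cache = {}

def translate_block(x86_block):
    # Stand-in for the expensive x86 -> PowerPC translation step.
    return tuple("ppc_" + insn for insn in x86_block)

def execute(address, x86_block):
    if address not in translation_cache:       # translate only once
        translation_cache[address] = translate_block(x86_block)
    return translation_cache[address]          # later calls hit the cache

first = execute(0x400000, ["mov", "add", "ret"])
second = execute(0x400000, ["mov", "add", "ret"])  # served from cache
print(first is second)  # True: the block was translated exactly once
```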
The product was at first marketed as System p AVE (System p Application Virtual Environment) and was incorrectly reported as PAVE (Portable Advanced Virtualization Emulator) in the press but the name has since changed to PowerVM Lx86. Lx86 was based on the QuickTransit dynamic translator from Transitive, the same that Apple uses for its Rosetta emulation layer that enables Mac OS X to run unmodified PowerPC binaries on their Intel-based Macintoshes.
All versions and releases of the Lx86 product were withdrawn from marketing in September 2011, with support discontinued in April 2013.
References
PowerVM Lx86 for x86 Linux applications
Red Book – Getting started with PowerVM Lx86
White paper – x86 Linux application consolidation on Power Systems platforms using IBM virtualization technologies
PowerVM Lx86 on IBM's developerWorks
IBM's press release 2007-04-23
Transitive's press release 2007-04-23
X86 applications on IBM's PowerPC servers - Heise online
IBM Opens Up Beta for PAVE Linux Runtime on Power Chips, The Four Hundred
IBM US Withdrawal Announcement 911-170
Virtualization software
X86 emulators
IBM software |
30440521 | https://en.wikipedia.org/wiki/BSD%20%28disambiguation%29 | BSD (disambiguation) | BSD is the Berkeley Software Distribution, a free Unix-like operating system, and numerous variants.
BSD may also refer to:
Science and technology
Bipolar spectrum disorder
Birch and Swinnerton-Dyer conjecture, an important unsolved problem in mathematics
Computing
BSD licenses, a family of permissive free software licenses originally from the Berkeley Software Distribution
Bit stream decoder, a video decoder in a graphics processing unit
Organizations
Birsa Seva Dal, a political group in India
Bob- und Schlittenverband für Deutschland, the bobsleigh, luge, and skeleton federation for Germany
Blue State Digital, a new media strategy and technology firm
Cray Business Systems Division, or Cray BSD
Schools
Beaverton School District, a school district in Beaverton, Oregon, US
Bellevue School District, the school district of Bellevue, Washington, US
Benoit School District, the school district of Benoit, Mississippi, US
Brandywine School District, a school district in New Castle County, Delaware, US
Burlingame School District, a school district in Burlingame, California, US
Places
Bumi Serpong Damai, a planned city in Greater Jakarta, Indonesia
Baoshan Yunduan Airport (IATA code), China
Other uses
BSD Records, a 1950s record label
Black Spiral Dancer, a Tribe of evil-aligned werewolves in the White Wolf produced role-playing game Werewolf: The Apocalypse
Besiyata Dishmaya, BS"D, an Aramaic phrase meaning "with the help of Heaven"
Bahamian dollar (ISO 4217 code)
Bungo Stray Dogs, a Japanese manga and anime series.
See also
Berkeley Software Design (BSDi), a former corporation which developed, sold and supported BSD/OS
BSD/OS, originally called BSD/386 and sometimes known as BSDi, a proprietary version of the BSD operating system developed by Berkeley Software Design
List of BSD operating systems
Blue Screen of Death (BSoD), an error screen displayed after a fatal system error
Bipolar Spectrum Diagnostic Scale (BSDS) |
22808542 | https://en.wikipedia.org/wiki/National%20e-Governance%20Plan | National e-Governance Plan | The National e-Governance Plan (NeGP) is an initiative of the Government of India to make all government services available to the citizens of India via electronic media. NeGP was formulated by the Department of Electronics and Information Technology (DeitY) and the Department of Administrative Reforms and Public Grievances (DARPG). The Government approved the National e-Governance Plan, consisting of 27 "Mission Mode Projects" (MMPs) and 8 components, on 18 May 2006; four new MMPs (Health, Education, PDS and Posts) were added in 2011, bringing the total to 31. This is an enabler of the Digital India initiative, and UMANG (Unified Mobile Application for New-age Governance) in turn is an enabler of NeGP.
Meta data and data standards or MDDS is the official document describing the standards for common metadata as part of India's National e-Governance Plan.
The plan
Background
The 11th report of the Second Administrative Reforms Commission, titled "Promoting e-Governance - The Smart Way Forward", established the government's position that an expansion in e-Government was necessary in India. The ARC report was submitted to the Government of India on 20 December 2008. The report cited several prior initiatives as sources of inspiration, including references to the Singapore ONE programme. To pursue this goal, the National e-Governance Plan was formulated by the Department of Information Technology (DIT) and Department of Administrative Reforms & Public Grievances (DAR&PG). The program required the development of new applications to allow citizen access to government services through Common Service Centers; it aimed to both reduce government costs and improve access to services.
Criticism
Lack of needs analysis, business process reengineering, interoperability across MMPs, and coping with new technology trends (such as mobile interfaces, cloud computing, and digital signatures) were some of the limitations of the initiative.
References
See also
UMANG
My Gov
Citations
External links
National eGovernance Website
Informatics An eGovernance publication from National Informatics Centre
Saaransh – A compendium of Mission Mode Projects under NeGP
Ministry of Communications and Information Technology (India)
Internet in India
Technology in society
E-government in India |
44599012 | https://en.wikipedia.org/wiki/Vocaloid%204 | Vocaloid 4 | Vocaloid 4 is a singing voice synthesizer and successor to Vocaloid 3 in the Vocaloid series.
History
In October 2014, the first Vocaloid confirmed for the Vocaloid 4 engine was the English vocal Ruby. Its release was delayed so it could be released on the newer engine. A Vocaloid 4 version for OS X has also been released. All AH-Software vocals were also announced as receiving updated packages as well as VY1v4. The update of the AH-Software Vocaloid 2 vocals is related to Windows 10 being released in 2015, and the impact it may have on the Vocaloid 2 software.
The Vocaloid 4 engine allows the importing of Vocaloid 2 and Vocaloid 3 vocals, though Vocaloid 2 vocals must have already been imported into Vocaloid 3 for this to work. The new engine includes other features, but not all of them are accessible by Vocaloid 2 and Vocaloid 3 vocals. One of the new features is a "Growl" feature which allows vocals built for Vocaloid 4 to take on a growl-like property in their singing results. Cross-synthesis was also added, allowing the user to switch between two of a character's vocals built for the same language smoothly using a time-varying parameter. This feature is accessible when using Vocaloid 3 and Vocaloid 4 vocals, but not Vocaloid 2. Cross-synthesis only works with vocals from the same character, making use with packages like VY1v4 possible, but not packages such as Anon and Kanon. Another feature included is Pitch Rendering, which all imported vocals can use. This displays the effective pitch curve on the user interface. Finally, real-time input has been included in this version and is another feature which all vocals can use. VSQX files made for Vocaloid 3 work in Vocaloid 4, but they will not work the other way around.
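The time-varying cross-synthesis parameter can be pictured as a crossfade. The real XSY feature interpolates voice characteristics inside the synthesis engine; this toy Python sketch only blends two rendered waveforms sample by sample, to show the role of the mix curve (all names and values are illustrative).

```python
# Toy crossfade between two renderings of the same part, driven by a
# time-varying mix parameter (0.0 = all vocal A, 1.0 = all vocal B).
def cross_synthesis(vocal_a, vocal_b, mix_curve):
    return [a * (1.0 - m) + b * m
            for a, b, m in zip(vocal_a, vocal_b, mix_curve)]

sweet = [0.2, 0.2, 0.2, 0.2]     # stand-in samples for a "Sweet" vocal
power = [1.0, 1.0, 1.0, 1.0]     # stand-in samples for a "Power" vocal
curve = [0.0, 0.25, 0.75, 1.0]   # glide from "Sweet" to "Power" over time

mixed = cross_synthesis(sweet, power, curve)
print([round(v, 2) for v in mixed])  # [0.2, 0.4, 0.8, 1.0]
```

Because real XSY blends parameters of two vocals built from the same voice, the note that it only works within one character's product line corresponds to requiring both inputs to share the same underlying voice model.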
For Japanese users who had bought Vocaloid 3, Vocaloid 3 Editor, Vocaloid Editor for Cubase or VY1v3, Yamaha offered a free upgrade for each software. Those who bought the editor after November 10, 2014, were also offered a free upgrade until June 2015. Those who wished to use Vocaloid on a Macintosh, however, were offered only the Vocaloid Editor for Cubase; a regular Vocaloid 4 adaptation of the Vocaloid editor was not offered for Mac.
One of the highlights of the cross-synthesis function is that it can produce very different results depending on which vocals are mixed. Megpoid V4's two vocals "Native" and "NativeFat" will not produce much difference between them. However, by mixing Megpoid V4 vocals "Power" and "Sweet" the function will produce a very different result.
When it was first released, it was only available in Japanese or English, despite being fully capable of using Chinese, Korean, or Spanish vocals from Vocaloid 3. This was similar to how the original Vocaloid software functioned: its interface was only in English, despite having vocals for other languages. Despite the release of Vocaloid 4, the Chinese vocals Yuezheng Ling and Zhanyin Lorra were still developed as releases intended for Vocaloid 3.
For the voice "Cyber Diva", some long-term bugs were fixed and pronunciations addressed. English Vocaloid libraries now use a new shorter, more effective script. A custom library was made for the vocal, the largest ever made, which caused an issue with the dictionary proxy that was addressed. English vocals now have more clarity, at the price of expressive tones. Due to the mislabeling of certain phonetic symbols, all past English built vocals were reported to have incorrect sounds assigned to certain symbols; this has been addressed with the new script.
Several adaptations of Vocaloid 4 have been released. A mobile version of the editor was released in 2015. Another adaptation of the software came in the form of "Unity with Vocaloid", a version of the engine which allowed it to synthesize vocals in real time within the Unity Engine. The job plug-in "Vocalistener" also received an upgrade, with an upgrade offer for owners of the Vocaloid 3 version.
Vocaloid 4 was the last version released under Hideki Kenmochi, who announced his retirement on January 30, 2015. Development of Vocaloid has continued under his replacement, Katsumi Ishikawa.
In addition to Vocaloid 4 itself, the software also saw use in the Vocaloid Keyboard, which had first been announced in 2012; prototypes of the Keyboard were finally unveiled in mid-2015 but did not see commercial release.
Products
VY1v4
The latest version of the VY1 product, VY1v4 contains 4 voices, "Natural", "Normal", "Power" and "Soft." VY1v4 was released on December 17, 2014, for both PC and Mac operating systems. Those who bought the previous version were offered an upgrade discount which was included on the Vocaloid shop.
Cyber Diva
Cyber Diva is an American-accented female vocal released on February 4, 2015. The voice provider is Jenny Shima, who is an American singer, theater actress, and model.
A version of this vocal was also added to the Mobile Vocaloid Editor app, making it the first English vocal for the app at the time of release, and for quite a while the only vocal on the English app. Despite this, the app was still using a Japanese interface at the time of release.
Yuzuki Yukari V4
An update on the Vocaloid 3 product "Yuzuki Yukari", released March 18, 2015, contains three vocals: "Jun", a faithful recreation of the previous vocal; "Onn", a soft-type vocal; and "Lin", a power-type vocal. The vocals can be purchased individually or as a complete package. In addition, it was announced that all three vocals would be released for the "Unity with Vocaloid" version.
All three vocals also appeared on the Mobile Vocaloid Editor app, each sold separately.
The Yuzuki Yukari voice was later developed into the Vocaloid Keyboard.
Megurine Luka V4X
An update on the Vocaloid 2 product "Megurine Luka", released March 19, 2015, on both PC and Mac. It comes with four Japanese vocals, "Soft" and "Hard" together with their E.V.E.C. counterparts "Soft_EVEC" and "Hard_EVEC", plus two English vocals, "Straight" and "Soft". The "E.V.E.C." system adds options to change the tones of "Soft_EVEC" and "Hard_EVEC" to nine additional Japanese tones: "Power 1", "Power 2", "Native", "Whisper", "Dark", "Husky", "Soft", "Falsetto" and "Cute". "Soft_EVEC" and "Hard_EVEC" can be used for cross-synthesis (XSY), giving Luka four possible XSY vocals for Japanese, and two for English. The vocal is based on breath.
Gackpoid V4
An update of the V3 Gackpoid product announced on March 31, 2015, and released on April 30, 2015.
Nekomura Iroha V4
An update on the Vocaloid 2 product. The voice comes with two libraries: "Natural" and "Soft". Released June 18, 2015.
SF-A2 miki V4
An update on the Vocaloid 2 product. Released June 18, 2015.
V4 Flower
An update of the V Flower vocal released for Vocaloid 3. Released on July 16, 2015. Those who bought the previous version were offered an upgrade discount, which was included on the Vocaloid shop.
Sachiko
Sachiko is a Japanese female Vocaloid from Yamaha, released on July 27, 2015. The voice actor is the Enka singer Sachiko Kobayashi. It came with a special plug-in for Vocaloid 4 called "Sachikobushi". This adjusts VSQx files to produce a voice like Kobayashi's.
This vocal was later also added to the Mobile Vocaloid editor app.
ARSloid
ARSloid was announced in June 2015 and released on September 23, 2015. It is a male vocal based on the singer Akira Kano from Arsmagna. The product comes with three vocals, "Original", "Soft" and "Bright", allowing cross-synthesis to be used.
Ruby
An American female English vocal developed by independent developer "Prince Syo" in collaboration with Anders of VOCATONE and distributed by PowerFX Systems AB; it was released on October 7, 2015.
Kaai Yuki V4
An update on the Vocaloid 2 product released on October 29, 2015. Due to the original voice actor maturing, a new actor was used for this product to make the new samples needed to complete the vocal library.
Hiyama Kiyoteru V4
An update on the Vocaloid 2 product, includes the "Natural" and "rock" Voicebanks; released on October 29, 2015.
Megpoid V4
The update to the Megpoid software. There are a total of 10 vocals in this package. The first five, Native, Adult, Whisper, Sweet, and Power, are updates on the old Vocaloid 3 versions; the last five are new vocals: NativeFat, MellowAdult, PowerFat, NaturalSweet, and SoftWhisper. Megpoid V4 was released as a "complete package" with all ten vocals, or as one of five separate packages with a pair of vocals in each.
The five packages are:
Native, containing the vocals "Native" and "NativeFat"
Adult, containing the vocals "Adult" and "MellowAdult"
Power, containing the vocals "Power" and "PowerFat"
Sweet, containing the vocals "Sweet" and "NaturalSweet"
Soft, containing the vocals "Soft" and "SoftWhisper"
The vocals supplied in each package are designed to be cross-synthesis friendly, so pairs such as "Native" and "NativeFat" produce the best results. However, using the function with vocals from different packages produces a very different result: mixing "Power" and "Sweet" is almost the equivalent of another vocal library entirely.
Released on November 5, 2015.
The voice was also featured in use on the Vocaloid Keyboard prototype.
Megpoid English is also currently in consideration for an update. Development is unscheduled but estimated to begin in 2016 or 2017.
Dex
Dex is a male American-accented Vocaloid by Zero-G, and partner to Daina, based on a hound theme and pop-orientation. Dex was released on November 20, 2015.
Dex was also sold for the Mac version of the software, a first for Zero-G Vocaloid software.
Daina
Daina is a female American-accented Vocaloid by Zero-G and the partner vocal to Dex. It is based on a fox theme and is pop-orientated. Released on November 20, 2015.
Daina was also sold alongside Dex for the Mac version of the software, a first for Zero-G Vocaloid software.
Rana V4
Rana was also announced for an upgrade at "The Voc@loid M@ster 33" event. The first 100 visitors to the "Rana Experience" booth were given the chance to obtain Rana V4 early access.
Rana's Vocaloid 4 version was released on 1 December 2015. This voice is identical to the Vocaloid 3 version, except for the addition of a growl function. Those who had registered all 30 issues of the magazine Vocalo-P ni Naritai (ボカロPになりたい!) were offered an upgrade discount on the Vocaloid shop.
Kagamine Rin/Len V4X
An update on the Vocaloid 2 product "Kagamine Rin/Len" was released in Q3, 2015 for both PC and Mac. The vocals are based on tension and strength while being able to control power. They also both come with an English Voicebank. On August 31, 2015, their homepage was launched, confirming their "V4X" status and their use of E.V.E.C.
The package contains updates on the original Vocaloid 2 Append vocals ("Power", "Warm" and "Sweet" for Rin; "Power", "Cold" and "Serious" for Len), with improvements. E.V.E.C. is only possible with their "Power" vocals; in comparison to the earlier Luka package, only the "Soft" and "Power" options are given.
In addition to the main package, a separate English vocal package was produced for Kagamine Rin and Len. The package contains natural-sounding vocals for English song creation. It is sold as an expansion pack, either bundled with the Kagamine Rin/Len V4X package or on its own with only the "lite" version of the Vocaloid 4 engine.
There have been many comments on the close similarities between Luka's English vocals and Len's English vocals. Both Rin and Len's English vocals are softer and less defined than their Japanese sounds.
The release date was 24 December 2015.
Unity-chan; Kohaku Otori/AKAZA
Unity-chan is a feminine Japanese vocal voiced by Japanese novice voice actress Kakumoto Asuka. Unlike previous software, it has two mascots for the single vocal, Kohaku and AKAZA. They were released on 14 January 2016 for the Unity Engine version of the software, "Unity with Vocaloid". A special version of the package called "『C89特別仕様 『VOCALOID4 Library unity-chan!』PROJECT:AKAZA スペシャルパッケージ』" was sold at the COMIKET 89 booth.
They will also be released for the Mobile Vocaloid Editor.
They have special licensing terms. For use of their vocal within projects, the voice will be provided for free as long as a 10 million yen consultation fee is paid. For other public uses of the vocal, a Vocaloid engine fee is required. Due to the Yamaha royalty system, users will have to seek consultation with third parties for use with Vocaloid and other companies' vocals.
Fukase
Fukase is a Japanese and English male Vocaloid whose voice was provided by Satoshi Fukase (深瀬 慧, Fukase Satoshi).
It was released on January 28, 2016, with three vocals: "Normal", "Soft", and "English". This package also comes with a plug-in for Vocaloid 4 called "Electric Tune" which allows the user to add distinct pitching effects to the voice, such as a robotic sound. It also includes a booklet with information about how to operate the VOCALOID4 editor and Fukase's vocal using Sekai No Owari's song "Starlight Paradise".
Fukase's license agreement requires permission before it may be used in commercial products.
Xingchen
Xingchen (星尘), known under the English name "Stardust", is a female Chinese vocal released on April 13, 2016. It was developed by Shanghai HENIAN, voiced by Chalili, and the character was conceptualized as "Quadimensionko", a mascot for the Quadimension album series and group.
Otomachi Una
A Japanese Vocaloid voiced by the Japanese voice actress Aimi Tanaka. It comes with a "sweet" type vocal called "Sugar" and a "powerful" type called "Spicy".
Hatsune Miku V4X
On August 31, 2015, the title of the Hatsune Miku update was revealed. It was released August 31, 2016, nine years after the original Vocaloid 2 release. Like Megurine Luka V4X and Kagamine Rin/Len V4X, it has E.V.E.C. capabilities. It has seven voice-banks "Original", "Solid", "Dark", "Soft", "Sweet", "English", and "Chinese". "Original", "Solid", and "Soft" contain the E.V.E.C. colors and Power~Soft. The English voice-bank is meant to resemble a mixture of the retired append, "vivid", and "Original". As such, it sounds drastically different than the previous English vocal. It can be purchased in a bundle, or as a separate purchase similar to her V3 release and Kagamine Rin/Len V4X.
A seventh vocal, "Chinese (Mandarin)", was released September 5, 2017.
Uni
Uni is a Korean female Vocaloid created by ST MEDiA that was released in February 2017. It has two expansion vocals in development, "Power" and "Soft". An English package is planned after the completion of these expansions.
Yuezheng Longya
A Chinese Mandarin male vocal released May 10, 2017. It is voiced by the Chinese voice actor, Zhang Jie.
Luo Tianyi V4
Tianyi's update production was officially announced on March 12, 2015, and released on July 15, 2016. In the V4 version, it received a complete re-record of previous lines, and one additional line.
Development on a Japanese voicebank for Luo Tianyi was confirmed to have been restarted on June 19, 2017. It was released on May 18, 2018.
Cyber Songman
A complementary male vocal for Cyber Diva, released in October 2016.
Tohoku Zunko V4
A VOCALOID4 update for Tohoku Zunko was released on October 27, 2016.
Macne Nana V4
An update to Macne Nana Japanese and English was released December 15, 2016, along with a new Vocaloid, Macne Petit. Like Macne Nana, Macne Petit was ported to Vocaloid following its performance on other platforms.
Tone Rion V4
An update to Vocaloid 3 vocal Tone Rion was released February 16, 2017. It is voiced by Nemu Yumemi, who also voices associated Vocaloid Yumemi Nemu.
Yumemi Nemu
A complementary feminine vocal to Tone Rion was released February 16, 2017. Yumemi Nemu's voice provider and namesake is Nemu Yumemi of Dempagumi.inc.
Masaoka Azuki V4
A public release of Private V2 Mobile vocal, Azuki, was released on July 12, 2017.
Kobayashi Matcha V4
A public release of Private V2 Mobile vocal, Matcha, was released on July 12, 2017.
LUMi
A new Vocaloid by the upcoming company, Akatsuki Virtual Artists, named LUMi was released on August 30, 2017.
Xin Hua V4
An update to Vocaloid 3 Xin Hua was released on September 1, 2017, with a Japanese Vocal being released on September 22, 2017.
Kizuna Akari
A Japanese Voiceroid 2 and Vocaloid produced by VOCALOMAKETS, part of AH-Software. The Vocaloid package was released on April 26, 2018.
Mirai Komachi
A new Vocaloid named Mirai Komachi by the company Bandai Namco Studio Inc was released on May 24, 2018.
Zhiyu Moke
A Mandarin Chinese Vocaloid developed by Vsinger. It is voiced by Shangqing Su. The vocal was demonstrated live on June 17, 2017, and was later released alongside Mo Qingxian on August 2, 2018.
Its design was originally unveiled in 2012, following an official contest.
Mo Qingxian
A Mandarin Chinese Vocaloid developed by Vsinger. It is voiced by Mingyue. The vocal was demonstrated live on June 17, 2017, and was later released alongside Zhiyu Moke on August 2, 2018.
The design was originally unveiled in 2012, following an official contest.
Critical reception
In 2015, Zero-G reported that Dex and Daina achieved high download numbers.
Crypton Future Media's download website "Sonicwire" reported that the Megurine Luka V4X product had the number 1 position in Vocaloid sales. This would later be overtaken by the Hatsune Miku V4X Bundle.
References
External links
Speech synthesis software
Vocaloid |
1032254 | https://en.wikipedia.org/wiki/Speaker%20recognition | Speaker recognition | Speaker recognition is the identification of a person from characteristics of voices. It is used to answer the question "Who is speaking?" The term voice recognition can refer to speaker recognition or speech recognition. Speaker verification (also called speaker authentication) contrasts with identification, and speaker recognition differs from speaker diarisation (recognizing when the same speaker is speaking).
Recognizing the speaker can simplify the task of translating speech in systems that have been trained on specific voices or it can be used to authenticate or verify the identity of a speaker as part of a security process. Speaker recognition has a history dating back some four decades as of 2019 and uses the acoustic features of speech that have been found to differ between individuals. These acoustic patterns reflect both anatomy and learned behavioral patterns.
Verification versus identification
There are two major applications of speaker recognition technologies and methodologies. If the speaker claims to be of a certain identity and the voice is used to verify this claim, this is called verification or authentication. On the other hand, identification is the task of determining an unknown speaker's identity. In a sense, speaker verification is a 1:1 match where one speaker's voice is matched to a particular template whereas speaker identification is a 1:N match where the voice is compared against multiple templates.
From a security perspective, identification is different from verification. Speaker verification is usually employed as a "gatekeeper" in order to provide access to a secure system. These systems operate with the users' knowledge and typically require their cooperation. Speaker identification systems can also be implemented covertly without the user's knowledge to identify talkers in a discussion, alert automated systems of speaker changes, check if a user is already enrolled in a system, etc.
In forensic applications, it is common to first perform a speaker identification process to create a list of "best matches" and then perform a series of verification processes to determine a conclusive match. Working through the list of best matches, examiners compare the speaker's samples against each candidate, weighing similarities and differences to judge whether they come from the same person. The prosecution and defense use this as evidence to determine whether the suspect is actually the offender.
Training
One of the earliest training technologies to be commercialized was implemented in Worlds of Wonder's 1987 Julie doll. At that point, speaker independence was an intended breakthrough, and systems required a training period. A 1987 ad for the doll carried the tagline "Finally, the doll that understands you", despite the fact that it was described as a product "which children could train to respond to their voice." The term voice recognition, even a decade later, referred to speaker independence.
Variants of speaker recognition
Each speaker recognition system has two phases: Enrollment and verification. During enrollment, the speaker's voice is recorded and typically a number of features are extracted to form a voice print, template, or model. In the verification phase, a speech sample or "utterance" is compared against a previously created voice print. For identification systems, the utterance is compared against multiple voice prints in order to determine the best match(es) while verification systems compare an utterance against a single voice print. Because of the process involved, verification is faster than identification.
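The enrollment phase described above, extracting features and forming a voice print, can be sketched as simple mean pooling of per-utterance feature vectors. This is a deliberate simplification (real systems fit richer statistical models), and the vectors are invented:

```python
import numpy as np

def extract_features(utterance):
    """Stand-in for real feature extraction (e.g. spectral analysis);
    here each 'utterance' is already a made-up feature vector."""
    return np.asarray(utterance, dtype=float)

def enroll(utterances):
    """Form a voice print by averaging features over several
    enrollment utterances from the same speaker."""
    return np.mean([extract_features(u) for u in utterances], axis=0)

# Three enrollment utterances from one speaker
voice_print = enroll([[1.0, 0.2], [0.8, 0.4], [1.2, 0.3]])
# voice_print is the stored template that later utterances are compared against
```

In the verification phase, a new utterance's features would be scored against this stored print; in identification, against many such prints.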
Speaker recognition systems fall into two categories: text-dependent and text-independent.
Text-Dependent:
If the text must be the same for enrollment and verification, this is called text-dependent recognition. In a text-dependent system, prompts can either be common across all speakers (e.g. a common pass phrase) or unique. In addition, the use of shared secrets (e.g. passwords and PINs) or knowledge-based information can be employed in order to create a multi-factor authentication scenario.
Text-Independent:
Text-independent systems are most often used for speaker identification as they require very little if any cooperation by the speaker. In this case the text during enrollment and test is different. In fact, the enrollment may happen without the user's knowledge, as in the case for many forensic applications. As text-independent technologies do not compare what was said at enrollment and verification, verification applications tend to also employ speech recognition to determine what the user is saying at the point of authentication.
In text independent systems both acoustics and speech analysis techniques are used.
Technology
Speaker recognition is a pattern recognition problem. The various technologies used to process and store voice prints include frequency estimation, hidden Markov models, Gaussian mixture models, pattern matching algorithms, neural networks, matrix representation, vector quantization and decision trees. For comparing utterances against voice prints, more basic methods like cosine similarity are traditionally used for their simplicity and performance. Some systems also use "anti-speaker" techniques such as cohort models and world models. Spectral features are predominantly used in representing speaker characteristics. Linear predictive coding (LPC) is a speech coding method used in speaker recognition and speech verification.
Ambient noise levels can impede collection of both the initial and subsequent voice samples. Noise reduction algorithms can be employed to improve accuracy, but incorrect application can have the opposite effect. Performance degradation can result from changes in behavioural attributes of the voice and from enrollment using one telephone and verification on another. Integration with two-factor authentication products is expected to increase. Voice changes due to ageing may impact system performance over time. Some systems adapt the speaker models after each successful verification to capture such long-term changes in the voice, though there is debate regarding the overall security impact of automated adaptation.
Legal implications
Due to the introduction of legislation such as the General Data Protection Regulation in the European Union and the California Consumer Privacy Act in the United States, there has been much discussion about the use of speaker recognition in the workplace. In September 2019, Irish speech recognition developer Soapbox Labs warned about the legal implications that may be involved.
Applications
The first international patent was filed in 1983, arising from telecommunication research at CSELT (Italy) by Michele Cavazza and Alberto Ciaramella, as a basis both for future telco services to end customers and for improving noise-reduction techniques across the network.
Between 1996 and 1998, speaker recognition technology was used at the Scobey–Coronach Border Crossing to enable enrolled local residents with nothing to declare to cross the Canada–United States border when the inspection stations were closed for the night. The system was developed for the U.S. Immigration and Naturalization Service by Voice Strategies of Warren, Michigan.
In May 2013 it was announced that Barclays Wealth was to use passive speaker recognition to verify the identity of telephone customers within 30 seconds of normal conversation. The system used had been developed by voice recognition company Nuance (that in 2011 acquired the company Loquendo, the spin-off from CSELT itself for speech technology), the company behind Apple's Siri technology. A verified voiceprint was to be used to identify callers to the system and the system would in the future be rolled out across the company.
The private banking division of Barclays was the first financial services firm to deploy voice biometrics as the primary means of authenticating customers to its call centers. 93% of customers rated the system "9 out of 10" for speed, ease of use and security.
Speaker recognition may also be used in criminal investigations, such as those of the 2014 executions of, amongst others, James Foley and Steven Sotloff.
In February 2016 UK high-street bank HSBC and its internet-based retail bank First Direct announced that it would offer 15 million customers its biometric banking software to access online and phone accounts using their fingerprint or voice.
See also
AI effect
Applications of artificial intelligence
Speaker diarisation
Speech recognition
Voice changer
Lists
List of emerging technologies
Outline of artificial intelligence
Notes
References
Homayoon Beigi (2011), "Fundamentals of Speaker Recognition", Springer-Verlag, Berlin, 2011.
"Biometrics from the movies" –National Institute of Standards and Technology
Elisabeth Zetterholm (2003), Voice Imitation. A Phonetic Study of Perceptual Illusions and Acoustic Success, PhD thesis, Lund University.
Md Sahidullah (2015), Enhancement of Speaker Recognition Performance Using Block Level, Relative and Temporal Information of Subband Energies, PhD thesis, Indian Institute of Technology Kharagpur.
External links
Circumventing Voice Authentication The PLA Radio podcast recently featured a simple way to fool rudimentary voice authentication systems.
Speaker recognition – Scholarpedia
Voice recognition benefits and challenges in access control
Software
bob.bio.spear
ALIZE
Speech processing
Voice technology
Automatic identification and data capture
Biometrics |
38350849 | https://en.wikipedia.org/wiki/Kentucky%20Route%20Zero | Kentucky Route Zero | Kentucky Route Zero is a point-and-click adventure game developed by Cardboard Computer and published by Annapurna Interactive. The game was first revealed in 2011 via the crowd-funding platform Kickstarter and is separated into five acts that were released sporadically throughout its development; the first releasing in January 2013 and the last releasing in January 2020. The game was originally developed for Linux, Microsoft Windows, and OS X, with console ports for the Nintendo Switch, PlayStation 4, and Xbox One under the subtitle of "TV Edition", coinciding with the release of the final act.
Kentucky Route Zero follows the narrative of a truck driver named Conway and the strange people he meets as he tries to cross the mysterious Route Zero in Kentucky to make a final delivery for the antiques company for which he works. The game received acclaim for its visual art, narrative, characterization, atmosphere and themes, appearing on several best-of-the-decade lists.
Gameplay
Kentucky Route Zero is a point-and-click game with text-driven dialogue. There are no traditional puzzles or challenges; the focus of the game is storytelling and atmosphere. The player controls Conway by clicking on the screen, either to guide him to another location or to interact with other characters and objects. The player can also choose Conway's dialogue, and occasionally the dialogue of other characters, during in-game conversations. The game is separated into various locations, between which Conway can travel using his truck. A map is shown when traveling on the road, and the player must guide the truck icon to a destination of their choosing, mostly areas to which the player has been directed or sent. The player also takes control of other characters at certain times.
Plot
Conway, a truck driver, works as a delivery man for an antique shop owned by a woman named Lysette. Hired to make a delivery to 5 Dogwood Drive, Conway travels the roads around Interstate 65 in Kentucky to locate the address, accompanied by his dog, whose name is chosen by the player. After searching around, Conway realizes that he is lost and stops at a gas station, Equus Oils.
Act I
Conway arrives at the Equus Oils station and meets an old man named Joseph, who is the owner of the establishment. Joseph informs Conway that the only way to reach Dogwood Drive is by taking the mysterious Route Zero, and tasks him with fixing the circuit breaker to restore power in the station and using the computer to look up directions. Conway goes underneath the station and meets three people who are playing a strange game and ignore him completely. He is able to retrieve their lost 20-sided die but soon afterwards notices that they have disappeared, clearing a way to fix the electricity. When asked about the strange people who disappeared, Joseph suggests Conway may have been hallucinating. Conway uses the computer to find directions to the Márquez Farm so he can talk to Weaver Márquez, who has a better understanding of the roads. As Conway leaves, Joseph tells him that he has loaded a TV into the back of the truck to take to Weaver. Conway drives to the Márquez residence and meets Weaver. Weaver quizzically asks Conway a number of questions before Conway finally asks her for directions to Route Zero. She has Conway set up the TV, and when Conway looks into the screen, he sees a vision of a strange farm and spaces out. When he wakes, Weaver tells him about her cousin Shannon, who fixes TVs, gives him the directions to Route Zero, and suddenly disappears.
When arriving at the destination, Conway finds the area to actually be an abandoned mine shaft called Elkhorn Mine. He locates Shannon Márquez, who has been exploring the mines in search of something she has lost. Conway decides to help Shannon travel deeper into the mine, and begins toying with a PA system to test the depth and length of the tunnels. Unfortunately, the sound waves cause a portion of the mine to collapse. Conway injures his leg from falling rubble, and Shannon uses a track to help them travel through the mine. While exploring the mine, Shannon reveals the mine's tragic history, involving the deaths of many miners due to flooding. If the lamplight is turned off during the travel, ghostly visions of miners can be seen wandering the caves. Before exiting the mines, Shannon leaves Conway and travels a bit farther down the mine shaft, and comes across a heap of miner helmets. She comes back quickly without explaining anything. Conway and Shannon travel to Shannon's workshop, and then back to the Márquez Farm, where Shannon reveals that the Márquez family's debts had caused Weaver to flee. As Shannon attempts to fix the old TV, Conway looks in again. This time the picture of the farm begins to warp and separate, causing the screen to create an image of the opening to Route Zero and the truck driving down it.
Act II
Act II opens with a prelude in which Lula Chamberlain, an installation artist whose work is featured in the Kentucky Route Zero bonus content Limits & Demonstrations, receives a rejection notice from the Gaston Trust for Imagined Architecture. After reading this notice, Chamberlain sorts through a series of proposals for reclaiming spaces for purposes alternate to their current function, such as a proposal to reclaim a basketball court as a dog kennel.
Following the prelude, the focus returns to Conway, Shannon, and Conway's dog. The three arrive at a six-story building known as the Bureau of Reclaimed Spaces. In the lobby they are told that in order to receive directions to Dogwood Drive they must first obtain an ingestion notice from within the Bureau. The receptionist suggests they seek out Lula Chamberlain, currently the Bureau's senior clerk. After a series of bureaucratic misdirections, the three manage to meet with Lula. She informs them that the directions to Dogwood Drive are at an off-site storage facility within an old church. Additionally she suggests Conway should seek out Doctor Truman for treatment of his injured leg. At the storage facility Conway chats about hobbies with the caretaker of the building and listens to a prerecorded sermon on the virtue of hard work while Shannon finds the record they are seeking. As they leave the building Conway collapses from his injury, hallucinating about Elkhorn Mine, and Shannon decides their first priority should be to find Doctor Truman and obtain treatment.
Upon their return, the receptionist at the Bureau tells the group that Doctor Truman can be found at his house off the highway. The group leaves Route Zero and goes back above ground in search of Doctor Truman. Arriving at the site, the group discovers that the doctor's house has been torn down and replaced with a museum—the Museum of Dwellings. While searching the museum, they encounter a young boy named Ezra, who claims his brother is Julian, a giant eagle. Ezra tells them the Doctor now lives in the Forest, and offers to fly them using Julian. The group accepts and after traveling through the strange illusory forest, lands in the woods. As Conway's condition worsens, Shannon helps him continue, and finally locates Doctor Truman's house. Doctor Truman tells Conway his injury is severe but treatable, and prescribes him an anesthetic called Neurypnol TM. Act II ends as Conway succumbs to the drug, causing his vision to grow black and the walls of the house to pull away to reveal the forest beyond.
Act III
The Act opens with Conway dreaming of a previous conversation he had with Lysette. The two recall a tragic event involving Charlie, Lysette's son, and Lysette informs Conway of a new delivery to be made, which will be the last delivery of Lysette's antique shop. Conway awakens from the Neurypnol TM-induced sleep at Doctor Truman's house to find his injured leg replaced with a strange skeletal limb giving off a yellow glow. After returning to the Museum of Dwellings to find it closed for the night, Conway, Shannon and Ezra resume their search for Lula Chamberlain in Conway's truck. The three are quickly stopped again, however, after the truck's engine breaks down. While Shannon calls for a tow truck, two musicians, Johnny and Junebug, pass the group on a motorcycle with a sidecar, and after some discussion decide to help the group get the truck moving again in exchange for following them to the Lower Depths bar to watch their performance. The group agrees, traveling to the Lower Depths and talking with Harry, the bartender, who gives directions back to Route Zero. After their performance, Johnny and Junebug decide to accompany Conway, Shannon and Ezra on their travels.
Upon returning to Route Zero, the group comes across a large cave dominated by a rock spire, known as the Hall of the Mountain King. There they find various types of vintage electronics in many states of disrepair, including a large amount set on fire. They come across an old man named Donald who appears fixated on a grand computer project involving a "mold computer" that is enhanced by black mold growing inside it, as well as a piece of software designed as a comprehensive simulation called Xanadu. Donald claims that Lula was one of the people who designed Xanadu along with him, and that she had left a long time ago, but that there may be a way to find her using Xanadu. However, as Xanadu is not working correctly after an apparent sabotage from creatures Donald calls the Strangers, the group must travel to the Place Where the Strangers Come From in order to seek out their help. Conway and Shannon talk to the Strangers off-screen and, after returning with the solution, travel back with the group to the Hall of the Mountain King and fix Xanadu, using it to locate Lula. With Donald's help, she finds directions to Dogwood Drive, and tells the group to meet her at the Bureau of Reclaimed Spaces.
After arriving at the Bureau, Conway receives Lula's directions, which involve taking a ferry from the Bureau down a river. While waiting for the ferry, Conway reveals what happened while he was talking with the Strangers - he and Shannon had gone via a hidden elevator to an underground whiskey distillery staffed by odd, indistinct glowing skeletons, identical in appearance to Conway's new leg. While at the factory, Conway is mistaken for a new hire as a shipping truck driver and coerced into taking a drink of a very expensive whiskey, and is subsequently roped into a job driving trucks for the distillery to pay it off. Act III ends with the ferry arriving.
Act IV
Act IV takes place on and around The Mucky Mammoth tugboat ferry, as it goes around the underground river known as the Echo. Conway, Shannon, Ezra, Johnny, and Junebug continue downriver on the Echo with boat captain Cate, her assistant Will, and a passenger named Clara who plays the theremin. They make several short stops along the way - at a floating refueling station, a tiki bar called the Rum Colony, a waterside telephone booth, a psychological research facility called the Radvansky Center, and a cypress-covered island rich with edible mushrooms. Cate needs to deliver a package to a telephone exchange located in a flooded train tunnel, but the tugboat cannot pass through the area without disturbing a bat sanctuary, so Conway and Shannon agree to take a dinghy to reach the exchange station. They pass by a monument to the Elkhorn Mine disaster and then through the bat sanctuary. Conway, who has been drinking and whose behavior had become increasingly erratic, sees a boat full of glowing skeletons, similar to those at the whiskey distillery, and remarks that he's been seeing them repeatedly on the river journey. He tells Shannon he wants to take the job at the distillery and pass his delivery truck on to her; she is disturbed by his plan. Shannon delivers the package to Poppy, the lone remaining exchange operator, and when she turns around to re-board the dinghy, she finds that Conway has turned completely into a skeleton, and has boarded a skiff with two other skeletons, departing with them.
Shannon continues down the river alone on the dinghy and rendezvouses with the rest of the Mucky Mammoth passengers and crew at Sam & Ida's, a seafood restaurant. They eat and converse with the proprietors, then travel to a neighborhood of houseboats, where Clara gives a performance on theremin. Their last stop is the Silo of Late Reflections, where Shannon, Clara, Johnny, Junebug, and Ezra all disembark and unload Conway's truck. There is no clear path to get the truck from the Silo up to the surface; nevertheless, Shannon resolves to continue trying to make Conway's delivery to Dogwood Drive.
Act V
Act V begins after Shannon and her traveling companions have hauled all of the contents of Conway's truck to the top of the Silo of Late Reflections, which turns out to be a well at the center of an effigy mound in a small town. This town is the location of 5 Dogwood Drive – the house stands new and pristine, though the rest of the town was washed out by a flash flood that occurred while the characters were underground on Route Zero and the Echo River. The travelers meet and converse with the residents of this town, and learn about its history and landmarks, including a graveyard, a library, a waffle restaurant, a hangar and airstrip, and a public-access television station. Both the travelers and the residents weigh whether they will try to stay and rebuild the community, or leave in hopes of better lives. One of the residents, Ron, digs a grave to bury "The Neighbors", two horses that were fixtures of town life and who died in the flood. An impromptu ceremony is held in honor of the horses; town resident Nikki reads a poem, and Emily sings a song, "I'm Going That Way". The final view is of Shannon and the group having moved the items from Conway's truck into 5 Dogwood Drive, completing his delivery, and gathering in the house.
Development
In the early stages of development the developers were influenced by the works of Gabriel García Márquez, Flannery O'Connor and David Lynch. They also looked at theatre scripts for inspiration, which later helped in characterisation, dialogue, environment design and treatment of space, lighting and movement. It was developed using the Unity game engine.
The game was originally released for Linux, Microsoft Windows, and OS X, with console ports for the Nintendo Switch, PlayStation 4, and Xbox One published by Annapurna Interactive under the subtitle of "TV Edition", coinciding with the release of the final act. The final act's update also included short interludes created during development, as well as audio captions, adjustable text scale, Steam achievements and localizations for French, Italian, German, Spanish, Russian, Korean and Japanese.
Reception
Kentucky Route Zero has received positive reviews from critics. GameSpot referred to it as being "beautiful and mysterious enough to grip you", and IGN called it "a damn fine example of what makes the medium of video games so special". PC Gamer stated that "Other adventures see you decide a character's fate, their successes or failures. Kentucky Route Zero makes a point of asking you to describe their interior instead – and, by extension, yourself as well ... A powerfully evocative and beautiful subversion of point-and-click rote, but occasionally opaque and disorienting." The A.V. Club noted that "KRZ really is the masterpiece critics have been lauding it as for years" and that "anyone with an interest in storytelling, existential mysteries, and the way art can reflect our poor and hollowed world should play it".
Rock, Paper, Shotgun named Kentucky Route Zero game of the year in 2013. The game has been included on several best-of-the-2010s lists.
It was nominated for Games For Impact at The Game Awards 2020 and for the Nebula Award for Best Game Writing.
In March 2021, Kentucky Route Zero: TV Edition won the BAFTA Games Award for Original Property, and was nominated for the Narrative award.
References
Further reading
External links
2020 video games
Crowdfunded video games
Indie video games
Kickstarter-funded video games
Linux games
MacOS games
Nintendo Switch games
PlayStation 4 games
Point-and-click adventure games
Single-player video games
Video games about dogs
Video games developed in the United States
Video games set in Kentucky
Windows games
Xbox One games
Annapurna Interactive games |
48384890 | https://en.wikipedia.org/wiki/Presto%20%28SQL%20query%20engine%29 | Presto (SQL query engine) | Presto (including PrestoDB, and PrestoSQL which was later re-branded to Trino) is a distributed query engine for big data using the SQL query language. Its architecture allows users to query data sources such as Hadoop, Cassandra, Kafka, AWS S3, Alluxio, MySQL, MongoDB and Teradata. One can even query data from multiple data sources within a single query. Presto is community-driven open-source software released under the Apache License.
History
Presto was originally designed and developed at Facebook, Inc. (later renamed Meta) for their data analysts to run interactive queries on its large data warehouse in Apache Hadoop. The first four developers were Martin Traverso, Dain Sundstrom, David Phillips, and Eric Hwang.
Before Presto, the data analysts at Facebook relied on Apache Hive for running SQL analytics on their multi-petabyte data warehouse.
Hive was deemed too slow for Facebook's scale, and Presto was invented to fill the gap with fast queries. Development started in 2012, and the system was deployed at Facebook later that year. In November 2013, Facebook announced its open source release.
In 2014, Netflix disclosed they used Presto on 10 petabytes of data stored in the Amazon Simple Storage Service (S3).
In November, 2016, Amazon announced a service called Athena that was based on Presto.
In 2017, Teradata spun out a company called Starburst Data to commercially support Presto, which included staff acquired from Hadapt in 2014.
Teradata's QueryGrid software allowed Presto to access a Teradata relational database.
In January 2019, the Presto Software Foundation was announced. The foundation is a not-for-profit organization for the advancement of the Presto open source distributed SQL query engine. At the same time, Presto development forked: PrestoDB maintained by Facebook and PrestoSQL maintained by the Presto Software Foundation, with some cross-pollination of code.
In September 2019, Facebook donated PrestoDB to the Linux Foundation, establishing the Presto Foundation. Neither the creators of Presto, nor the top contributors and committers, were invited to join this foundation.
By 2020, all four of the original Presto developers had joined Starburst.
In December 2020, PrestoSQL was rebranded as Trino, since Facebook had a trademark on the name "Presto" (also donated to the Linux Foundation).
Another company called Ahana was announced in 2020, with seed funding from GV (formerly Google Ventures, an arm of Alphabet, Inc.), to commercialize the PrestoDB fork as a cloud service.
A $20 million round of funding for Ahana was announced in August 2021.
Architecture
Presto's architecture is very similar to that of other database management systems that use cluster computing, sometimes called massively parallel processing (MPP). One coordinator works in sync with multiple workers. Clients submit SQL statements that get parsed and planned, following which parallel tasks are scheduled to workers. Workers jointly process rows from the data sources and produce results that are returned to the client. Compared to the original Apache Hive execution model, which used the Hadoop MapReduce mechanism on each query, Presto does not write intermediate results to disk, resulting in a significant speed improvement. Presto is written in Java.
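The coordinator/worker flow described above can be illustrated with a small scatter-gather sketch. This is purely conceptual: Presto itself is written in Java and streams rows through in-memory pipelines rather than materializing partitions like this.

```python
from concurrent.futures import ThreadPoolExecutor

def worker(partition):
    """Each worker processes its slice of rows, keeping intermediate
    results in memory rather than writing them to disk."""
    return sum(row["value"] for row in partition)

def coordinator(rows, n_workers=4):
    """The coordinator splits the input across workers, runs them in
    parallel, and merges the partial results (scatter-gather)."""
    partitions = [rows[i::n_workers] for i in range(n_workers)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        partials = pool.map(worker, partitions)
    return sum(partials)

rows = [{"value": v} for v in range(10)]
print(coordinator(rows))  # 45, the same result as a serial sum
```

The merge step here is a trivial sum; in a real engine it is whatever final aggregation or ordering the query plan requires.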
A single Presto query can combine data from multiple sources. Presto offers connectors to data sources including files in Alluxio, Hadoop Distributed File System (often called a data lake), Amazon S3, MySQL, PostgreSQL, Microsoft SQL Server, Amazon Redshift, Apache Kudu, Apache Phoenix, Apache Kafka, Apache Cassandra, Apache Accumulo, MongoDB and Redis. Unlike other Hadoop distribution-specific tools, such as Apache Impala, Presto can work with any variant of Hadoop or without it. Presto supports separation of compute and storage and may be deployed on premises or using cloud computing.
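Federation across sources amounts to joining rows fetched through different connectors inside one engine. In the sketch below, two in-memory lists stand in for, say, a MySQL table and S3-hosted logs, and a hash join combines them the way a query plan would; in real Presto this is a single SQL statement over qualified names such as mysql.db.users and hive.logs.events (illustrative names, not from the source).

```python
# Two "catalogs" standing in for different connectors
mysql_users = [{"id": 1, "name": "alice"}, {"id": 2, "name": "bob"}]
s3_events = [{"user_id": 1, "event": "login"},
             {"user_id": 1, "event": "purchase"},
             {"user_id": 2, "event": "login"}]

def cross_source_join(users, events):
    """Hash join across sources: build a lookup on one side,
    probe it with the other, as a query engine would."""
    by_id = {u["id"]: u["name"] for u in users}
    return [(by_id[e["user_id"]], e["event"])
            for e in events if e["user_id"] in by_id]

print(cross_source_join(mysql_users, s3_events))
# [('alice', 'login'), ('alice', 'purchase'), ('bob', 'login')]
```

The point of the connector abstraction is that neither side needs to live in the same system: the engine only sees rows.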
See also
Apache Drill
Big data
Data-intensive computing
References
External links
Trino Software Foundation (formerly Presto Software Foundation)
Presto Foundation (under the Linux Foundation)
SQL
Free system software
Hadoop
Cloud platforms
Facebook software |
60278804 | https://en.wikipedia.org/wiki/Euphemus%20%28mythology%29 | Euphemus (mythology) | In Greek mythology, Euphemus (; Ancient Greek: Εὔφημος Eὔphēmos, "reputable") was the name of several distinct characters:
Euphemus, son of Poseidon and an Argonaut.
Euphemus, a descendant of the river god Axius and the father of the hero Eurybarus who defeated the female monster Sybaris.
Euphemus, father of Daedalus by Hyginus, possibly by mistake instead of Eupalamus.
Euphemus, son of Troezenus and a leader of the Thracian Cicones. He was an ally of the Trojans. According to late writers, he was killed either by Achilles or by one of the following four: Diomedes, Idomeneus, and the two Ajaxes, who at one point united to attack their opponents.
Euphemus, surname of Zeus on Lesbos.
Notes
References
Antoninus Liberalis, The Metamorphoses of Antoninus Liberalis translated by Francis Celoria (Routledge 1992). Online version at the Topos Text Project.
Apollonius Rhodius, Argonautica, translated by Robert Cooper Seaton (1853-1915). Loeb Classical Library Volume 001. London, William Heinemann Ltd, 1912. Online version at the Topos Text Project.
Apollonius Rhodius, Argonautica. George W. Mooney. London. Longmans, Green. 1912. Greek text available at the Perseus Digital Library.
Dares Phrygius, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at theio.com
Dictys Cretensis, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at the Topos Text Project.
Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project.
Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. Online version at the Perseus Digital Library.
Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. Greek text available at the Perseus Digital Library.
Children of Poseidon
Argonauts
People of the Trojan War
Characters in Greek mythology |
8180054 | https://en.wikipedia.org/wiki/Cabalum%20Western%20College | Cabalum Western College | Cabalum Western College is a non-sectarian, stock institution of higher learning in Iloilo City, Iloilo, Philippines established by Dr. Jose Cabalum Sr.
History
On October 5, 1945, immediately after World War II, the National Business School was established by Dr. Jose Cabalum Sr. He felt a calling from God to help the country rehabilitate by helping the young prepare themselves for their future with a proper education. The National Business School started with a single typewriter, a couple of tables and a few chairs. The founder was helped by two associates, Ceferino Lañada and Urbano Garrido. The college offered courses in Stenography, Typewriting and Bookkeeping. Among all vocational schools in Iloilo City, it was the first to acquire a permit for its courses.
In July 1949, the administration decided to offer a secondary commercial course in line with its extension program. Later, the name National Business School was changed to Cabalum Commercial School on the advice of the Bureau of Private Schools. In 1962, the school placed third in the National Examination for Public and Private Schools. In 1963, the Secondary Commercial Course was abolished so the school could specialize in Collegiate Secretarial and Business Education. In January 1982, Cabalum Commercial School applied to the Ministry of Education, Culture and Sports for a permit to offer a complete four-year degree program in Bachelor of Science in Secretarial Administration (BSSA). The permit was issued in October 1984 and became effective in March 1985. In the same year, Cabalum Commercial School was named "Outstanding Pioneer Business School" by the Iloilo Chamber of Commerce and Industry for having served and answered the need for vocational training among the youth for many years.
The college has since offered complete programs in Bachelor of Science in Business Administration (BSBA) and Bachelor of Science in Secretarial Administration (BSSA), along with additional vocational courses. A request for a change of name was granted, and the corporate name was changed to Cabalum Western College Inc. To keep abreast of technical advancement and development, Cabalum Western College began offering Bachelor of Science in Computer Science (BSCS) in school year 1992. Likewise, it offered the Secretarial Administration course with majors in Science and Data Processing under the Business Administration course. Since 1996, the college has offered a five-year program in Bachelor of Science in Computer Engineering (BSCoE).
Ideals
Cabalum Western College is an academic institution that has undergone significant transitions since its humble beginnings, and it continues to embody three ideals: scholarship, character, and service.
Present times
Cabalum Western College is located at the heart of the city, along Dr. Fermin Caram Sr. Avenue in Iloilo City. It has a total land area of 1,812 sq. m., with three buildings, 31 classrooms, and air-conditioned laboratories stocked with the tools and equipment students need. Another room was renovated for the opening of a new course, Bachelor of Science in Hotel and Restaurant Management, that school year.
On October 2–5, 2006, Cabalum Western College celebrated its 61st Foundation Day with the theme "The Founder's Legacy." The week of activities was dedicated to Dr. Jose Cabalum Sr., who died on August 18, 2006, and showcased the talents, creativity, and intelligence of the students and their teachers.
Cabalum Western College is affiliated with PSITE, CEAS-WV, AUCSI, PACSB, PAEOA and PACU. The Commission on Higher Education (CHED) and the Technical Education and Skills Development Authority (TESDA) also recognize Cabalum Western College, as it offers bachelor's degrees and technical/vocational courses.
The college trains students intellectually, physically, emotionally and spiritually. Students are active members of the Red Cross Iloilo Chapter, the Campus Paper Writers of Region VI-Western Visayas, and the Campus Ministry Iloilo Chapter. Cabalum Western College allows its students to undertake on-the-job training with companies in Western Visayas in work related to their field. Every year, the graduating students of Computer Engineering and Computer Science go to Manila for an educational tour and training at more sophisticated IT companies, where they get hands-on practice. Undergraduate students attend an annual seminar-workshop at the college with expert and experienced guest speakers.
Cabalum Western College is concerned with the intellectual, social, moral, physical and spiritual needs of the individual through scholarship, the development of character, and the promotion of service among its students. It strives to turn out graduates who are responsible, God-fearing citizens, proud of their culture, law-abiding, and prepared to do their share in national and international development.
The college administration, faculty, and staff continually improve in order to give students the best and most effective teaching.
Flagship courses
At first, Cabalum Western College offered courses in Stenography, Typewriting and Bookkeeping. Due to the overwhelming demand for information technology (IT) specialists, the college now offers various computer-based bachelor's degree and technical/vocational courses. It is the only college in town that offers ladderized courses. Graduating students are required to render on-the-job training (OJT) with top IT companies in the region in work related to their courses.
Bachelor of Science in Computer Engineering (BSCoE) trains students to be innovative. Computer-aided design and hardware-software interfacing are part of their training, preparing them to become leading inventors. Graduating students must pass an oral and written defense before an invited panel before they can graduate. Recent interfacing work has focused particularly on the field of robotics.
The Two-Year Computer Technician Course forms part of the ladder of the Computer Engineering course. In these two years, students are trained to understand how electricity and electronics work.
Bachelor of Science in Computer Science (BSCS) trains students in the software world. As new software is introduced, students are promptly taught to apply it in their work upon graduation.
Bachelor of Science in Business Administration (BSBA), with majors in Management, Finance and Management Accounting, prepares students to join the corporate world.
The Two-Year Associate in Commercial Science Course comprises the first two years of the BSBA. Students are trained to become competent business associates.
Bachelor of Science in Office Administration (BSOA), with a specialty in Computer Teacher Education, prepares students to become competent computer teachers with passion and compassion for their students and their work. The ladder of this course is the Two-Year Computer Secretarial Course, which trains students to become competent secretaries able to fit in whatever field of work they join.
The college also offers five-month vocational courses such as Dressmaking, Hairscience, Advance Dressmaking, Beauty Culture, Master's Men Tailoring, and Pre-Master's Men Tailoring, together with special vocational courses in Stenography, Typewriting and Bookkeeping.
School activities
Every first week of October, Cabalum Western College celebrates its Foundation Week for two main purposes: to remind students of the school's humble beginnings and to develop their intrapersonal and interpersonal capabilities. The event displays not only intelligence but also creativity, sportsmanship, and camaraderie. Each year, the college administration, faculty, and staff, together with the student body, design a new program to challenge everyone and keep the events from growing stale.
Aside from ball games, an odyssey of the minds, and clashes of beauty and wit, the college also holds a Facet of the Months, which pushes students and their teachers to pour their creativity into a remarkable presentation.
The college also lets its students explore the world outside campus. Graduating students travel to Manila on an educational tour that exposes them to the latest trends in information technology.
In essence, Cabalum Western College molds its students to become more sociable, more humane, and competent enough to join the global workforce.
Universities and colleges in Iloilo City |