id | url | title | text
---|---|---|---
52084532
|
https://en.wikipedia.org/wiki/Linux.Darlloz
|
Linux.Darlloz
|
Linux.Darlloz is a worm which infects Linux embedded systems.
Linux.Darlloz was first discovered by Symantec in 2013.
Linux.Darlloz targets the Internet of things and infects routers, security cameras, and set-top boxes by exploiting a PHP vulnerability.
The worm was based on proof-of-concept code that was released in October 2013.
Linux.Darlloz exploits a vulnerability () to compromise systems.
Linux.Darlloz was later found in March 2014 to have started mining cryptocurrencies such as Mincoin and Dogecoin.
See also
Botnet
Mirai (malware)
BASHLITE
Remaiten
Linux.Wifatch
Hajime (malware)
References
IoT malware
Linux malware
Botnets
|
19903569
|
https://en.wikipedia.org/wiki/OpenVRML
|
OpenVRML
|
OpenVRML is a free and open-source software project that makes it possible to view three-dimensional objects in the VRML and X3D formats in Internet-based applications. The software was initially developed by Chris Morley; since 2000 the project has been led by Braden McDaniel.
OpenVRML provides a GTK+-based plugin to render VRML and X3D worlds in web browsers. Its libraries can be used to add VRML and X3D support to applications. The software is licensed under the terms of the GNU Lesser General Public License (LGPL) and distributed as a GNU-style source package that is portable to most POSIX systems with a C++ compiler. The source distribution also includes project files for building on Microsoft Windows with the freely available Visual C++ Express compiler.
Binary (compiled) versions of the software are available within the Linux distributions Fedora and Debian, as well as under FreshPorts for FreeBSD and Fink for Mac OS X.
A number of software applications are designed to generate VRML code; see for instance GNU Octave.
References
External links
Official website
http://sourceforge.net/projects/openvrml/
Free software programmed in C++
Free 3D graphics software
|
29822312
|
https://en.wikipedia.org/wiki/MediaInfo
|
MediaInfo
|
MediaInfo is a free, cross-platform and open-source program that displays technical information about media files, as well as tag information for many audio and video files. It is used in many programs such as XMedia Recode, MediaCoder, eMule, and K-Lite Codec Pack. It can be easily integrated into any program using a supplied library. MediaInfo supports popular video formats (e.g. Matroska, WebM, AVI, WMV, QuickTime, Real, DivX, XviD) as well as lesser known or emerging formats. In 2012 MediaInfo 0.7.57 was also distributed in the PortableApps format.
MediaInfo provides a command-line interface for displaying the provided information on all supported platforms. Additionally, a GUI for viewing the information on Microsoft Windows and macOS is provided.
Technical information
MediaInfo reveals information such as:
General: Title, author, director, album, track number, date, duration
Video: codec, aspect ratio, framerate, bitrate
Audio: codec, sample rate, channels, language, bitrate
Text: subtitle language
Chapters: numbers of chapters, list of chapters
MediaInfo 0.7.51 and newer retrieve codec information optionally from tags or by computation. Thus, in the case of misleading tags, erroneous codec information may be presented.
The MediaInfo installer was previously bundled with "OpenCandy"; however, it was possible to complete the installation without installing it. Since April 2016, this is no longer the case.
Supported input formats
MediaInfo supports most common video and audio formats, including:
Video: MXF, MKV, OGM, AVI, DivX, WMV, QuickTime, RealVideo, MPEG-1, MPEG-2, MPEG-4, DVD-Video (VOB), XviD, MSMPEG4, ASP, H.264 (MPEG-4 AVC)
Audio: OGG, MP3, WAV, RealAudio, AC3, DTS, AAC, M4A, AU, AIFF, Opus
Subtitles: SRT, SSA, ASS, SAMI
Supported operating systems
MediaInfo supports Microsoft Windows XP or later, macOS, Android, iOS (iPhone/iPad), Solaris, and many Linux and BSD distributions. MediaInfo also provides source code, so essentially any operating system or platform can be supported. An old version, 0.7.60, for Windows 95 to 2000 exists.
A Doom9 forum thread for MediaInfo developers also covers simplified and modified implementations.
Licensing
Up to version 0.7.62 the MediaInfo library was licensed under the GNU Lesser General Public License, while GUI and CLI were provided under the terms of the GNU General Public License. Starting with version 0.7.63 the project switched to a BSD 2-clause license ("Simplified BSD License").
See also
GSpot Codec Information Appliance
FFmpeg command line tool ffprobe
ExifTool
References
External links
Cross-platform free software
Free multilingual software
Free multimedia software
Free software programmed in C++
Metadata
Software that uses wxWidgets
Software using the BSD license
|
54016646
|
https://en.wikipedia.org/wiki/Makeblock
|
Makeblock
|
Makeblock is a private Chinese technology company headquartered in Shenzhen, China, that develops Arduino-based hardware, robotics hardware, and Scratch-based software as educational tools for learning programming, engineering, and mathematics through the use of robotics.
Makeblock's products are sold in more than 140 countries and have over 10 million users in 20,000 schools worldwide. Roughly 70 percent of Makeblock's sales occur outside of China, with the United States being the largest market.
Founder & CEO
Born in 1985 in Anhui, China, Jasen Wang (Wang Jianjun, 王建军) says that he grew up as an "ordinary, poor child". He earned his master's degree in Aircraft Design at Northwestern Polytechnical University in 2010, while tinkering with robotics on the side. Wang spent a year in the workforce before founding Makeblock in 2012.
Wang remains a product manager at the company. In 2013, Forbes China ranked Wang as one of the top 30 entrepreneurs under the age of 30.
After Wang founded the Makeblock brand in March 2012, the company raised $23,000 in a round of funding from HAX. It received international coverage when it launched a robotics construction platform called Makeblock in December of the same year.
History
2013
Makeblock launched a crowdfunding project on Kickstarter, becoming the first ever Chinese entity to do so.
2014
Makeblock launched mBlock, officially entering the educational market in February.
2015
The first launch of mBot and mDrawbot occurred in April. By December, Makeblock's products had been sold in over 80 countries, and the brand had partnered with over 1,000 educational institutions.
2016
The first launch of mBot Ranger took place in March. In May, Makeblock became the exclusive robotics building platform of the RoPorter competition at The Washington Post's Transformers event.
The company's first physical experience store opened in Shenzhen in June, marking its first entry into the consumer mass market. At this stage, Makeblock products had been sold in over 140 countries and used in more than 20,000 schools.
Airblock was launched in October, followed by the release of mBlock in November.
2017
Makeblock Neuron was launched in March 2017. Shortly afterwards, the product won an array of internationally recognized awards, including the German Red Dot, the American IDSA IDEA award, the Good Design Award (Japan), and the South Korean K-Design Award.
MakeX, a Chinese national robotics challenge for teenagers, was launched in May.
In July, a partnership with SoftBank Group heralded an official entry into the Japanese market. This was followed by the establishment of subsidiaries in the U.S., Europe, Hong Kong, and Japan in August.
Codey Rocky was released in November. By December, the number of global Makeblock users surpassed 4.5 million.
2018
Makeblock raised $44 million (USD) in a Series C round at a $367 million valuation. The round was led by China International Capital Corporation (CICC) Alpha, a subsidiary of the CICC direct investment platform.
Hardware
1. STEAM Kits
1.1 Codey Rocky
Codey Rocky is a robot aimed at helping children learn the basics of coding and AI technologies.
It is composed of two detachable parts. Codey is a programmable controller holding more than 10 electronic modules. Rocky is a vehicle that can transport Codey. It can avoid obstacles, recognize colors and follow lines.
Codey Rocky is programmable with mBlock 5, and through it users can better understand Internet of things (IoT) technologies.
1.2 Makeblock Neuron
Makeblock Neuron is a programmable platform of more than 30 electronic building blocks. This product is targeted towards children and has color-coded blocks aimed at easier understanding.
Each of the blocks has various built-in features and can interact with each other. The kit also has IoT capability.
1.3 Airblock
A winner of four international design awards, the Airblock is a seven-module programmable flying robot. Magnetic connectors allow the drone to be assembled in different ways. It can be controlled using Makeblock's app.
1.4 mBot Series
mBot
Entry-level educational robotic kits
mBot is a STEAM education robot for beginners. It is a teaching and learning robot designed to teach programming. Children can build a robot from scratch and learn about a variety of robotic machinery and electronic parts. It also teaches the fundamentals of block-based programming, and helps children to develop their logical thinking and design skills.
mBot Ranger
Multiform land explorer
Part of the mBot series, the Ranger is aimed at users aged eight and up. This robot kit consists of three pre-set construction forms which can be expanded with ten expansion interfaces.
Ultimate 2.0
10-in-1 programmable robot kit
The most complex robot of the mBot series is aimed at users aged 12 and up. It includes an assembly guide of 10 designs that can be customized and adjusted. The kit contains more than 160 mechanical parts and modules, including Makeblock's MegaPi mainboard and is compatible with Arduino and Raspberry Pi. Along with Makeblock's block-based programming, Arduino IDE, Node.js and Python languages are supported.
There are also add-on packs.
1.5 mTiny
mTiny is an early education robot for children. Its Tap Pen Controller is a coding tool that exercises children's logical thinking and problem-solving abilities. It brings computer programming into children's lives, using coding cards and various themed map blocks to guide the child in exploring, perceiving, and creating through interactive, stimulating games. The continuously updated mTiny toolkit also fosters children's interest in learning math, English, music, and other subjects.
1.6 Makeblock Halocode
Wireless single-board computer
Makeblock Halocode is a single board computer with built-in Wi-Fi. Designed for programming education, its design integrates a broad selection of electronic modules. Pairing with block-based programming software mBlock, Halocode offers opportunities to experience AI & IoT applications.
2. STEAM pro
2.1 Laserbox
Designed for education and creation, Laserbox re-imagines and redefines laser operating performance by the use of a high-resolution, ultra-wide-angle camera together with an AI visual algorithm. The machine can auto-identify any official material and then set up the cutting-engraving parameters accordingly.
2.2 mBuild
mBuild is a series of electronic modules. It includes over 60 types of modules, supports virtually unlimited combinations, and can be used offline without further programming. Supported by both mBlock and the Mu Python editor, mBuild can be used to create projects ranging from beginner to professional level. It facilitates learning the basics of programming, developing advanced projects, teaching AI and IoT, joining robotics competitions, and more.
2.3 Makerspace
Makerspace is a programmable building block platform that encompasses electronic modules, structural parts, motors and actuators, and transmission and motion parts for gadget building. Teachers can get customized Makerspace proposals for specific curriculum needs. Coupled with Scratch or text-based coding in mBlock, Makerspace helps students participate in global robotic events like MakeX.
Software
1. mBlock 5
mBlock 5 is a block-based and text-based programming software based on Scratch 3.0. mBlock 5 allows users to program Makeblock robots, Arduino boards, and micro:bit. Using mBlock 5 without any hardware, users are able to code games and animations. The block-based code can be converted to Python code, be connected to IoT and supports AI-functionality such as face and voice recognition, as well as mood sensing. It supports various operating systems including macOS and Windows.
2. mBlock 3
mBlock 3 is a block-based programming software based on Scratch 2.0. It interacts with Makeblock controller boards and other Arduino-based hardware, allowing users to create interactive hardware applications. The block-based code can be converted to Arduino C and supports various operating systems including macOS, Windows, Linux, and Chromebook.
3. mBlock Blockly
mBlock Blockly allows users to learn about programming through courses designed as levels of a game. The visual programming language taught is specifically created for Makeblock's robots; the courses were designed by education professionals.
4. Neuron App
The Neuron App is a flow-based programming application with IoT support. It can control over 30 electronic modules.
MakeX Robotic Competition
MakeX is a robotics competition platform that promotes multidisciplinary learning within the fields of science and technology. It aims to promote STEAM education through Robotics Competition, STEAM Carnival, etc.
As the core activity of MakeX, the namesake MakeX Robotics Competition provides high-level competitions in the spirit of creativity, teamwork, fun, and sharing. It is committed to inspiring young people to learn Science (S), Technology (T), Engineering (E), Art (A) and Mathematics (M) and apply such knowledge in solving real-world problems.
STEAM Education
STEAM education is a learning movement that branched out of the STEM learning concept. Education professionals felt that STEM, on its own, missed critical attributes that are thought to be necessary for individuals to truly prosper in a rapidly changing modern society. STEAM encompasses the areas of Science and Technology, Engineering, the Arts, along with Mathematics and encourages a merge of these fields in an attempt to suit the learning style of every type of student.
Makeblock describes itself as a proponent of STEAM and focuses on providing hardware and software products that aim to allow students to engage in practical, hands-on learning rather than the traditional main focus on theoretical knowledge.
Market Trends
Robotics education was perceived as a major trend during 2017.
References
Companies based in Shenzhen
Chinese companies established in 2011
Technology companies of China
Privately held companies of China
Arduino
|
2347905
|
https://en.wikipedia.org/wiki/Stasinus
|
Stasinus
|
According to some ancient authorities, Stasinus () of Cyprus, a semi-legendary early Greek poet, was the author of the Cypria, in eleven books, one of the poems belonging to the Epic Cycle that narrated the War of Troy. According to Photius, others ascribed it to Hegesias (or Hegesinus) of Salamis, or elsewhere even to Homer himself, who was said to have written it on the occasion of his daughter's marriage to Stasinus. At Halicarnassus, according to an inscription found in 1995, local tradition ascribed it to a local poet, a "Kyprias" (Κυπρίας).
The Cypria, presupposing an acquaintance with the events of the Homeric poem, confined itself to what preceded the Iliad, and thus formed a kind of introduction. It contained an account of the Judgement of Paris, the rape of Helen, the abandonment of Philoctetes on the island of Lemnos, the landing of the Achaeans on the coast of Asia Minor, and the first engagement before Troy. It is possible that the "Trojan Battle Order" (the list of Trojans and their allies, Iliad 2.816-876, which formed an appendix to the "Catalogue of Ships") is abridged from that in the Cypria, which is known to have contained a list of the Trojan allies. Proclus, in his Chrestomathia, gave an outline of the poem (preserved in Photius, cod. 239). Plato puts quotes from Stasinus' works in the mouth of Socrates, in his dialogue Euthyphro.
Surviving fragments
Of Zeus, the author and creator of all these things,/ You will not tell: for where there is fear there is also reverence. - fragment cited by Socrates in the Euthyphro dialogue
References
Sources
F.G. Welcker, Der epische Cyclus, oder Die homerischen Dichter (Bonn: E. Weber, 1849–65).
D.B. Monro, Homer's Odyssey, Books XIII–XXIV, appendix to his edition of Odyssey, xiii–xxiv (1901).
Thomas W Allen, "The Epic Cycle," in Classical Quarterly 2.1 (January 1908:54-64).
Cypriot poets
Ancient Cypriots
Early Greek epic poets
7th-century BC Greek people
7th-century BC poets
Year of birth unknown
Year of death unknown
Ancient Greek writers known only from secondary sources
|
5187945
|
https://en.wikipedia.org/wiki/Clip%20Studio%20Paint
|
Clip Studio Paint
|
Clip Studio Paint (previously marketed as Manga Studio in North America), informally known in Japan as , is a family of software applications developed by Japanese graphics software company Celsys. It is used for the digital creation of comics, general illustration, and 2D animation. The software is available in versions for macOS, Windows, iOS, iPadOS, Android, and Chrome OS.
The application is sold in editions with varying feature sets. The full-featured edition is a page-based, layered drawing program, with support for bitmap and vector art, text, imported 3D models, and frame-by-frame animation. It is designed for use with a stylus and a graphics tablet or tablet computer. It has drawing tools which emulate natural media such as pencils, ink pens, and brushes, as well as patterns and decorations. It is distinguished from similar programs by features designed for creating comics: tools for creating panel layouts, perspective rulers, sketching, inking, applying tones and textures, coloring, and creating word balloons and captions.
History
The original version of the program ran on macOS and Windows, and was released in Japan as "Comic Studio" in 2001. It was sold as "Manga Studio" in the Western market by E Frontier America until 2007, then by Smith Micro Software until 2017, after which it has been sold and supported by Celsys and Graphixly LLC.
Early versions of the program were designed for creating black and white art with only spot color (a typical format for Japanese manga), but version 4 – released in 2007 – introduced support for creating full-color art. In 2013 a redesigned version of the program was introduced, one based on Celsys' separate Comic Studio and Illust Studio applications. Sold in different markets as "Clip Studio Paint" version 1 or "Manga Studio" version 5, the new application featured new coloring and text-handling tools, and a new file system which stored the data for each page in a single file (extension .lip), rather than the multiple files used for each page by Manga Studio 4 and earlier. In 2015, Comic Studio and Illust Studio were discontinued.
In 2016, the name "Manga Studio" was deprecated, with the program sold in all markets as "Clip Studio Paint". The version released under this unified branding (build 1.5.4 of the redesigned application) also introduced a new file format (extension .clip) and frame-by-frame animation. In late 2017, Celsys took over direct support for the software worldwide, and ceased its relationship with Smith Micro. In July 2018, Celsys began a partnership with Graphixly for distribution in North America, South America, and Europe.
Clip Studio Paint for the Apple iPad was introduced in November 2017, and for the iPhone in December 2019. Clip Studio Paint for Samsung Galaxy tablets and smartphones was released in August 2020, with versions for other Android devices and Chromebooks released in December.
Editions
The application has been sold in various editions, with differing feature sets and prices.
Early versions were sold in Japan as: Mini with very limited features (bundled with graphics tablets), Debut with entry-level features, Pro as the standard edition, and EX as the full-feature edition.
E Frontier and Smith Micro only sold the Debut and EX editions of the original application; with the overhauled version 5, Smith Micro sold only the Pro and EX editions, as standard and advanced editions of the program.
Under the Clip Studio Paint branding, the application is available in three editions: Debut (only bundled with tablets), Pro (adds support for vector-based drawing, custom textures, and comics-focused features), and EX (adds support for multi-page documents, book exporting).
Companion programs include Clip Studio (for managing and sharing digital assets distributed through the Clip Studio web site, managing licenses, and getting updates and support) and Clip Studio Modeler (for setting up 3D materials to use in Clip Studio Paint).
The Windows and macOS versions of the software are sold with perpetual licenses, with the software distributed either from the developer's web site or on DVD. Regular updates for these have been distributed online, free of additional charge. The versions for iPhone, iPad, and Android-based devices are distributed through the corresponding app stores free of charge, but require an ongoing subscription – which includes cloud storage – for unrestricted use; without a subscription the tablet versions can be used for only a specified number of months, and the phone versions can be used for only 1 hour per day.
See also
RETAS
Adobe Photoshop
Corel Painter
Notes
References
External links
Manga Studio/Comic Studio
e frontier America, Inc. page: Manga Studio 3.0
CELSYS, Inc. Comic Studio page: 4.0 English, On-de-Manga, Comic Studio Aqua 1.0 for Mac OS 9/X, 1.5 Japan, 2.0 Japan, 3.0 Japan, 4.0 Japan
Smith Micro Software, Inc. Manga Studio page: 3.0 Debut, 3.0 EX, 4.0
Manga Studio/Clip Studio Paint
Smith Micro Software, Inc. page: Manga Studio 5, Clip Studio Paint
CELSYS, Inc. Clip Studio Paint page
IOS software
MacOS graphics software
Windows graphics-related software
2001 software
Raster graphics editors
2D animation software
Proprietary cross-platform software
Graphics software
Android (operating system) software
|
64165943
|
https://en.wikipedia.org/wiki/Code%20of%20Honor%20%28novel%29
|
Code of Honor (novel)
|
Code of Honor (stylized as Tom Clancy Code of Honor, Tom Clancy: Code of Honor, or Tom Clancy's Code of Honour in the United Kingdom) is a techno-thriller novel, written by Marc Cameron and published on November 19, 2019. It is his third book in the Jack Ryan series.
Set in the Tom Clancy universe, President Ryan deals with the imprisonment of his friend and former CIA colleague, Jesuit priest Pat West, in Indonesia. Meanwhile, the Campus searches for next-generation AI technology before the Chinese military can use it for nefarious purposes.
Plot summary
In Indonesia, software engineer Geoff Noonan secretly sells the computer program Calliope to Suparman Games, an Indonesian gaming company. Soon after, he is lured into a honey trap by Wu Chao, an agent working for the cyber warfare division of the Chinese PLA. Along with his assassin Kang, Chao blackmails him into giving them a copy of Calliope, intending to exploit its next-generation AI capabilities to hack into American military computer systems.
The next day, Noonan turns to Jesuit priest Pat West for help; however, Chao and Kang catch up to them. They murder Noonan and have the priest arrested by the Indonesian police on fabricated charges of blasphemy against Islam. However, Father West, a former CIA officer, manages to send a private text message about his encounter with Noonan to his friend, U.S. President Jack Ryan.
Upon receiving the text message from Father West, President Ryan discreetly orders the Campus to investigate it. After pleading with the Indonesian president for Father West's release to no avail, he decides to make a state visit to Indonesia. Meanwhile, the priest is speedily convicted of smuggling heroin and sentenced to death on the country's Execution Island.
The Campus tracks down Noonan’s colleague Todd Ackerman, who is in hiding in New Zealand. However, they find him dead, murdered by PLA agents. After finding out about the sale, they proceed to Indonesia to break into the headquarters of Suparman Games and retrieve the purchased copy of Calliope. However, Suparman’s henchmen abduct Campus operative Domingo “Ding” Chavez.
In China, General Bai Min prepares FIRESHIP, a military operation that aims to utilize Calliope. The program is uploaded by a Chinese agent into an American communications company’s computer system. Afterwards, Chao and Kang attempt to assassinate Peter Li, an ex-Navy admiral working for the company, as well as his family. However, he fights back by killing Chao and another henchman, and flees with his pregnant wife and children. He later enlists his friend John Clark's help.
President Ryan receives intelligence on Chinese general Song Biming’s granddaughter, who is suffering from retinoblastoma and is about to be brought to the United States for a surgical operation. He reluctantly allows his wife Cathy Ryan to covertly make contact with the general, who is known to be at odds with General Bai. The operation is a success, and afterwards General Song discreetly passes information on FIRESHIP to Dr. Ryan.
In Indonesia, the Campus rescues Chavez from his captors and retrieves Calliope from the Suparman headquarters. However, they are separated when Suparman's henchmen chase them to an airfield. With Calliope in tow, Chavez and colleague Adara Sherman hijack an aircraft smuggling heroin, crashing into a nearby island. They call for air support from , which eventually rescues them. Chavez then informs ship captain Jimmy Akana about Calliope.
Meanwhile, senator Michelle Chadwick finds herself blackmailed by her lover, PLA operative David Huang, into spying on her political rival President Ryan. She eventually informs President Ryan and joins him on his trip to Indonesia, where he tells the Indonesian president of China's plans. The latter releases Father West from prison, and General Bai and Huang are eventually arrested by their respective governments.
After embedding itself into several military communications systems, Calliope makes contact with an American F-35 stealth fighter in the middle of a training exercise in the Pacific Ocean. The application launches a cruise missile from the aircraft, steering it toward a waiting Chinese trawler. , having been informed of Calliope's capabilities by Akana, quickly detects the application and deletes it from its computer system. Navy SEALs then retake the trawler with the stolen missile.
In Chicago, Clark lures Kang and his men to an ambush. However, a wounded Kang escapes and boards a train for Los Angeles. Clark follows and tracks him down, killing him.
Characters
United States government
Jack Ryan: President of the United States
Mary Pat Foley: director of national intelligence
Arnold "Arnie" van Damm: President Ryan's chief of staff
Scott Adler: secretary of state
The Campus
Gerry Hendley: director of The Campus and Hendley Associates
John Clark: director of operations
Domingo "Ding" Chavez: assistant director of operations
Jack Ryan, Jr.: operations officer / senior analyst
Dominic "Dom" Caruso: operations officer
Adara Sherman: operations officer
Bartosz "Midas" Jankowski: operations officer
Gavin Biery: director of information technology
Other characters
United States
Dr. Caroline "Cathy" Ryan: First Lady of the United States
Dr. Dan Berryhill: former medical school classmate of Dr. Ryan
Peter Li: retired admiral, United States Navy
Michelle Chadwick: United States senator
Indonesia
Gunawan "Gugun" Gumelar: president of Indonesia
Geoff Noonan: gaming software engineer
Suparman: owner, Suparman Games
China
Zhao Chengzhi: president of China
David Huang: Chinese operative
General Song Biming: PRC military officer
General Bai Min: PRC military officer
Major Chang: Bai's aide
Wu Chao: PLA major / operative, Central Military Commission
Kang: Chinese assassin
Tsai Zhan: Communist Party minder
Reception
Commercial
The book debuted at number six on the Combined Print and E-Book Fiction and Hardcover Fiction categories of the New York Times bestseller list for the week of December 7, 2019.
Critical
The book received positive reviews. Thriller novel reviewer The Real Book Spy praised it, saying: "Marc Cameron has outdone himself once again, delivering the kind of fast-paced, original, true-to-the-characters thriller that Clancy’s fans have long devoured." Publishers Weekly's verdict on the novel is that "The plot unreels smoothly as it always does with Cameron at the helm. Readers will look forward to the further adventures of Ryan father and son." In a mixed review, Kirkus Reviews pointed out that while "the story is fun, as the Clancy yarns always are...some backstories feel like filler necessary to reach 500 pages."
References
Political thriller novels
2019 American novels
American thriller novels
Techno-thriller novels
Ryanverse
Novels set in Indonesia
|
468313
|
https://en.wikipedia.org/wiki/Mask%20%28computing%29
|
Mask (computing)
|
In computer science, a mask or bitmask is data that is used for bitwise operations, particularly in a bit field. Using a mask, multiple bits in a byte, nibble, word, etc. can be set either on or off, or inverted from on to off (or vice versa) in a single bitwise operation. An additional use of masking involves predication in vector processing, where the bitmask is used to select which element operations in the vector are to be executed (mask bit is enabled) and which are not (mask bit is clear).
Common bitmask functions
Masking bits to 1
To turn certain bits on, the bitwise OR operation can be used, following the principle that Y OR 1 = 1 and Y OR 0 = Y. Therefore, to make sure a bit is on, OR can be used with a 1. To leave a bit unchanged, OR is used with a 0.
Example: Masking on the higher nibble (bits 4, 5, 6, 7) while leaving the lower nibble (bits 0, 1, 2, 3) unchanged.
10010101 10100101
OR 11110000 11110000
= 11110101 11110101
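The same masking step can be written in C; a minimal sketch (the variable names are illustrative, not from any particular API):
#include <stdint.h>
#include <stdio.h>
int main(void) {
    uint8_t value  = 0x95;          /* 10010101 */
    uint8_t mask   = 0xF0;          /* 11110000: the bits to force on */
    uint8_t result = value | mask;  /* OR turns the masked bits on */
    printf("%02X\n", (unsigned)result);  /* prints F5, i.e. 11110101 */
    return 0;
}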
Masking bits to 0
More often in practice, bits are "masked off" (or masked to 0) than "masked on" (or masked to 1). When a bit is ANDed with a 0, the result is always 0, i.e. Y AND 0 = 0. To leave the other bits as they were originally, they can be ANDed with 1, since Y AND 1 = Y.
Example: Masking off the higher nibble (bits 4, 5, 6, 7) while leaving the lower nibble (bits 0, 1, 2, 3) unchanged.
10010101 10100101
AND 00001111 00001111
= 00000101 00000101
Querying the status of a bit
It is possible to use bitmasks to easily check the state of individual bits regardless of the other bits. To do this, all the other bits are turned off using the bitwise AND, as discussed above, and the value is compared with 0. If it is equal to 0, then the bit was off, but if the value is any other value, then the bit was on. What makes this convenient is that it is not necessary to figure out what the value actually is, just that it is not 0.
Example: Querying the status of the 4th bit
10011101 10010101
AND 00001000 00001000
= 00001000 00000000
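In C, the same test is commonly written as an AND followed by a comparison with zero; a minimal sketch with illustrative names:
#include <stdint.h>
#include <stdio.h>
int main(void) {
    uint8_t value    = 0x9D;   /* 10011101 */
    uint8_t bit_mask = 0x08;   /* 00001000: selects the 4th bit */
    if ((value & bit_mask) != 0)
        printf("bit is on\n");   /* taken for 10011101 */
    else
        printf("bit is off\n");  /* would be taken for 10010101 */
    return 0;
}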
Toggling bit values
So far the article has covered how to turn bits on and turn bits off, but not both at once. Sometimes it does not really matter what the value is, but it must be made the opposite of what it currently is. This can be achieved using the XOR (exclusive or) operation. XOR returns 1 if and only if an odd number of bits are 1. Therefore, if two corresponding bits are 1, the result will be a 0, but if only one of them is 1, the result will be 1. Therefore inversion of the values of bits is done by XORing them with a 1. If the original bit was 1, it returns 1 XOR 1 = 0. If the original bit was 0 it returns 0 XOR 1 = 1. Also note that XOR masking is bit-safe, meaning that it will not affect unmasked bits because Y XOR 0 = Y, just like an OR.
Example: Toggling bit values
10011101 10010101
XOR 00001111 11111111
= 10010010 01101010
To write arbitrary 1s and 0s to a subset of bits, first clear that subset to 0s, then OR in the new value:
register = (register & ~bitmask) | value;
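Taken together, the common idioms look like this in C; a minimal sketch on a hypothetical 8-bit register value (the names and constants are illustrative):
#include <stdint.h>
#include <stdio.h>
int main(void) {
    uint8_t reg = 0x95;                    /* 10010101 */
    reg |= 0x01;                           /* set bit 0 */
    reg &= (uint8_t)~0x02;                 /* clear bit 1 */
    reg ^= 0x04;                           /* toggle bit 2 */
    /* Replace bits 4-7: clear that subset, then OR in the new value. */
    uint8_t bitmask = 0xF0;
    uint8_t value   = 0x50;
    reg = (uint8_t)((reg & (uint8_t)~bitmask) | value);
    printf("%02X\n", (unsigned)reg);       /* final register value */
    return 0;
}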
Uses of bitmasks
Arguments to functions
In programming languages such as C, bit fields are a useful way to pass a set of named boolean arguments to a function. For example, in the graphics API OpenGL, there is a command, glClear(), which clears the screen or other buffers. It can clear up to four buffers (the color, depth, accumulation, and stencil buffers), so the API authors could have had it take four arguments. But then a call to it would look like
glClear(1,1,0,0); // This is not how glClear actually works and would make for unstable code.
which is not very descriptive. Instead there are four defined field bits, GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT, GL_ACCUM_BUFFER_BIT, and GL_STENCIL_BUFFER_BIT and glClear() is declared as
void glClear(GLbitfield bits);
Then a call to the function looks like this
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
Internally, a function taking a bitfield like this can use binary and to extract the individual bits. For example, an implementation of glClear() might look like:
void glClear(GLbitfield bits) {
if ((bits & GL_COLOR_BUFFER_BIT) != 0) {
// Clear color buffer.
}
if ((bits & GL_DEPTH_BUFFER_BIT) != 0) {
// Clear depth buffer.
}
if ((bits & GL_ACCUM_BUFFER_BIT) != 0) {
// Clear accumulation buffer.
}
if ((bits & GL_STENCIL_BUFFER_BIT) != 0) {
// Clear stencil buffer.
}
}
The advantage to this approach is that function argument overhead is decreased. Since the minimum datum size is one byte, separating the options into separate arguments would be wasting seven bits per argument and would occupy more stack space. Instead, functions typically accept one or more 32-bit integers, with up to 32 option bits in each. While elegant, in the simplest implementation this solution is not type-safe. A GLbitfield is simply defined to be an unsigned int, so the compiler would allow a meaningless call to glClear(42) or even glClear(GL_POINTS). In C++, an alternative would be to create a class that encapsulates the set of arguments glClear could accept; such a class could be cleanly encapsulated in a library.
Inverse masks
Masks are used with IP addresses in IP ACLs (Access Control Lists) to specify what should be permitted and denied. To configure IP addresses on interfaces, masks start with 255 and have the large values on the left side: for example, an IP address with a 255.255.255.0 mask. Masks for IP ACLs are the reverse: for example, the mask 0.0.0.255. This is sometimes called an inverse mask or a wildcard mask. When the value of the mask is broken down into binary (0s and 1s), the results determine which address bits are to be considered in processing the traffic. A 0-bit indicates that the address bit must be considered (exact match); a 1-bit in the mask is a "don't care". The following example further explains the concept.
Mask example:
network address (traffic that is to be processed): 192.0.2.0
mask: 0.0.0.255
network address (binary): 11000000.00000000.00000010.00000000
mask (binary): 00000000.00000000.00000000.11111111
Based on the binary mask, it can be seen that the first three sets (octets) must match the given binary network address exactly (11000000.00000000.00000010). The last set of numbers is made of "don't cares" (.11111111). Therefore, all traffic that begins with "192.0.2." matches, since the last octet is "don't care". With this mask, network addresses 192.0.2.0 through 192.0.2.255 are processed.
Subtract the normal mask from 255.255.255.255 in order to determine the ACL inverse mask. In this example, the inverse mask is determined for network address 192.0.2.0 with a normal mask of 255.255.255.0.
255.255.255.255 - 255.255.255.0 (normal mask) = 0.0.0.255 (inverse mask)
ACL equivalents
A source/source-wildcard of 0.0.0.0/255.255.255.255 means "any".
A source/wildcard with a wildcard of 0.0.0.0 matches exactly one address and is equivalent to using the "host" keyword with that address.
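A minimal sketch of this arithmetic in C, using the example values above (the sample host address 192.0.2.254 and the variable names are illustrative): the inverse (wildcard) mask is simply the bitwise complement of the normal mask, and an address matches when all the bits outside the wildcard agree with the network address.
#include <stdint.h>
#include <stdio.h>
int main(void) {
    uint32_t normal_mask = 0xFFFFFF00u;   /* 255.255.255.0 */
    uint32_t wildcard    = ~normal_mask;  /* 0.0.0.255, the inverse mask */
    uint32_t network     = 0xC0000200u;   /* 192.0.2.0 */
    uint32_t address     = 0xC00002FEu;   /* 192.0.2.254 */
    /* A 1 bit in the wildcard means "don't care", so compare only the other bits. */
    if ((address & ~wildcard) == (network & ~wildcard))
        printf("address matches this ACL entry\n");
    return 0;
}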
Image masks
In computer graphics, when a given image is intended to be placed over a background, the transparent areas can be specified through a binary mask. This way, for each intended image there are actually two bitmaps: the actual image, in which the unused areas are given a pixel value with all bits set to 0s, and an additional mask, in which the correspondent image areas are given a pixel value of all bits set to 0s and the surrounding areas a value of all bits set to 1s. In the sample at right, black pixels have the all-zero bits and white pixels have the all-one bits.
At run time, to put the image on the screen over the background, the program first masks the screen pixel's bits with the image mask at the desired coordinates using the bitwise AND operation. This preserves the background pixels of the transparent areas while resetting to zero the bits of the pixels which will be obscured by the overlapped image.
Then, the program renders the image pixel's bits by combining them with the background pixel's bits using the bitwise OR operation. This way, the image pixels are appropriately placed while keeping the background surrounding pixels preserved. The result is a perfect composite of the image over the background.
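A minimal sketch of this AND-then-OR compositing in C, on 8-bit pixels (the buffer names and pixel format are illustrative assumptions):
#include <stdint.h>
#include <stddef.h>
/* Composite a sprite over the screen using a binary mask.
   mask[i] is all ones (0xFF) where the background should remain visible
   and all zeros (0x00) where the image will be drawn; the sprite itself
   holds zeros in its unused (transparent) areas. */
void blit_masked(uint8_t *screen, const uint8_t *sprite,
                 const uint8_t *mask, size_t npixels) {
    for (size_t i = 0; i < npixels; i++) {
        screen[i] &= mask[i];    /* AND: clear the pixels the image will cover */
        screen[i] |= sprite[i];  /* OR: draw the image into the cleared area */
    }
}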
This technique is used for painting pointing device cursors, in typical 2-D videogames for characters, bullets and so on (the sprites), for GUI icons, and for video titling and other image mixing applications.
Although related (due to being used for the same purposes), transparent colors and alpha channels are techniques which do not involve mixing the image pixels through binary masking.
Hash tables
To create a hashing function for a hash table, often a function is used that has a large domain. To create an index from the output of the function, a modulo can be taken to reduce the size of the domain to match the size of the array; however, it is often faster on many processors to restrict the size of the hash table to powers of two sizes and use a bitmask instead.
An example of both modulo and masking in C:
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
// A placeholder hash with a large output domain (FNV-1a style);
// any well-distributed hash function could be used here.
static uint32_t hash_function(const char *key, size_t length, uint32_t num_buckets) {
    uint32_t hash = 2166136261u;
    for (size_t i = 0; i < length; i++) {
        hash ^= (uint8_t)key[i];
        hash *= 16777619u;
    }
    return hash % num_buckets;
}
int main(void) {
    const uint32_t NUM_BUCKETS = 0xFFFFFFFF; // 2^32 - 1
    const uint32_t MAX_RECORDS = 1 << 10; // 2^10
    const uint32_t HASH_BITMASK = 0x3FF; // (2^10)-1
    // Allocate one slot per record for the table.
    char **token_array = calloc(MAX_RECORDS, sizeof *token_array);
    if (token_array == NULL)
        return 1;
    char token[] = "some hashable value";
    uint32_t hashed_token = hash_function(token, strlen(token), NUM_BUCKETS);
    // Using modulo
    size_t index = hashed_token % MAX_RECORDS;
    // Using bitmask (equivalent, since MAX_RECORDS is a power of two)
    index = hashed_token & HASH_BITMASK;
    token_array[index] = token;
    free(token_array);
    return 0;
}
See also
Affinity mask
Binary-coded decimal
Bit field
Bit manipulation
Bitwise operation
Subnetwork
Tagged pointer
umask
References
Binary arithmetic
Articles with example C code
|
1795571
|
https://en.wikipedia.org/wiki/Calling%20convention
|
Calling convention
|
In computer science, a calling convention is an implementation-level (low-level) scheme for how subroutines receive parameters from their caller and how they return a result. Differences in various implementations include where parameters, return values, return addresses and scope links are placed (registers, stack or memory etc.), and how the tasks of preparing for a function call and restoring the environment afterwards are divided between the caller and the callee.
Calling conventions may be related to a particular programming language's evaluation strategy, but most often are not considered part of it (or vice versa), as the evaluation strategy is usually defined on a higher abstraction level and seen as a part of the language rather than as a low-level implementation detail of a particular language's compiler.
Variations
Calling conventions may differ in:
Where parameters, return values and return addresses are placed (in registers, on the call stack, a mix of both, or in other memory structures)
For parameters passed in memory, the order in which actual arguments for formal parameters are passed (or the parts of a large or complex argument)
How a (possibly long or complex) return value is delivered from the callee back to the caller (on the stack, in a register, or within the heap)
How the task of setting up for and cleaning up after a function call is divided between the caller and the callee
Whether and how metadata describing the arguments is passed
Where the previous value of the frame pointer is stored, which is used to restore the frame pointer when the routine ends (in the stack frame, or in some register)
Where any static scope links for the routine's non-local data access are placed (typically at one or more positions in the stack frame, but sometimes in a general register, or, for some architectures, in special-purpose registers)
How local variables are allocated can sometimes also be part of the calling convention (when the caller allocates for the callee)
In some cases, differences also include the following:
Conventions on which registers may be directly used by the callee, without being preserved
Which registers are considered to be volatile and, if volatile, need not be restored by the callee
Many architectures have only one widely used calling convention, often suggested by the architect. For RISCs including SPARC, MIPS, and RISC-V, register names based on this calling convention are often used. For example, MIPS registers $4 through $7 have "ABI names" $a0 through $a3, reflecting their use for parameter passing in the standard calling convention. (RISC CPUs have many equivalent general-purpose registers, so there is typically no hardware reason for giving them names other than numbers.)
Although some programming languages may partially specify the calling sequence in the language specification, or in a pivotal implementation, different implementations of such languages (i.e. different compilers) may still use various calling conventions, and an implementation may offer a choice of more than one calling convention. Reasons for this are performance, frequent adaptation to the conventions of other popular languages, with or without technical reasons, and restrictions or conventions imposed by various "computing platforms".
Architectures
x86 (32-bit)
The x86 architecture is used with many different calling conventions. Due to the small number of architectural registers, and historical focus on simplicity and small code-size, many x86 calling conventions pass arguments on the stack. The return value (or a pointer to it) is returned in a register. Some conventions use registers for the first few parameters which may improve performance, especially for short and simple leaf-routines very frequently invoked (i.e. routines that do not call other routines).
Example call:
push EAX ; pass some register result
push dword [EBP+20] ; pass some memory variable (FASM/TASM syntax)
push 3 ; pass some constant
call calc ; the returned result is now in EAX
Typical callee structure: (some or all (except ret) of the instructions below may be optimized away in simple procedures). Some conventions leave the parameter space allocated, using a plain ret instead of ret paramsize. In that case, the caller could execute add ESP,12 after the call in this example, or otherwise deal with the change to ESP.
calc:
push EBP ; save old frame pointer
mov EBP,ESP ; get new frame pointer
sub ESP,localsize ; reserve stack space for locals
.
. ; perform calculations, leave result in EAX
.
mov ESP,EBP ; free space for locals
pop EBP ; restore old frame pointer
ret paramsize ; free parameter space and return.
ARM (A32)
The standard 32-bit ARM calling convention allocates the 15 general-purpose registers as:
r15: Program counter (as per the instruction set specification).
r14: Link register. The BL instruction, used in a subroutine call, stores the return address in this register.
r13: Stack pointer. The Push/Pop instructions in "Thumb" operating mode use this register only.
r12: Intra-Procedure-call scratch register.
r4 to r11: Local variables.
r0 to r3: Argument values passed to a subroutine and results returned from a subroutine.
If the type of value returned is too large to fit in r0 to r3, or if its size cannot be determined statically at compile time, then the caller must allocate space for that value at run time and pass a pointer to that space in r0.
Subroutines must preserve the contents of r4 to r11 and the stack pointer (perhaps by saving them to the stack in the function prologue, then using them as scratch space, then restoring them from the stack in the function epilogue). In particular, subroutines that call other subroutines must save the return address in the link register r14 to the stack before calling those other subroutines. However, such subroutines do not need to return that value to r14—they merely need to load that value into r15, the program counter, to return.
The ARM calling convention mandates using a full-descending stack.
This calling convention causes a "typical" ARM subroutine to:
In the prologue, push r4 to r11 to the stack, and push the return address in r14 to the stack (this can be done with a single STM instruction);
Copy any passed arguments (in r0 to r3) to the local scratch registers (r4 to r11);
Allocate other local variables to the remaining local scratch registers (r4 to r11);
Do calculations and call other subroutines as necessary using BL, assuming r0 to r3, r12 and r14 will not be preserved;
Put the result in r0;
In the epilogue, pull r4 to r11 from the stack, and pull the return address to the program counter r15. This can be done with a single LDM instruction.
ARM (A64)
The 64-bit ARM (AArch64) calling convention allocates the 31 general-purpose registers as:
x31 (SP): Stack pointer or a zero register, depending on context.
x30 (LR): Procedure link register, used to return from subroutines.
x29 (FP): Frame pointer.
x19 to x29: Callee-saved.
x18 (PR): Platform register. Used for some operating-system-specific special purpose, or an additional caller-saved register.
x16 (IP0) and x17 (IP1): Intra-Procedure-call scratch registers.
x9 to x15: Local variables, caller saved.
x8 (XR): Indirect return value address.
x0 to x7: Argument values passed to and results returned from a subroutine.
All registers starting with x have a corresponding 32-bit register prefixed with w. Thus, a 32-bit x0 is called w0.
Similarly, the 32 floating-point registers are allocated as:
v0 to v7: Argument values passed to and results returned from a subroutine.
v8 to v15: callee-saved, but only the bottom 64 bits need to be preserved.
v16 to v31: Local variables, caller saved.
PowerPC
The PowerPC architecture has a large number of registers so most functions can pass all arguments in registers for single level calls. Additional arguments are passed on the stack, and space for register-based arguments is also always allocated on the stack as a convenience to the called function in case multi-level calls are used (recursive or otherwise) and the registers must be saved. This is also of use in variadic functions, such as , where the function's arguments need to be accessed as an array. A single calling convention is used for all procedural languages.
MIPS
The O32 ABI is the most commonly used MIPS ABI, owing to its status as the original System V ABI for MIPS. It is strictly stack-based, with only four registers available to pass arguments. This perceived slowness, along with an antique floating-point model with only 16 registers, has encouraged the proliferation of many other calling conventions. The ABI took shape in 1990 and has not been updated since 1994. It is defined only for 32-bit MIPS, but GCC has created a 64-bit variation called O64.
For 64-bit, the N64 ABI (not related to the Nintendo 64) by Silicon Graphics is most commonly used. The most important improvement is that eight registers are now available for argument passing; it also increases the number of floating-point registers to 32. There is also an ILP32 version called N32, which uses 32-bit pointers for smaller code, analogous to the x32 ABI. Both run under the 64-bit mode of the CPU.
A few attempts have been made to replace O32 with a 32-bit ABI that resembles N32 more. A 1995 conference came up with MIPS EABI, for which the 32-bit version was quite similar. EABI inspired MIPS Technologies to propose a more radical "NUBI" ABI that additionally reuses argument registers for the return value. MIPS EABI is supported by GCC but not LLVM; neither supports NUBI.
For all of O32 and N32/N64, the return address is stored in a register. This is automatically set with the use of the jal (jump and link) or jalr (jump and link register) instructions. The stack grows downwards.
SPARC
The SPARC architecture, unlike most RISC architectures, is built on register windows. There are 24 accessible registers in each register window: 8 are the "in" registers (%i0-%i7), 8 are the "local" registers (%l0-%l7), and 8 are the "out" registers (%o0-%o7). The "in" registers are used to pass arguments to the function being called, and any additional arguments need to be pushed onto the stack. However, space is always allocated by the called function to handle a potential register window overflow, local variables, and (on 32-bit SPARC) returning a struct by value. To call a function, one places the arguments for the function to be called in the "out" registers; when the function is called, the "out" registers become the "in" registers and the called function accesses the arguments in its "in" registers. When the called function completes, it places the return value in the first "in" register, which becomes the first "out" register when the called function returns.
The System V ABI, which most modern Unix-like systems follow, passes the first six arguments in "in" registers %i0 through %i5, reserving %i6 for the frame pointer and %i7 for the return address.
IBM System/360 and successors
The IBM System/360 is another architecture without a hardware stack. The examples below illustrate the calling convention used by OS/360 and successors prior to the introduction of 64-bit z/Architecture; other operating systems for System/360 might have different calling conventions.
Calling program:
LA 1,ARGS Load argument list address
L 15,=A(SUB) Load subroutine address
BALR 14,15 Branch to called routine (1)
...
ARGS DC A(FIRST) Address of 1st argument
DC A(SECOND)
...
DC A(THIRD)+X'80000000' Last argument (2)
Called program:
SUB EQU * This is the entry point of the subprogram
Standard entry sequence:
USING *,15 (3)
STM 14,12,12(13) Save registers (4)
ST 13,SAVE+4 Save caller's savearea addr
LA 12,SAVE Chain saveareas
ST 12,8(13)
LR 13,12
...
Standard return sequence:
L 13,SAVE+4 (5)
LM 14,12,12(13)
L 15,RETVAL (6)
BR 14 Return to caller
SAVE DS 18F Savearea (7)
Notes:
1. The BALR instruction stores the address of the next instruction (the return address) in the register specified by the first argument (register 14) and branches to the address held in register 15, the second argument.
2. The caller passes the address of a list of argument addresses in register 1. The last address has the high-order bit set to indicate the end of the list. This limits programs using this convention to 31-bit addressing.
3. The address of the called routine is in register 15. Normally this is loaded into another register and register 15 is not used as a base register.
4. The STM instruction saves registers 14, 15, and 0 through 12 in a 72-byte area provided by the caller, called a save area, pointed to by register 13. The called routine provides its own save area for use by the subroutines it calls; the address of this area is normally kept in register 13 throughout the routine. The instructions following the STM update the forward and backward chains linking this save area to the caller's save area.
5. The return sequence restores the caller's registers.
6. Register 15 is usually used to pass a return value.
7. Declaring a save area statically in the called routine makes it non-reentrant and non-recursive; a reentrant program uses a dynamic save area, acquired either from the operating system and freed upon returning, or in storage passed by the calling program.
In the System/390 ABI and the z/Architecture ABI, used in Linux:
Registers 0 and 1 are volatile
Registers 2 and 3 are used for parameter passing and return values
Registers 4 and 5 are also used for parameter passing
Register 6 is used for parameter passing, and must be saved and restored by the callee
Registers 7 through 13 are for use by the callee, and must be saved and restored by them
Register 14 is used for the return address
Register 15 is used as the stack pointer
Floating-point registers 0 and 2 are used for parameter passing and return values
Floating-point registers 4 and 6 are for use by the callee, and must be saved and restored by them
In z/Architecture, floating-point registers 1, 3, 5, and 7 through 15 are for use by the callee
Access register 0 is reserved for system use
Access registers 1 through 15 are for use by the callee
SuperH
Note: "preserved" reserves to callee saving; same goes for "guaranteed".
68k
The most common calling convention for the Motorola 68000 series is:
d0, d1, a0 and a1 are scratch registers
All other registers are callee-saved
a6 is the frame pointer, which can be disabled by a compiler option
Parameters are pushed onto the stack, from right to left
Return value is stored in d0
IBM 1130
The IBM 1130 was a small 16-bit word-addressable machine. It had only six registers plus condition indicators, and no stack. The registers are the Instruction Address Register (IAR), the Accumulator (ACC), the Accumulator Extension (EXT), and three index registers X1–X3. The calling program is responsible for saving ACC, EXT, X1, and X2. There are two pseudo-operations for calling subroutines: one to code non-relocatable subroutines directly linked with the main program, and one to call relocatable library subroutines through a transfer vector. Both pseudo-ops resolve to a Branch and Store IAR () machine instruction that stores the address of the next instruction at its effective address (EA) and branches to EA+1.
Arguments follow the call; usually these are one-word addresses of arguments, so the called routine must know how many arguments to expect in order to skip over them on return. Alternatively, arguments can be passed in registers. Function routines returned the result in ACC for real arguments, or in a memory location referred to as the Real Number Pseudo-Accumulator (FAC). Arguments and the return address were addressed using an offset to the IAR value stored in the first location of the subroutine.
* 1130 subroutine example
ENT SUB Declare "SUB" an external entry point
SUB DC 0 Reserved word at entry point, conventionally coded "DC *-*"
* Subroutine code begins here
* If there were arguments, the addresses can be loaded indirectly from the return address
LDX I 1 SUB Load X1 with the address of the first argument (for example)
...
* Return sequence
LD RES Load integer result into ACC
* If no arguments were provided, indirect branch to the stored return address
B I SUB If no arguments were provided
END SUB
Subroutines in IBM 1130, CDC 6600 and PDP-8 (all three computers were introduced in 1965) store the return address in the first location of a subroutine.
Implementation considerations
This variability must be considered when combining modules written in multiple languages, or when calling operating system or library APIs from a language other than the one in which they are written; in these cases, special care must be taken to coordinate the calling conventions used by caller and callee. Even a program using a single programming language may use multiple calling conventions, either chosen by the compiler, for code optimization, or specified by the programmer.
Threaded code
Threaded code places all the responsibility for setting up for and cleaning up after a function call on the called code. The calling code does nothing but list the subroutines to be called. This puts all the function setup and clean-up code in one place—the prologue and epilogue of the function—rather than in the many places that function is called. This makes threaded code the most compact calling convention.
Threaded code passes all arguments on the stack. All return values are returned on the stack. This makes naive implementations slower than calling conventions that keep more values in registers. However, threaded code implementations that cache several of the top stack values in registers—in particular, the return address—are usually faster than subroutine calling conventions that always push and pop the return address to the stack.
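A minimal sketch of the idea in C, using a list of routines that share a data stack (the toy "program" and the names are illustrative, not any particular threaded-code system):
#include <stdio.h>
#include <stddef.h>
/* Shared data stack: every routine takes its operands from it and leaves results on it. */
static int stack[16];
static int sp;
static void push2(void)     { stack[sp++] = 2; }
static void push3(void)     { stack[sp++] = 3; }
static void add(void)       { int b = stack[--sp]; stack[sp - 1] += b; }
static void print_top(void) { printf("%d\n", stack[--sp]); }
int main(void) {
    /* The "calling code" is nothing but a list of the subroutines to be called. */
    void (*program[])(void) = { push2, push3, add, print_top };
    for (size_t i = 0; i < sizeof program / sizeof program[0]; i++)
        program[i]();   /* prints 5 */
    return 0;
}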
PL/I
The default calling convention for programs written in the PL/I language passes all arguments by reference, although other conventions may optionally be specified. The arguments are handled differently for different compilers and platforms, but typically the argument addresses are passed via an argument list in memory. A final, hidden, address may be passed pointing to an area to contain the return value. Because of the wide variety of data types supported by PL/I a data descriptor may also be passed to define, for example, the lengths of character or bit strings, the dimension and bounds of arrays (dope vectors), or the layout and contents of a data structure. Dummy arguments are created for arguments which are constants or which do not agree with the type of argument the called procedure expects.
See also
Application binary interface
Application programming interface
Comparison of application virtual machines
Continuation-passing style
Foreign function interface
Language binding
Name mangling
Spaghetti stack
SWIG
Tail call optimization
References
External links
Introduction to assembly on the PowerPC
Mac OS X ABI Function Call Guide
Procedure Call Standard for the ARM Architecture
Embedded Programming with the GNU Toolchain, Section 10. C Startup
Subroutines
|
11420401
|
https://en.wikipedia.org/wiki/Ciber
|
Ciber
|
Ciber Global, now a part of HTC Global Services, is a global information technology consulting, services and outsourcing company with commercial clients.
The company was founded in Detroit, Michigan, in 1974, under the name "Consultants in Business Engineering Research" (Ciber).
In May 2017, HTC Global Services acquired Ciber.
History
Founding
Ciber was founded in 1974 by three individuals, one of whom would remain with the company and guide its fortunes for its crucial first two decades. Of the three original founders of Ciber, Bobby G. Stevenson emerged as the key figure in Ciber's history, shaping a start-up computer consulting firm into a leading national force by the 1990s, when the computer consulting industry was generating more than $30 billion worth of business a year. A graduate of Texas Tech University, Stevenson spent the years between his formal education and the formation of Ciber working as a programmer analyst for International Business Machines Corporation (IBM) and LTV Steel in Houston. By the early 1970s, when Stevenson was in his early 30s, he and two other colleagues decided to make a go of it on their own and organized Ciber, an acronym for "consultants in business, engineering, and research."
At the time, Stevenson and Ciber's other co-founders perceived a need in the corporate world for specialized, technical assistance in keeping pace with the technological advances in computer hardware and computer software. The trio saw an opportunity to provide contract computer consulting services to clients lacking either in the resources or the expertise to use the promising power of computers in their day-to-day operations. Through Ciber, the founders tapped into a market that would grow explosively in the decades ahead. Few realized at the time how important computers would become to the business world. As the use of computers increased and wave after wave of computer innovations swept away yesterday's technological vanguard, the need for sophisticated service firms like Ciber to implement the frequently indecipherable technology of tomorrow grew exponentially.
Although Ciber entered the business of computer consulting services at a relatively early time, the company's physical and financial growth did not mirror the growth of its industry. Ciber grew at a modest pace initially, then embraced a new business strategy during the mid-1980s that ignited prolific growth. Stevenson watched over Ciber during both of the company's two eras, heading the company during its contrastingly slower period of growth and leading the charge during its decided rise during the 1990s.
During Ciber's inaugural year of business, Stevenson served as the company's vice-president in charge of recruiting and managing the fledgling firm's technical staff, a post he would occupy until November 1977, when he was named Ciber's chief executive officer after the tragic accidental death of the CEO and Co-founder, Richard L. Ezinga. From late 1977 into the 1990s, Stevenson was responsible for all of Ciber's operations. At first, Stevenson and the two other co-founders targeted their consulting services exclusively to the automotive industry, establishing Ciber's first office in the hotbed of automotive production in the United States, Detroit, Michigan. Ciber did not remain wedded to the automotive industry for long, however. A few short years after its formation, Ciber began tailoring its services to the oil and gas industry as well, a move that occurred at roughly the same time as the company's geographic expansion. Two years after the company opened its doors in Detroit, an office in Phoenix was opened. A year later, in 1977, an office was established in Houston. A Denver office was opened in 1979, followed by the opening of a Dallas office in 1980 and an Atlanta office in 1987. The following year, Ciber relocated its corporate headquarters to Englewood, Colorado. While executive officers circulated throughout Ciber's Englewood facility, the company embarked on the most prolific growth period in its history to that point.
1980s
A year after the move to Englewood and 15 years after its founding, Ciber competed in the burgeoning industry of computer consulting services as a minor player. Total sales in 1989 amounted to a mere $13 million, small change when compared with the revenue volume generated by the country's leading computer consulting firms. By this point, however, Ciber executives were plotting an era of dramatic growth for their company. During the mid-1980s, Stevenson and other Ciber executives adopted a new growth strategy that focused on the development of a new range of services and the realization of both physical and financial growth through the acquisition of established computer consulting firms. Although the strategy embraced during the mid-1980s would take half a decade to manifest itself in any meaningful way, once the strategy for the future began to take shape in a tangible form, Ciber began its resolute rise to the upper echelon of its industry.
By the end of 1989, when annual sales had slipped past the $10 million mark, the plans formulated midway through the decade moved from the drawing board to implementation. Ciber's expansion in 1990 included the opening of offices in Cleveland, Orlando, and Tampa, moves that were associated with the development of new clientele in the telecommunications industry. As Ciber focused its marketing efforts toward telecommunications providers during the early 1990s, securing contracts with industry giants such as AT&T, GTE, and U.S. West Communications, Inc., the company found itself occupying fertile ground in the computer consulting market. Not only were computers and their technology becoming increasingly sophisticated, progressing at a pace that demanded the help of experts such as Ciber's consultants, but the shifting dynamics of the corporate world also favored companies like Ciber.
The early years of the 1990s were marked by a national economic recession that forced many of the country's corporations to alter their business strategies. As business declined and profit margins shrank, downsizing became the mantra of business leaders from coast to coast. Payroll was trimmed, entire departments were cut from corporate budgets, and, as a consequence, many companies found themselves lacking the resources and skills to perform certain tasks in-house, creating a greater need for the specialized services offered by Ciber. To meet this demand, Ciber contracted out specialists to help the nation's largest corporations complete computer projects and cope with hardware and software problems as they arose. Ciber consultants wrote and maintained software that performed a host of chores, including inventory control, accounts payable, and customer support.
Although the conditions were ripe for rapid growth as the 1990s began, Ciber's stature at the start of the decade prohibited it to a certain degree from capturing a sizable share of the computer consulting market. The company was too small to realize the growth potential that surrounded it. Mac J. Slingerlend, who joined the company in 1989 as executive vice-president and chief financial officer before being named president and chief operating officer in 1996, reflected on Ciber's diminutive size years after the company had grown into a genuine national contender, noting, "We wanted to be a survivor. We were the smallest national player, and we needed to get larger quickly."
1990s
Getting larger quickly ranked as Ciber's chief objective during the first half of the 1990s, engendering a period of growth that lifted the company's revenue volume from the $13 million recorded when Slingerlend joined the company to more than $150 million by the time he was promoted to the twin posts of president and chief operating officer. Growth was achieved largely by purchasing established computer consulting firms, as Ciber embarked on an acquisition program that ranked it as the most active computer consulting acquirer in the nation during the first half of the 1990s. More than a dozen acquisitions were completed in six years' time, adding more than $70 million to the company's revenue base and greatly increasing the Colorado-based firm's national presence. Equally as important as the growth achieved through acquisition was the added expertise Ciber gained by swallowing up established computer consulting firms. During the 1990s, the push was on to grow larger quickly and to gain personnel that would enable Ciber to tackle more complex projects. Instead of just writing programs tailored to the specifications of its clients, Ciber executives were endeavoring to create a consulting firm that could identify problems and provide solutions, a transformation that would propel the company into the market for higher-margin services.
The majority of the acquisitions that helped Ciber expand its services and broaden its national presence were completed after the company's initial public offering of stock in March 1994. Once the company converted to public ownership (Stevenson retained control of more than 50 percent of the company's shares), acquisitions followed in steady succession. In June 1994, Ciber acquired all of the business operations of $16-million-in-sales C.P.U., Inc. for approximately $10 million. Based in Rochester, New York, C.P.U. operated as a computer consulting firm employing 190 consultants in six branch offices and served clients such as Northern Telecom and Xerox Corporation. The C.P.U. acquisition was Ciber's fifth of the decade and by far the largest. In the coming two years, as expansion picked up pace, annual sales more than doubled, and the company's net income, inflated by the move into more complex, higher-margin services, nearly quadrupled.
Following the C.P.U. acquisition, Ciber purchased Holmdel, New Jersey-based Interface Systems, Inc., a systems-consulting firm with 48 consultants and $5 million in annual revenue. Interface Systems was acquired in January 1995 and was followed by the May 1995 acquisition of Spencer & Spencer Systems, Inc., a 141-consultant, $13-million-in-sales computer programming provider with offices in St. Louis and Indianapolis. Next, in June 1995, Ciber reached across to the West Coast and acquired Concord, California-based Business Information Technology, Inc., a five-branch, 125-consultant computer consulting firm with $20 million in annual sales. A fourth acquisition was completed before the end of 1995 when Ciber purchased the Rochester, Minnesota office of Broadway & Seymour, Inc., along with its 45 consultants. By the end of 1995, sales had increased from the $79.8 million generated in 1994 to more than $120 million, and company executives were set to launch Ciber's CIBR2000 division, a venture representative of the company's desire to provide more complex, higher-margin services.
Introduced in December 1995, CIBR2000 service was designed as a solution to a potentially devastating problem with wide-ranging ramifications. Many software programs written between the 1960s and 1990s used a two-digit date format to record calendar dates, thereby rendering a host of computer calculations inaccurate after 11:59:59 p.m., December 31, 1999. Without the ability to recognize "00" as the beginning of the new century, computer programs that performed calculations related to inventory control, invoices, interest payments, pension payments, contract expirations, license and lease renewals, and myriad other tasks would generate false reports, create computer "bugs," and perhaps cause systemwide shutdowns, all under the presumption that "00" signified the year 1900. Ciber's CIBR2000 division was created to solve the dilemma posed by the century date change and represented an area of substantial growth potential for the company during the latter half of the 1990s.
At the time CIBR2000 service was being introduced, Ciber employed roughly 1,800 consultants and operated 28 branch offices scattered throughout the country. More than half of the company's sales was derived from 20 clients, including industry stalwarts such as American Express Company, AT&T, Ford Motor Company, IBM, MCI Telecommunications, Mellon Bank, Monsanto Corp., U.S. West Communications, Inc., and Xerox Corporation.
On the acquisition front, 1996 proved to be a busy year, eclipsing the achievements of 1995. In March, the company acquired Columbus, Ohio-based OASYS, Inc., a provider of contract computer programming services that gave Ciber a new geographic location in Columbus supported by 20 information technology consultants. In May 1996, Ciber acquired Practical Business Solutions, Inc., an information technology company with offices in Boston and Providence, Rhode Island. Two months later, the company completed yet another acquisition, purchasing the Business Systems Development division of DataFocus, Inc., a computer consulting firm with offices in Fairfax, Virginia and Edison, New Jersey. Not stopping there, Ciber brought another company under its corporate umbrella in September, when it acquired Spectrum Technology Group, Inc., a management consulting firm based in Somerville, New Jersey that strengthened Ciber's management consulting and project management services.
References
Sources
Washington Technology - CIBER was selected as one of the leaders in the Government solutions sector
CDC Uses CIBER-Built Solution to Alert Public Health Officials and Public Health Departments
External links
Consulting firms established in 1974
1974 establishments in Michigan
|
931106
|
https://en.wikipedia.org/wiki/CTIA%20and%20GTIA
|
CTIA and GTIA
|
Color Television Interface Adaptor (CTIA) and its successor Graphic Television Interface Adaptor (GTIA) are custom chips used in the Atari 8-bit family of computers and in the Atari 5200 home video game console. In these systems, a CTIA or GTIA chip works together with ANTIC to produce the video display. ANTIC generates the playfield graphics (text and bitmap) while CTIA/GTIA provides the color for the playfield and adds overlay objects known as player/missile graphics (sprites). Under the direction of Jay Miner, the CTIA/GTIA chips were designed by George McLeod with technical assistance of Steve Smith.
Color Television Interface Adaptor and Graphic Television Interface Adaptor are names of the chips as stated in the Atari field service manual. Various publications named the chips differently, sometimes using the alternative spelling Adapter or Graphics, or claiming that the "C" in "CTIA" stands for Colleen/Candy and "G" in "GTIA" is for George.
History
2600 and TIA
Atari had built their first display driver chip, the Television Interface Adaptor but universally referred to as the TIA, as part of the Atari 2600 console. The TIA display logically consisted of two primary sets of objects, the "players" and "missiles" that represented moving objects, and the "playfield" which represented the static background image on which the action took place. The chip used data in memory registers to produce digital signals that were converted in realtime via a digital-to-analog converter and RF modulator to produce a television display.
The conventional way to draw the playfield is to use a bitmap held in a frame buffer, in which each memory location in the frame buffer represents one or more locations on the screen. In the case of the 2600, which normally used a resolution of 160x192 pixels, a frame buffer would need to have at least 160x192/8 = 3840 bytes of memory. Built in an era where RAM was very expensive, the TIA could not afford this solution.
Instead, the system implemented a display system that used a single 20-bit memory register that could be copied or mirrored on the right half of the screen to make what was effectively a 40-bit display. Each location could be displayed in one of four colors, from a palette of 128 possible colors. The TIA also included several other display objects, the "players" and "missiles". These consisted of two 8-bit wide objects known as "players", a single 1-bit object known as the "ball", and two 1-bit "missiles". All of these objects could be moved to arbitrary horizontal locations via settings in other registers.
The key to the TIA system, and the 2600's low price, was that the system implemented only enough memory to draw a single line of the display, all of which held in registers. To draw an entire screen full of data, the user code would wait until the television display reached the right side of the screen and update the registers for the playfield and player/missiles to correctly reflect the next line on the display. This scheme drew the screen line-by-line from program code on the ROM cartridge, a technique known as "racing the beam".
CTIA
Atari initially estimated that the 2600 would have a short market lifetime of three years when it was designed in 1976, which meant the company would need a new design by 1979. Initially this new design was simply an updated 2600-like game console, built around a similar basic design. Work on what would become the CTIA started in 1977, and aimed at delivering a system with twice the resolution and twice the number of colors. Moreover, by varying the number of colors in the playfield, much higher resolutions, up to 320 pixels horizontally, could be supported. Players and missiles were also updated, including four 8-bit players and four 2-bit missiles, but also allowing an additional mode to combine the four missiles into a fifth player.
Shortly after design began, the home computer revolution started in earnest in the latter half of 1977. In response, Atari decided to release two versions of the new machine, a low-end model as a games console, and a high-end version as a home computer. In either role, a more complex playfield would be needed, especially support for character graphics in the computer role. Design of the CTIA was well advanced at this point, so instead of a redesign, a second chip was added that would effectively automate the process of racing the beam. Instead of the user's program updating the CTIA's registers based on its interrupt timing, the new ANTIC would handle this chore, reading data from a framebuffer and feeding that to the CTIA on the fly.
As a result of these changes, the new chips provide a greatly improved number and selection of graphics modes over the TIA. Instead of a single playfield mode with 20 or 40 bits of resolution, the CTIA/ANTIC pair can display six text modes and eight graphics modes with various resolutions and color depths, allowing the programmer to choose a balance between resolution, colors, and memory use for their display.
CTIA vs. GTIA
The original design of the CTIA chip also included three additional color interpretations of the normal graphics modes. This feature reinterprets ANTIC's high-resolution graphics modes (1 bit per pixel, 2 colors, pixels one-half color clock wide) as 4 bits per pixel, up to 16 colors, with pixels two color clocks wide. The feature was ready before the computers' November 1979 debut, but it arrived so late in the development cycle that Atari had already ordered a batch of about 100,000 CTIA chips without the new modes. Not wanting to throw away the already-produced chips, the company decided to use them in the initial release of the Atari 400 and 800 models in the US market. The CTIA-equipped computers, lacking the three extra color modes, were shipped until October–November 1981. From that point, all new Atari units were equipped with the new chip, now called GTIA, which supported the new color interpretation modes.
The original Atari 800/400 operating system supported the GTIA alternate color interpretation modes from the start, which allowed for easy replacement of the CTIA with the GTIA once it was ready. Atari authorized service centers would install a GTIA chip in CTIA-equipped computers free of charge if the computer was under warranty; otherwise the replacement would cost $62.52.
GTIA was also mounted in all later Atari XL and XE computers and Atari 5200 consoles.
Features
The list below describes CTIA/GTIA's inherent hardware capabilities, meaning the intended functionality of the hardware itself, not including results achieved by CPU-serviced interrupts or display kernels driving frequent register changes.
CTIA/GTIA is a television interface device with the following features:
Interprets the Playfield graphics data stream from ANTIC to apply color to the display.
Merges four Player and four Missile overlay objects (aka sprites) with ANTIC's Playfield graphics. Player/Missile features include:
Player/Missile pixel positioning is independent of the Playfield:
Player/Missile objects function normally in the vertical and horizontal overscan areas beyond the displayed Playfield.
Player/Missile objects function normally without an ANTIC Playfield.
Eight-bit wide Player objects and two-bit wide Missile objects where each bit represents one displayed pixel.
Variable pixel width (1, 2, or 4 color clocks wide)
Each Player/Missile object is vertically the height of the entire screen.
Variable pixel height when the data is supplied by ANTIC DMA (single or double scan lines per data)
Ability to independently shift each P/M object by one scan line vertically when operating on double scan lines per data.
Each Player and its associated Missile has a dedicated color register separate from the Playfield colors.
Multiple priority schemes for the order of graphics layers (P/M Graphics vs playfield)
Color merging between Players and Playfield producing extra colors.
Color merging between pairs of Players producing multi-color Players.
Missiles can be grouped together into a Fifth Player that uses a separate color register.
Collision detection between Players, Missiles, and Playfield graphics.
There are no fixed colors for normal (CTIA) color interpretation mode. All colors are generated via indirection through nine color registers. (Four for Player/Missile graphics, four for the Playfield, and one shared between the Playfield and the Fifth Player feature.)
Normal color interpretation mode provides choice of colors from a 128 color palette (16 colors with 8 luminance values for each)
A GTIA color interpretation mode can generate 16 luminances per color providing a 256 color palette.
The GTIA version of the chip adds three alternate color interpretation modes for the Playfield graphics.
16 shades of a single hue from the 16 possible hues in the Atari palette. This is accessible in Atari BASIC as Graphics 9.
15 hues in a single shade/luminance value plus background. This is accessible in Atari BASIC as Graphics 11.
9 colors in any hue and luminance from the palette accomplished using all the Player/Missile and Playfield color registers for the Playfield colors. This is accessible in Atari BASIC as Graphics 10.
Reads the state of the joystick triggers (bottom buttons only for the Atari 5200 controllers).
It includes four input/output pins that are used in different ways depending on the system:
In Atari 8-bit computers, three of the pins are used to read state of the console keys (Start/Select/Option).
The fourth pin controls the speaker built into the Atari 400/800 to generate keyboard clicks. On later models there is no speaker, but the key click is still generated by GTIA and mixed with the regular audio output.
In the Atari 5200, the pins are used as part of the process to read the controller keyboards.
Versions
by part number
C012295 — NTSC CTIA
C014805 — NTSC GTIA
C014889 — PAL GTIA
C020120 — French SECAM GTIA (FGTIA)
Atari, Inc. intended to combine functions of the ANTIC and GTIA chips in one integrated circuit to reduce production costs of Atari computers and 5200 consoles. Two such prototype circuits were being developed, however none of them entered production.
C020577 — CGIA
C021737 — KERI
Pinout
Registers
The Atari 8-bit computers map CTIA/GTIA to the $D0xx page and the Atari 5200 console maps it to the $C0xx page.
CTIA/GTIA provides 54 Read/Write registers controlling Player/Missile graphics, Playfield colors, joystick triggers, and console keys. Many CTIA/GTIA register addresses have dual purposes performing different functions as a Read vs a Write register. Therefore, no code should read Hardware registers expecting to retrieve the previously written value.
This problem is solved for many write registers by Operating System Shadow registers implemented in regular RAM as places to store the last value written to registers. Operating System Shadow registers are copied from RAM to the hardware registers during the Vertical Blank. Therefore, any write to hardware registers which have corresponding shadow registers will be overwritten by the value of the Shadow registers during the next Vertical Blank.
Some Write registers do not have corresponding Shadow registers. They can be safely written by an application without the value being overwritten during the vertical blank. If the application needs to know the last state of the register then it is the responsibility of the application to remember what it wrote.
Operating System Shadow registers also exist for some Read registers where reading the value directly from hardware at an unknown stage in the display cycle may return inconsistent results.
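As a concrete illustration, the bare-metal C sketch below (volatile pointers into the Atari 8-bit address map, cc65-style; the function name is hypothetical) writes a Playfield color both ways, using COLPF2 ($D018) and its shadow COLOR2 ($02C6) from the listings that follow.

#include <stdint.h>

#define COLPF2_HW   (*(volatile uint8_t *)0xD018)  /* hardware register (write) */
#define COLOR2_SHAD (*(volatile uint8_t *)0x02C6)  /* OS shadow register in RAM */

void set_playfield2_color(uint8_t c)
{
    COLPF2_HW   = c;   /* takes effect immediately, but only until the next   */
                       /* vertical blank copies the shadow over it            */
    COLOR2_SHAD = c;   /* persists: the OS copies this to COLPF2 every frame  */
}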
In the individual register listings below the following legend applies:
Player/Missile Horizontal Coordinates
These registers specify the horizontal position in color clocks of the left edge (the high bit of the GRAF* byte patterns) of Player/Missile objects. Coordinates are always based on the display hardware's color clock engine, NOT simply the current Playfield display mode. This also means Player/Missile objects can be moved into overscan areas beyond the current Playfield mode.
Note that while the Missile objects' bit patterns share the same byte for displayed pixels (GRAFM), each Missile can be independently positioned. When the "fifth Player" option is enabled (see the PRIOR/GPRIOR register), turning the four Missiles into one "Player", the Missiles switch from displaying the color of the associated Player object to displaying the value of COLPF3. The new "Player's" position on screen must be set by specifying the position of each Missile individually.
Player/Missile pixels are only rendered within the visible portions of the GTIA's pixel engine. Player/Missile objects are not rendered during the horizontal blank or the vertical blank. However, an object can be partially within the horizontal blank. The objects' pixels that fall outside of the horizontal blank are then within the visible portion of the display and can still register collisions. The horizontal position range of visible color clocks is $22 (decimal 34) to $DD (decimal 221).
To remove a Player/Missile object from the visible display area, horizontal positions 0 (left) or $DE/decimal 222 and greater (right) ensure that no pixels are rendered regardless of the size of the Player/Missile object, so no unintentional collisions can be flagged.
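A short C sketch of these positioning rules follows (bare-metal assumptions as in the earlier example; HPOSP0 is taken from the listing below and the function names are hypothetical).

#include <stdint.h>

#define HPOSP0 (*(volatile uint8_t *)0xD000)   /* write-only horizontal position */

/* Visible color clocks run from $22 (34) to $DD (221). */
void show_player0_at(uint8_t color_clock) { HPOSP0 = color_clock; }

/* Position 0 (or $DE and above) keeps the object entirely in the blanked
 * region: no pixels are drawn and no collisions can be registered.       */
void hide_player0(void) { HPOSP0 = 0; }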
HPOSP0 $D000 Write
Horizontal Position of Player 0
HPOSP1 $D001 Write
Horizontal Position of Player 1
HPOSP2 $D002 Write
Horizontal Position of Player 2
HPOSP3 $D003 Write
Horizontal Position of Player 3
HPOSM0 $D004 Write
Horizontal Position of Missile 0
HPOSM1 $D005 Write
Horizontal Position of Missile 1
HPOSM2 $D006 Write
Horizontal Position of Missile 2
HPOSM3 $D007 Write
Horizontal Position of Missile 3
Below are the color clock coordinates of the left and right edges of the possible Playfield sizes, useful when aligning Player/Missile objects to Playfield components:
Player/Missile Size Control
Three sizes can be chosen: Normal, Double, and Quad width. The left edge (See Horizontal Coordinates) is fixed and the size adjustment expands the Player or Missile toward the right in all cases.
Normal - 1 bit (pixel) is 1 color clock wide
Double - 1 bit (pixel) is 2 color clocks wide
Quad - 1 bit (pixel) is 4 color clocks wide
Note that in Quad size a single Player/Missile pixel is the same width as an ANTIC Mode 2 text character. Player/Missile priority selection mixed with Quad-width Player/Missile graphics can be used to create multiple text colors per mode line.
Each Player has its own size control register:
SIZEP0 $D008 Write
Size of Player 0
SIZEP1 $D009 Write
Size of Player 1
SIZEP2 $D00A Write
Size of Player 2
SIZEP3 $D00B Write
Size of Player 3
Player size controls:
Values:
SIZEM $D00C Write
All Missile sizes are controlled by one register, but each Missile can be sized independently of the others. When the "fifth Player" option is enabled (see the PRIOR/GPRIOR register), turning the four Missiles into one "Player", the width is still set by specifying the size for each Missile individually.
Values:
Player/Missile Graphics Patterns
Each Player object has its own 8-bit pattern register. Missile objects share one register with 2 bits per each Missile. Once a value is set it will continue to be displayed on each scan line. With no other intervention by CPU or ANTIC DMA to update the values the result is vertical stripe patterns the height of the screen including overscan areas. This mode of operation does not incur a CPU or DMA toll on the computer. It is useful for displaying alternate colored borders and vertical lines separating screen regions.
GRAFP0 $D00D Write
Graphics pattern for Player 0
GRAFP1 $D00E Write
Graphics pattern for Player 1
GRAFP2 $D00F Write
Graphics pattern for Player 2
GRAFP3 $D010 Write
Graphics pattern for Player 3
Each Player is 8 bits (pixels) wide. Where a bit is set, a pixel is displayed in the color assigned to the color register associated to the Player. Where a bit is not set the Player object is transparent, showing Players, Missiles, Playfield pixels, or the background color. Pixel output begins at the horizontal position specified by the Player's HPOS value with the highest bit output first.
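For example, a full-height vertical stripe needs only three writes and no further CPU or DMA work. The sketch below uses registers from the listings in this article (bare-metal assumptions as before; the color value and horizontal position chosen here are arbitrary).

#include <stdint.h>

#define HPOSP0  (*(volatile uint8_t *)0xD000)   /* Player 0 horizontal position */
#define GRAFP0  (*(volatile uint8_t *)0xD00D)   /* Player 0 graphics pattern    */
#define PCOLOR0 (*(volatile uint8_t *)0x02C0)   /* OS shadow of COLPM0          */

void draw_player0_stripe(void)
{
    PCOLOR0 = 0x34;    /* arbitrary hue/luminance value                        */
    GRAFP0  = 0xFF;    /* all 8 bits set: a solid stripe, repeated every line  */
    HPOSP0  = 0x80;    /* arbitrary position near mid-screen ($22-$DD visible) */
}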
GRAFM $D011 Write
Graphics pattern for all Missiles
Each Missile is 2 bits (pixels) wide. Where a bit is set, a pixel is displayed in the color assigned to the color register for the Player associated to the Missile. When Fifth Player is enabled (see PRIOR/GPRIOR) the Missiles pixels all display COLPF3. Where a bit is not set the Missile object is transparent, showing Players, Missiles, Playfield pixels, or the background color. Pixel output begins at the horizontal position specified by the Missile's HPOS value with the highest bit output first.
Missile Values:
Player/Missile Collisions
CTIA/GTIA has 60 bits providing automatic detection of collisions when Player, Missile, and Playfield pixels intersect. A single bit indicates a non-zero pixel of the Player/Missile object has intersected a pixel of a specific color register. There is no collision registered for pixels rendered using the background color register/value. This system provides instant, pixel-perfect overlap comparison without expensive CPU evaluation of bounding box or image bitmap masking.
The actual color value of an object is not considered. If Player, Missile, Playfield, and Background color registers are all the same value making the objects effectively "invisible", the intersections of objects will still register collisions. This is useful for making hidden or secret objects and walls.
Obscured intersections will also register collisions. If a Player object priority is behind a Playfield color register and another Player object priority is higher (foreground) than the Playfield, and the foreground Player pixels obscure both the Playfield and the Player object behind the Playfield, then the collision between the Playfield and both the background and foreground Player objects will register along with the collision between the foreground and background Player objects.
Note that there is no Missile to Missile collision.
Player/Missile collisions can only occur when Player/Missile object pixels occur within the visible portions of the display. Player/Missile objects are not rendered during the horizontal blank or the vertical blank. The range of visible color clocks is 34 to 221, and the visible scan lines range from line 8 through line 247. Player/Missile data outside of these coordinates are not rendered and will not register collisions. An object can be partially within the horizontal blank. The objects' pixels that fall outside of the horizontal blank are within the visible portion of the display and can still register collisions.
To remove a Player/Missile object from the visible display area, horizontal positions 0 (left) or 222 and greater (right) ensure that no pixels are rendered regardless of the size of the Player/Missile object, so no unintentional collisions can be flagged.
Finally, Player, Missile, and Playfield objects collision detection is real-time, registering a collision as the image pixels are merged and output for display. Checking an object's collision bits before the object has been rendered by CTIA/GTIA will show no collision.
Once set, collisions remain in effect until cleared by writing to the HITCLR register. Effective collision response routines should occur after the targeted objects have been displayed, or at the end of a frame or during the Vertical Blank to react to the collisions and clear collisions before the next frame begins.
Because collisions are only a single bit, collisions are quite obviously not additive. No matter how many times and different locations a collision between pixels occurs within one frame there is only 1 bit to indicate there was a collision. A set collision bit informs a program that it can examine the related objects to identify collision locations and then decide how to react for each location.
Since HITCLR and collision detection is real-time, Display List Interrupts can divide the display into sections with HITCLR used at the beginning of each section and separate collision evaluation at the end of each section.
When the "fifth Player" option is enabled (See PRIOR/GPRIOR register) the only change is the Missiles 0 to 3 switch from displaying the color of the associated Player object to displaying the value of COLPF3. The new "Player's" collisions are still reported for the individual Missiles.
Player/Missile to Playfield Collisions
Each bit indicates a pixel of the Player/Missile object has intersected a pixel of the specified Playfield color object. There is no collision registered for the background color.
Obscured intersections will also register collisions. If a Player/Missile object priority is behind a Playfield color register and another Player/Missile object priority is higher (foreground) than the Playfield, and the foreground Player/Missile pixels obscure both the Playfield and the Player/Missile object behind the Playfield, then the collision between the Playfield and both the background and foreground Player/Missile objects will register.
High-resolution, 1/2 color clock pixel modes (ANTIC Modes 2, 3, and F) are treated differently. The "background" color rendered as COLPF2 where pixel values are 0 does not register a collision. High-resolution pixels are rendered as the luminance value from COLPF1. The pixels are grouped together in color clock-wide pairs (pixels 0 and 1, pixels 2 and 3, continuing to pixels 318 and 319). Where either pixel of the pair is 1 a collision is detected between the Player or Missile pixels and Playfield color COLPF2.
GTIA modes 9 and 11 do not process Playfield collisions. In GTIA mode 10, Playfield collisions will register where Playfield pixels use COLPF0 through COLPF3.
M0PF $D000 Read
Missile 0 to Playfield collisions
M1PF $D001 Read
Missile 1 to Playfield collisions
M2PF $D002 Read
Missile 2 to Playfield collisions
M3PF $D003 Read
Missile 3 to Playfield collisions
P0PF $D004 Read
Player 0 to Playfield collisions
P1PF $D005 Read
Player 1 to Playfield collisions
P2PF $D006 Read
Player 2 to Playfield collisions
P3PF $D007 Read
Player 3 to Playfield collisions
Missile to Player Collisions
Missiles collide with Players and Playfields. There is no Missile to Missile collision.
M0PL $D008 Read
Missile 0 to Player collisions
M1PL $D009 Read
Missile 1 to Player collisions
M2PL $D00A Read
Missile 2 to Player collisions
M3PL $D00B Read
Missile 3 to Player collisions
Player to Player Collisions
A collision between two players sets the collision bit in both Players' collision registers. When Player 0 and Player 1 collide, Player 0's collision bit for Player 1 is set, and Player 1's collision bit for Player 0 is set.
A Player cannot collide with itself, so its bit is always 0.
P0PL $D00C Read
Player 0 to Player collisions
P1PL $D00D Read
Player 1 to Player collisions
P2PL $D00E Read
Player 2 to Player collisions
P3PL $D00F Read
Player 3 to Player collisions
Player/Missile and Playfield Color and Luminance
All Player/Missile objects' pixels and all Playfield pixels in the default CTIA/GTIA color interpretation mode use indirection to specify color. Indirection means that the values of the pixel data do not directly specify the color, but point to another source of information for color. CTIA/GTIA contains hardware registers that set the values used for colors, and the pixels' information refers to these registers. The palette on the Atari is 8 luminance levels of 16 colors for a total of 128 colors. The color indirection flexibility allows a program to tailor the screen's colors to fit the purpose of the program's display.
All hardware color registers have corresponding shadow registers.
COLPM0 $D012 Write
SHADOW: PCOLOR0 $02C0
Color/luminance of Player and Missile 0.
When GTIA 9-color mode is enabled (PRIOR/GPRIOR value $80) this register is used for the border and background (Playfield pixel value 0), rather than COLBK.
COLPM1 $D013 Write
SHADOW: PCOLOR1 $02C1
Color/luminance of Player and Missile 1.
COLPM2 $D014 Write
SHADOW: PCOLOR2 $02C2
Color/luminance of Player and Missile 2.
COLPM3 $D015 Write
SHADOW: PCOLOR3 $02C3
Color/luminance of Player and Missile 3.
COLPF0 $D016 Write
SHADOW: COLOR0 $02C4
Color/luminance of Playfield 0.
COLPF1 $D017 Write
SHADOW: COLOR1 $02C5
Color/luminance of Playfield 1.
This register is used for the set pixels (value 1) in ANTIC text modes 2 and 3, and map mode F. Only the luminance portion is used and is OR'd with the color value of COLPF2. In other Character and Map modes this register provides the expected color and luminance for a pixel.
COLPF2 $D018 Write
SHADOW: COLOR2 $02C6
Color/luminance of Playfield 2.
This register is used for Playfield background color of ANTIC text modes 2 and 3, and map mode F. That is, where pixel value 0 is used. In other Character and Map modes this register provides the expected color and luminance for a pixel.
COLPF3 $D019 Write
SHADOW: COLOR3 $02C7
Color/luminance of Playfield 3
COLPF3 is available in several special circumstances:
When Missiles are converted to the "fifth Player" they switch from displaying the color of the associated Player object to displaying COLPF3 and change priority. See PRIOR/GPRIOR register.
Playfield Text Modes 4 and 5. Inverse video characters (high bit $80 set) cause CTIA/GTIA to substitute COLPF3 value for COLPF2 pixels in the character matrix. (See ANTIC's Glyph Rendering)
Playfield Text Modes 6 and 7. When the character value has bits 6 and 7 set (character range $C0-FF) the entire character pixel matrix is displayed in COLPF3. (See ANTIC's Glyph Rendering)
This register is also available in GTIA's special 9 color, pixel indirection color mode.
COLBK $D01A Write
SHADOW: COLOR4 $02C8
Color/luminance of Playfield background.
The background color is displayed where no other pixel occurs through the entire overscan display area. The following exceptions occur for the background:
In ANTIC text modes 2 and 3, and map mode F the background of the playfield area where pixels may be rendered is from COLPF2 and the COLBK color appears as a border around the playfield.
In GTIA color interpretation mode $8 (9-color indirection) the display background color is provided by color register COLPM0, while COLBK is used for Playfield pixel value $8.
GTIA color interpretation mode $C (15 colors in one luminance level, plus background) uses COLBK to set the luminance level of all other pixels (pixel values $1 through $F). However, the background itself uses only the color component set in the COLBK register. The luminance value of the background is forced to 0.
Color Registers' Bits:
The high nybble of the color register specifies one of 16 colors ($00, $10, $20... to $F0).
The low nybble of the register specifies one of 16 luminance values ($00, $01, $02... to $0F).
In the normal color interpretation mode the lowest bit is not significant and only 8 luminance values are available ($00, $02, $04, $06, $08, $0A, $0C, $0E), so the complete color palette is 128 color values.
In GTIA color interpretation mode $4 (luminance-only mode) the full range of 16 luminance values is available for Playfield pixels, providing a palette of 256 colors. Any Player/Missile objects displayed in this mode are colored by indirection, which still uses the 128-color palette.
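A one-line helper makes the nybble layout explicit (plain C; the function name is hypothetical):

#include <stdint.h>

/* High nybble = hue (0-15), low nybble = luminance (0-15); in the normal
 * color interpretation mode only the even luminance values matter.      */
uint8_t atari_color(uint8_t hue, uint8_t lum)
{
    return (uint8_t)((hue << 4) | (lum & 0x0F));
}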
In normal color interpretation mode the pixel values range from $0 to $3 ordinarily pointing to color registers COLBK, COLPF0, COLPF1, COLPF2 respectively. The color text modes also include options to use COLPF3 for certain ranges of character values. See ANTIC's graphics modes for more information.
When Player/Missile graphics patterns are enabled for display where the graphics patterns bits are set the color displayed comes from the registers assigned to the objects.
There are exceptions for color generation and display:
ANTIC Text modes 2 and 3, and Map mode F:
The pixel values in these modes are only $0 and $1. The $0 pixels specify the Playfield background, which is color register COLPF2. The $1 pixels use the color component of COLPF2 and the luminance specified by COLPF1. The border around the Playfield uses the color from COLBK.
ANTIC Text modes 2 and 3, and Map mode F behave differently with Player/Missile graphics from the other modes. COLPF1 used for the glyph or graphics pixels always has the highest priority and cannot be obscured by Players or Missiles. The color of COLPF1 always comes from the "background" which is ordinarily COLPF2. Therefore, where Players/Missiles and Fifth Player have priority over COLPF2 the COLPF1 glyph/graphics pixels use the color component of the highest priority color (Player or Missile), and the luminance component of COLPF1. This behavior is consistent where Player/Missile priority conflicts result in true black for the "background". In summary, the color CTIA/GTIA finally determines to use "behind" the high-res pixel is then used to "tint" the COLPF1 foreground glyph/graphics pixels.
GTIA Exceptions
GTIA color interpretation mode $8 (9-color indirection) uses color register COLPM0 for the display background and border color, while COLBK is used for Playfield pixel value $8.
GTIA color interpretation mode $C (15 colors in one luminance level, plus background) uses COLBK to set the luminance level of all other pixels (pixel value $1 through $F). However, the background itself uses only the color component set in the COLBK register. The luminance value of the background is forced to 0. Note that the background's color component is also OR'd with the other pixels' colors. Therefore, the overall number of colors in the mode is reduced when the background color component is not black (numerically zero).
Player/Missile Exceptions:
Player/Missile Priority value $0 (See PRIOR/GPRIOR) will cause overlapping Player and Playfield pixels to be OR'd together displaying a different color.
Conflicting Player/Missile Priority configuration will cause true black (color 0, luma 0) to be output where conflicts occur.
The Player/Missile Multi-Color option will cause overlapping Player pixels to be OR'd together displaying a different color.
Color Registers' Use per ANTIC Character Modes:
Color Registers' Use per ANTIC Map Modes:
Color Registers' Use per GTIA Modes (ANTIC F):
Player/Missile colors are always available for Player/Missile objects in all modes, though colors may be modified when the special GTIA modes (16 shades/16 color) are in effect.
Miscellaneous Player/Missile and GTIA Controls
PRIOR $D01B Write
SHADOW: GPRIOR $026F
This register controls several CTIA/GTIA color management features: The GTIA Playfield color interpretation mode, Multi-Color Player objects, the Fifth Player, and Player/Missile/Playfield priority.
GTIA Playfield Color Interpretations
CTIA includes only one default color interpretation mode for the ANTIC Playfield data stream. That is the basic functionality assumed in the majority of the ANTIC and CTIA/GTIA discussion unless otherwise noted. GTIA includes three alternate color interpretation modes for Playfield data. These modes work by pairing adjacent color clocks from ANTIC, so the pixels output by GTIA are always two color clocks wide. Although these modes can be engaged while displaying any ANTIC Playfield mode, the full color palette possible with these GTIA color processing options is only realized in the ANTIC modes based on 1/2 color clock pixels (ANTIC modes 2, 3, and F). These GTIA options are most often used with a Mode F display. The special GTIA color processing modes also alter the display or behavior of Player/Missile graphics in various ways.
The color interpretation control is a global function of GTIA affecting the entire screen. GTIA is not inherently capable of mixing on one display the various GTIA color interpretation modes and the default CTIA mode needed for most ANTIC Playfields. Mixing color interpretation modes requires software writing to the PRIOR register as the display is generated (usually, by a Display List Interrupt).
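For example, the whole display can be switched to the 9-color interpretation by writing the value $80 (noted earlier for COLPM0's special role) into the GPRIOR shadow. The C sketch below makes the usual bare-metal assumptions and the function name is hypothetical.

#include <stdint.h>

#define GPRIOR (*(volatile uint8_t *)0x026F)   /* OS shadow of PRIOR */

void select_gtia_9color(void)
{
    /* Set bits 7-6 to select the 9-color interpretation ($80) while keeping
     * the priority and Player/Missile option bits unchanged. The OS copies
     * the shadow to PRIOR at each vertical blank.                          */
    GPRIOR = (uint8_t)((GPRIOR & 0x3F) | 0x80);
}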
PRIOR bits 7 and 6 provide four values specifying the color interpretation modes:
16 Shades
This mode uses the COLBK register to specify the background color. Rather than using indirection, pixel values directly represent Luminance. This mode allows all four luminance bits to be used in the Atari color palette and so is capable of displaying 256 colors.
Player/Missile graphics (without the fifth Player option) display properly in this mode, however collision detection with the Playfield is disabled. Playfield priority is always on the bottom. When the Missiles are switched to act as a fifth Player then where the Missile objects overlap the Playfield the Missile pixels luminance merges with the Playfield pixels' Luminance value.
9 Color
Unlike the other two special GTIA modes, this mode is entirely driven by color indirection. All nine color registers work on the display for pixel values 0 through 8. The remaining 7 pixel values repeat previous color registers.
The pixels are delayed by one color clock (half a GTIA mode pixel) when output. This offset permits interesting effects. For example, rapidly page flipping between this mode and a different GTIA mode produces a display with apparently higher resolution and a greater number of colors.
This mode is unique in that it uses color register COLPM0 for the border and background (Playfield pixel value 0) rather than COLBK.
Player/Missile graphics display properly with the exception that Player/Missile 0 are not distinguishable from the background pixels, since they use the same color register, COLPM0. The Playfield pixels using the Player/Missile colors are modified by priority settings as if they were Player/Missile objects and so can affect the display of Players/Missiles. (See discussion later about Player/Missile/Playfield priorities).
The Playfield pixels using Player/Missile colors do not trigger collisions when Player/Missile objects overlay them. However, Player/Missile graphics overlapping Playfield colors COLPF0 to COLPF3 will trigger the expected collision.
16 Colors
This mode uses the COLBK register to specify the luminance of all Playfield pixels (values $1 through $F, decimal 1 through 15). The least significant bit of the luminance value is not observed, so only the standard/CTIA 8 luminance values are available ($0, $2, $4, $6, $8, $A, $C, $E). Additionally, the background itself uses only the color component set in the COLBK register. The luminance value of the background is forced to 0. As with the Luminance mode, indirection is disabled and pixel values directly represent a color.
Note that the color component of the background also merges with the playfield pixels. Colors other than black for the background reduce the overall number of colors displayed in the mode.
Player/Missile graphics (without the fifth Player option) display properly in this mode, however collision detection with the Playfield is disabled. Playfield priority is always on the bottom. When the Missiles are switched to act as a fifth Player then where the Missile objects overlap the Playfield the Missile pixels inherit the Playfield pixels' Color value.
Multi-Color Player
PRIOR bit 5, value $20 (decimal 32), enables Multi-Color Player objects. Where pixels of two Player/Missile objects overlap, a third color appears. This is implemented by eliminating priority processing between pairs of Player/Missile objects, resulting in CTIA/GTIA performing a bitwise OR of the two colored pixels to output a new color.
Example: A Player pixel with color value $98 (decimal 152, blue) overlaps a Player pixel with color value $46 (decimal 70, red), resulting in a pixel color of $DE (decimal 228, light green/yellow).
The Players/Missiles pairs capable of Multi-Color output:
Player 0 + Player 1
Missile 0 + Missile 1
Player 2 + Player 3
Missile 2 + Missile 3
Fifth Player
PRIOR bit 4, value $10 (decimal 16), enables Missiles to become a fifth Player. No functional change occurs to the Missiles other than their color processing. Normally the Missiles display using the color of the associated Player. When Fifth Player is enabled all Missiles display the color of Playfield 3 (COLPF3). Horizontal position, size, vertical delay, and Player/Missile collisions all continue to operate the same way. At intersections with Player object pixels the Fifth Player takes the priority of COLPF3, but the Fifth Player's pixels have priority over all Playfield colors.
The color processing change also causes some exceptions for the Missiles' display in GTIA's alternative color modes:
GTIA 16 Shades mode: Where Missile pixels overlap the Playfield the pixels inherit the Playfield pixels' Luminance value.
GTIA 16 Colors mode: Where Missile pixels overlap the Playfield the pixels inherit the Playfield pixels' Color value.
The Fifth Player introduces an exception for Priority value $8 (bits 1000) (See Priority discussion below.)
Priority
PRIOR bits 3 to 0 provide four Player/Missile and Playfield priority values that determine which pixel value is displayed when Player/Missile object pixels and Playfield pixels intersect. The four values provide specific options listed in the Priority chart below. "PM" means the normal Player/Missile implementation without the Fifth Player. The Fifth Player, "P5", is shown where its priority occurs when it is enabled.
The chart is accurate for ANTIC Playfield Character and Map modes using the default (CTIA) color interpretation mode. GTIA color interpretation modes, and the ANTIC modes based on high-resolution, 1/2 color clock pixels behave differently (noted later).
If multiple bits are set, then where there is a conflict CTIA/GTIA outputs a black pixel—Note that black means actual black, not simply the background color, COLBK.
Although the Fifth Player is displayed with the value of COLPF3, its priority is above all Playfield colors. This produces an exception for Priority value $8 (Bits 1000). In this mode Playfield 0 and 1 are higher priority than the Players, and the Players are higher priority than Playfield 2 and 3. Where Playfield 0 or 1 pixels intersect any Player pixel the result displayed is the Playfield pixel. However, if the Fifth player also intersects the same location, its value is shown over the Playfield causing it to appear as if Playfield 3 has the highest priority. If the Playfield 0 or 1 pixel is removed from this intersection then the Fifth Player's pixel has no Playfield pixel to override and so also falls behind the Player pixels.
When the Priority bits are all 0 a different effect occurs—Player and Playfield pixels are logically OR'd together in a manner similar to the Multi-Color Player feature. In this situation Players 0 and 1 pixels can mix with Playfield 0 and 1 pixels, and Players 2 and 3 pixels can mix with Playfield 2 and 3 pixels. Additionally, when the Multi-Color Player option is used the resulting merged Players' color can also mix with the Playfield producing more colors. When all color merging possibilities are considered, the CTIA/GTIA hardware can output 23 colors per scan line. Starting with the background color as the first color, the remaining 22 colors and color merges are possible:
When the Priority bits are all 0 the Missiles' colors function the same way as the corresponding Players' colors, as described above. When Fifth Player is enabled, the Missile pixels cause the same color merging as shown for COLPF3 in the table above (colors 19 through 22).
Priority And High-Resolution Modes
The priority results differ for the Character and Map modes using high-resolution, 1/2 color clock pixels—ANTIC modes 2, 3, and F. These priority handling differences can be exploited to produce color text or graphics in these modes, which are traditionally thought of as "monochrome".
In these ANTIC modes COLPF2 is output as the "background" of the Playfield and COLBK is output as the border around the Playfield. The graphics or glyph pixels are output using only the luminance component of COLPF1 mixed with the color component of the background (usually COLPF2).
The priority relationship between Players/Missiles and COLPF2 works according to the priority chart below. Player/Missile pixels with higher priorities will replace COLPF2 as the "background" color. COLPF1 always has the highest priority and cannot be obscured by Players or Missiles. The glyph/graphics pixels use the color component of the highest-priority color (Playfield, Player, or Missile) and the luminance component of COLPF1. Note that this behavior is also consistent where Player/Missile priority conflicts result in true black for the "background". In effect, the color value CTIA/GTIA finally uses for the "background" "tints" the COLPF1 foreground glyph/graphics pixels.
VDELAY $D01C Write
Vertical Delay P/M Graphics
This register is used to provide single scan line movement when Double Line Player/Missile resolution is enabled in ANTIC's DMACTL register. This works by masking ANTIC DMA updates to the GRAF* registers on even scan lines, causing the graphics pattern to shift down one scan line.
Since Single Line resolution requires ANTIC DMA updates on each scan line and VDELAY masks the updates on even scan lines, this bit reduces Single Line Player/Missile resolution to Double Line.
GRACTL $D01D Write
Graphics Control
GRACTL controls CTIA/GTIA's receipt of Player/Missile DMA data from ANTIC and toggles the mode of Joystick trigger input.
Receipt of Player/Missile DMA data requires CTIA/GTIA be configured to receive the data. This is done with a pair of bits in GRACTL that match a pair of bits in ANTIC's DMACTL register that direct ANTIC to send Player data and Missile data. GRACTL's Bit 0 corresponds to DMACTL's Bit 2, enabling transfer of Missile data. GRACTL's Bit 1 corresponds to DMACTL's Bit 3, enabling transfer of Player data. These bits must be set for GTIA to receive Player/Missile data from ANTIC via DMA. When Player/Missile graphics are being operated directly by the CPU then these bits must be off.
The joystick trigger registers report the pressed/not-pressed state in real time. If a program's input polling might not be frequent enough to catch momentary joystick button presses, the triggers can be set to lock in the closed/pressed state and remain in that state even after the button is released. Setting GRACTL bit 2 enables latching of all triggers. Clearing the bit returns the triggers to the unlatched, real-time behavior.
HITCLR $D01E Write
Clear Collisions
Any write to this register clears all the Player/Missile collision detection bits.
Other CTIA/GTIA Functions
Joystick Triggers
TRIG0 $D010 Read
SHADOW: STRIG0 $0284
Joystick 0 trigger
TRIG1 $D011 Read
SHADOW: STRIG1 $0285
Joystick 1 trigger.
TRIG2 $D012 Read
SHADOW: STRIG2 $0286
Joystick 2 trigger.
TRIG3 $D013 Read
SHADOW: STRIG3 $0287
Joystick 3 trigger
Bits 7 through 1 are always 0. Bit 0 reports the state of the joystick trigger. Value 1 indicates the trigger is not pressed. Value 0 indicates the trigger is pressed.
The trigger registers report button presses in real-time. The button pressed state will instantly clear when the button is released.
The triggers may be configured to latch, that is, lock, in the pressed state and remain that way until specifically cleared. GRACTL bit 2 enables the latch behavior for all triggers. Clearing GRACTL bit 2 returns all triggers to real-time behavior.
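Polling a trigger is a single register read. A sketch in C using the address and bit meaning given above:

```c
#include <stdint.h>

#define TRIG0 (*(volatile const uint8_t *)0xD010)  /* Joystick 0 trigger, read-only */

/* Returns 1 while the stick 0 button is pressed, 0 otherwise.
   Bit 0 reads 0 when pressed and 1 when released, as described above. */
int trigger0_pressed(void)
{
    return (TRIG0 & 0x01) == 0;
}
```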
PAL $D014 Read
PAL flags.
This register reports the display standard for the system. When Bits 3 to 0 are set to 1 (value $F hex / 15 dec) the system is operating in NTSC. When the bits are zero the system is operating in PAL mode.
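A display-standard check following this description (a sketch only; it simply tests whether any of bits 3 to 0 read as 1):

```c
#include <stdint.h>

#define PAL_REG (*(volatile const uint8_t *)0xD014)  /* PAL flags, read-only */

/* Returns 1 on an NTSC machine, 0 on a PAL machine, per the bit description above. */
int system_is_ntsc(void)
{
    return (PAL_REG & 0x0F) != 0;
}
```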
CONSPK $D01F Write
Console Speaker
Bit 3 controls the internal speaker of the Atari 800/400. In later models the console speaker was removed and its sound is instead mixed with the regular POKEY audio signals for output to the monitor port and RF adapter. The Atari OS uses the console speaker to output the keyboard click and the bell/buzzer sound.
The Operating System sets the speaker bit during its Vertical Blank routine. Repeatedly writing 0 to the bit produces a 60 Hz buzzing sound as the Vertical Blank resets the value. Useful tones can be generated with 6502 code, effectively adding a fifth audio channel, albeit one that requires CPU time to maintain.
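A crude tone routine along these lines is sketched below in C; the busy-wait delay is arbitrary, and production code would normally be written in 6502 assembly for stable pitch:

```c
#include <stdint.h>

#define CONSOL_W (*(volatile uint8_t *)0xD01F)  /* Console speaker, write */

#define SPEAKER_BIT 0x08  /* Bit 3 drives the console speaker */

/* Toggle the speaker bit 'cycles' times with a crude busy-wait between edges. */
void speaker_tone(unsigned int cycles, unsigned int half_period)
{
    unsigned int i;
    volatile unsigned int d;

    for (i = 0; i < cycles; ++i) {
        CONSOL_W = 0x00;                         /* pull the speaker one way       */
        for (d = 0; d < half_period; ++d) { }    /* crude delay sets the pitch     */
        CONSOL_W = SPEAKER_BIT;                  /* push the speaker the other way */
        for (d = 0; d < half_period; ++d) { }
    }
}
```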
CONSOL $D01F Read
Console Keys
A bit is assigned to report the state of each of the special console keys, Start, Select, and Option. Bit value 0 indicates a key is pressed and 1 indicates the key is not pressed. Key/Bit values:
Start Key = Bit value $1
Select Key = Bit value $2
Option Key = Bit value $4
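A polling sketch in C using the bit values listed above (a bit reading 0 means the key is held):

```c
#include <stdint.h>

#define CONSOL_R (*(volatile const uint8_t *)0xD01F)  /* Console keys, read */

#define KEY_START  0x01
#define KEY_SELECT 0x02
#define KEY_OPTION 0x04

/* Returns nonzero while the given console key is held down (its bit reads 0). */
int console_key_pressed(uint8_t key_mask)
{
    return (CONSOL_R & key_mask) == 0;
}
```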
Player/Missile Graphics (sprites) operation
A hardware "sprite" system is handled by CTIA/GTIA. The official ATARI name for the sprite system is "Player/Missile Graphics", since it was designed to reduce the need to manipulate display memory for fast-moving objects, such as the "player" and his weapons, "missiles", in a shoot 'em up game.
A Player is essentially a glyph 8 pixels wide and 256 TV lines tall, with two colors: the transparent background (0 bits in the glyph) and the foreground (1 bits). A Missile object is similar, but only 2 pixels wide. CTIA/GTIA combines the Player/Missile objects' pixels with the Playfield pixels according to their priority. Transparent (0) Player pixels have no effect on the Playfield and display either a Playfield or background pixel without change. All Player/Missile objects' pixels are normally one color clock wide. A register value can set the Player or Missile pixels' width to 1, 2, or 4 color clocks.
The Player/Missile implementation by CTIA/GTIA is similar to the TIA's. A Player is an 8-bit value or pattern at a specified horizontal position which automatically repeats for each scan line or until the pattern is changed in the register. Missiles are 2-bits wide and share one pattern register, so that four, 2-bit wide values occupy the 8-bit wide pattern register, but each missile has an independent horizontal position and size. Player/Missile objects extend the height of the display including the screen border. That is, the default implementation of Player/Missile graphics by CTIA/GTIA is a stripe down the screen. While seemingly limited this method facilitates Player/Missile graphics use as alternate colored vertical borders or separators on a display, and when priority values are set to put Player/Missile pixels behind playfield pixels they can be used to add additional colors to a display. All Players and Missiles set at maximum width and placed side by side can cover the entire normal width Playfield.
CTIA/GTIA supports several options controlling Player/Missile color. The PRIOR/GPRIOR register value can switch the four Missiles between two color display options—each Missile (0 to 3) expresses the color of the associated Player object (0 to 3) or all Missiles show the color of register COLPF3/COLOR3. When Missiles are similarly colored they can be treated as a fifth player, but correct placement on screen still requires storing values in all four Missile Horizontal Position registers. PRIOR/GPRIOR also controls a feature that causes the overlapping pixels of two Players to generate a third color allowing multi-colored Player objects at the expense of reducing the number of available objects. Finally, PRIOR/GPRIOR can be used to change the foreground/background layering (called, "priority") of Player/Missile pixels vs Playfield pixels, and can create priority conflicts that predictably affect the colors displayed.
The conventional idea of a sprite with an image/pattern that varies vertically is also built into the Player/Missile graphics system. The ANTIC chip includes a feature to perform DMA to automatically feed new pixel patterns to CTIA/GTIA as the display is generated. This can be done for each scan line or every other scan line resulting in Player/Missile pixels one or two scan lines tall. In this way the Player/Missile object could be considered an extremely tall character in a font, 8 bits/pixels wide, by the height of the display.
Moving the Player/Missile objects horizontally is as simple as changing a register in the CTIA/GTIA (in Atari BASIC, a single POKE statement moves a player or missile horizontally). Moving an object vertically is achieved either by block-moving the definition of the glyph to a new location in the Player or Missile bitmap, or by rotating the entire Player/Missile bitmap (128 or 256 bytes). The worst case, rotating the entire bitmap, is still quite fast in 6502 machine language, even though the 6502 lacks the block-move instruction found in the Z80. Since the sprite is exactly 128 or 256 bytes long, the indexing is easily accommodated in a byte-wide register of the 6502. Atari BASIC lacks a high-speed memory movement command, and moving memory with BASIC PEEKs and POKEs is painfully slow. Atari BASIC programs using Player/Missile graphics have other options for performing high-speed memory moves. One method is calling a short machine language routine via the USR() function to perform the moves. Another is using a large string as the Player/Missile memory map and performing string copy commands, which move memory at machine language speed.
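Both motions reduce to a register write plus a memory move. The C sketch below assumes the Player 0 horizontal position register HPOSP0 at $D000 and a single-line-resolution, 256-byte Player stripe; both are assumptions of the example rather than values restated in this section:

```c
#include <stdint.h>
#include <string.h>

#define HPOSP0 (*(volatile uint8_t *)0xD000)   /* Player 0 horizontal position (assumed) */

/* pm_player0 points at the 256-byte Player 0 stripe in Player/Missile memory
   (single line resolution assumed: one byte per scan line). */
void move_player0(uint8_t *pm_player0, uint8_t new_x, int dy)
{
    HPOSP0 = new_x;                              /* horizontal: just write the register */

    if (dy > 0) {                                /* vertical: shift the stripe down dy lines */
        memmove(pm_player0 + dy, pm_player0, (size_t)(256 - dy));
        memset(pm_player0, 0, (size_t)dy);       /* clear the vacated top bytes */
    } else if (dy < 0) {                         /* or up by -dy lines */
        memmove(pm_player0, pm_player0 - dy, (size_t)(256 + dy));
        memset(pm_player0 + 256 + dy, 0, (size_t)(-dy));
    }
}
```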
Careful use of Player/Missile graphics with the other graphics features of the Atari hardware can make graphics programming, particularly games, significantly simpler.
GTIA enhancements
The GTIA chip is backward compatible with the CTIA, and adds 3 color interpretations for the 14 "normal" ANTIC Playfield graphics modes. The normal color interpretation of the CTIA chip is limited, per scanline, to a maximum of 4 colors in Map modes or 5 colors in Text modes (plus 4 colors for Player/Missile graphics) unless special programming techniques are used. The three, new color interpretations in GTIA provide a theoretical total of 56 graphics modes (14 ANTIC modes multiplied by four possible color interpretations). However, only the graphics modes based on high-resolution, 1/2 color clock pixels (that is, Antic text modes 2, 3, and graphics mode F) are capable of fully expressing the color palettes of these 3 new color interpretations. The three additional color interpretations use the information in two color clocks (four bits) to generate a pixel in one of 16 color values. This changes a mode F display from 2 colors per pixel, 320 pixels horizontally, one scan line per mode line, to 16 colors and 80 pixels horizontally. The additional color interpretations allow the following:
GTIA color interpretation mode $4 -- 16 shades of a single hue (set by the background color, COLBK) from the 16 possible hues in the Atari palette. This is also accessible in Atari BASIC as Graphics 9.
GTIA color interpretation mode $8 -- This mode allows 9 colors of indirection per horizontal line in any hue and luminance from the entire Atari palette of 128 colors. This is accomplished using all the Player/Missile and Playfield color registers for the Playfield pixels. In this mode the background color is provided by color register COLPM0 while COLBAK is used for Playfield pixel value $8. This mode is accessible in Atari BASIC as Graphics 10.
GTIA color interpretation mode $C -- 15 hues in a single shade/luminance value, plus the background. The value of the background, COLBK sets the luminance level of all other pixels (pixel value $1 through $F). The least significant bit of the luminance value is not observed, so only the standard/CTIA 8 luminance values are available ($0, $2, $4, $6, $8, $A, $C, $E). Additionally, the background itself uses only the color component set in the COLBK register. The luminance value of the background is forced to 0. This mode is accessible in Atari BASIC as Graphics 11.
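Selecting one of these interpretations is done through PRIOR/GPRIOR. The sketch below writes the OS shadow at location 623 (the same location used by the POKE 623,64 test mentioned later in this article); treating bits 6-7 as the mode-select field is an assumption of this example:

```c
#include <stdint.h>

/* GPRIOR shadow used by the OS (location 623 decimal, copied to the hardware PRIOR register). */
#define GPRIOR_SHADOW (*(volatile uint8_t *)0x026F)

#define GTIA_MODE_MASK 0xC0   /* assumed: bits 6-7 select the color interpretation */
#define GTIA_MODE_9    0x40   /* 16 luminances of one hue  (Graphics 9)  */
#define GTIA_MODE_10   0x80   /* 9 color registers         (Graphics 10) */
#define GTIA_MODE_11   0xC0   /* 16 hues, one luminance    (Graphics 11) */

void select_gtia_interpretation(uint8_t mode_bits)
{
    /* Replace only the mode-select bits, preserving the priority bits. */
    GPRIOR_SHADOW = (uint8_t)((GPRIOR_SHADOW & (uint8_t)~GTIA_MODE_MASK) | mode_bits);
}
```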
Of these modes, Atari BASIC Graphics 9 is particularly notable. It enables the Atari to display gray-scale digitized photographs, which, despite their low resolution, were very impressive at the time. Additionally, by allowing 16 shades of a single hue rather than the 8 shades available in other graphics modes, it increases the number of different colors the Atari can display from 128 to 256. Unfortunately, this feature is limited to this mode only, which due to its low resolution was not widely used.
The Antic 2 and 3 text modes are capable of displaying the same color ranges as mode F graphics when using the GTIA's alternate color interpretations. However, since the pixel reduction also applies and turns 8 pixel wide, 2 color text into 2 pixel wide, 16 color blocks these modes are unsuitable for actual text, and so these graphics modes are not popular outside of demos. Effective use of the GTIA color interpretation feature with text modes requires a carefully constructed character set treating characters as pixels. This method allows display of an apparent GTIA "high resolution" graphics mode that would ordinarily occupy 8K of RAM to instead use only about 2K (1K for the character set, and 1K for the screen RAM and display list.)
The GTIA also fixed an error in CTIA that caused graphics to be misaligned by "half a color clock". The side effect of the fix was that programs that relied on color artifacts in high-resolution monochrome modes would show a different pair of colors.
Atari owners can determine if their machine is equipped with the CTIA or GTIA by executing the BASIC command POKE 623,64. If the screen blackens after execution, the machine is equipped with the new GTIA chip. If it stays blue, the machine has a CTIA chip instead.
Bugs
The last Atari XE computers made for the Eastern European market were built in China. Many if not all have a buggy PAL GTIA chip. The luma values in Graphics 9 and higher are at fault, appearing as stripes. Replacing the chip fixes the problem. Also, there have been attempts to fix faulty GTIA chips with some external circuitry.
See also
List of home computers by video hardware
References
External links
De Re Atari published by the Atari Program Exchange
Mapping the Atari, Revised Edition by Ian Chadwick
GTIA chip data sheet scanned to PDF.
jindroush site(archived) GTIA info
CTIA die shot
GTIA die shot
Atari 8-bit family
Graphics chips
Integrated circuits
Computer display standards
|
14449114
|
https://en.wikipedia.org/wiki/Dan%20Dworsky
|
Dan Dworsky
|
Daniel Leonard Dworsky (October 4, 1927 – January 19, 2022) was an American architect who was a longstanding member of the American Institute of Architects College of Fellows. Among other works, Dworsky designed Crisler Arena, the basketball arena at the University of Michigan named for Dworsky's former football coach, Fritz Crisler. Other professional highlights include designing Drake Stadium at UCLA, the Federal Reserve Bank in Los Angeles and the Block M seating arrangement at Michigan Stadium. He is also known for a controversy with Frank Gehry over the Walt Disney Concert Hall.
Previously, Dworsky was an American football linebacker, fullback and center who played professional football for the Los Angeles Dons of the All-America Football Conference in 1949, and college football for the Michigan Wolverines from 1945 to 1948. He was an All-American on Michigan's undefeated national championship teams in 1947 and 1948.
College football at the University of Michigan
Born in Minneapolis, Minnesota in 1927, Dworsky lived in the Twin Cities and Sioux Falls, South Dakota before attending the University of Michigan. Dworsky was a four-year starter for Fritz Crisler's Michigan Wolverines football teams from 1945 to 1948. He played linebacker, fullback, and center for the Michigan Wolverines and was a key player on the undefeated 1947 and 1948 Michigan football teams that won consecutive national championships. The 1947 team, anchored by Len Ford, Alvin Wistert, Dworsky and Rick Kempthorn, has been described as the best team in the history of Michigan football. Dworsky won a total of six varsity letters at Michigan, four in football and two in wrestling where he competed in the heavyweight division. Dworsky is among the famous Jews in football, and has been extensively profiled in encyclopedic Jewish publications. Dworsky married the former Sylvia Ann Taylor on August 10, 1957. The couple has three children: Douglas, Laurie and Nancy. They resided in Los Angeles.
1947 season
The 1947 Michigan Wolverines football team went 10–0 and outscored their opponents 394 to 53. Dworsky led a defensive unit that gave up an average of 5.3 points per game and shut out Michigan State (55–0), Pitt (60–0), Indiana (35–0), Ohio State (21–0), and USC (49–0). He also played fullback and center for the 1947 team and was named a third team All-American by the American Football Coaches Association. In a 1988 interview with the Los Angeles Times, Dworsky described the 1947 team's defensive scheme as follows: "We were an intelligent team and we had some complex defenses, the nature of which you see today. I called the defensive signals and we would shift people, looping, or stunting."
After going undefeated and winning the Big Ten championship, Michigan was invited to Pasadena to face the USC Trojans in the 1948 Rose Bowl—the Wolverines' first bowl game since 1901. Just before Christmas, the team boarded a train in Ann Arbor for a three-day trip across the country. With little to do on the train, Alvin Wistert recalled that Dworsky entertained the team with music. "Dan Dworsky was a piano player. We'd gather around and sing. There was a piano in the last car."
After the long trip, the Wolverines beat the Trojans 49–0. Dworsky recalled that the coaching staff did an excellent job of scouting the Trojans. "When we went to the Rose Bowl, we had USC down pat. We knew their system as well as they did." The Trojans gained only 91 yards rushing and 42 yards passing, moving past midfield only twice. Dworsky played center during the Rose Bowl, blocking USC's All-American tackle (and future Los Angeles city councilman), John Ferraro.
In Dworsky's collegiate days, the final national rankings were determined before the bowl games. At the end of the regular season in 1947, Michigan was ranked No. 2 behind Notre Dame, but after defeating USC 49–0 in the Rose Bowl, the Associated Press held a special poll, and Michigan replaced Notre Dame as the national champion by a vote of 226 to 119. Dworsky later noted, "Notre Dame still claims that national championship and so do we."
1948 season
The 1948 Michigan Wolverines football team went 9–0 and outscored their opponents 252 to 44. The defensive unit led by Dworsky held its opponents to just 4.9 points per game, including shutouts against Oregon (14–0), Purdue (40–0), Northwestern (28–0), Navy (35–0), and Indiana (54–0). The 1948 Wolverines finished the season ranked No. 1 by the AP, but Big Ten Conference rules prohibited a team from playing in the Rose Bowl two years in a row. Dworsky did, however, play in the 1948 Blue–Gray All Star game.
Relationship with Fritz Crisler
Dworsky was a four-year starter under Michigan's legendary coach, Fritz Crisler. Dworsky later said that Crisler's "real genius" was in blending all the elements. The 1947 championship team included several older veteran players who had returned from military service. Dworsky recalled: "About half of us were 18-year old kids, and half were veterans. We had guys who were serious guys and guys who were excitable. Fritz struck a balance, so we never had to be pushed, but we never lost our focus either."
Dworsky recalled: "Crisler was not only an intellectual in strategy, but also in the way he ran practices.... He ran practices rigidly and we called him 'The Lord'. He would allow it to rain, or not. He was a Douglas MacArthur-type figure, handsome and rigid.... I sculpted him and gave him the bust in 1971." Dworsky also kept another bust of Crisler in his office.
Professional football with the Los Angeles Dons
In 1949, Dworsky was the first round draft pick of the Los Angeles Dons of the All-America Football Conference. The Dons were the first professional football team in Los Angeles. Dworsky played eleven games with the Dons in 1949, his only season in professional football. Dworsky played linebacker and blocking back for the Dons and had one interception and one kick return for 14 yards. The AAFC disbanded after the 1949 season, and Dworsky turned down an offer from the Pittsburgh Steelers to return to the University of Michigan where he graduated in 1950 with a degree in architecture. Dworsky later noted: "It was a toss-up whether I would become a pro football player or an architect. Being a linebacker is good conditioning for a young designer. You learn to block the bull coming at you from all sides."
Career as an architect
Overview of Dworsky's practice
After receiving his degree in architecture in 1950, Dworsky moved to Los Angeles and served as an apprentice in the early 1950s with prominent local early modernists William Pereira, Raphael Soriano, and Charles Luckman. In 1953, Dworsky began his own architecture firm in Los Angeles, known as Dworsky Associates. The firm grew into one of the most prominent architectural firms in California, creating major public buildings in California. Dworsky Associates won the 1984 Firm of the Year Award from the California Council of the American Institute of Architects. In September 2000, Dworsky Associates merged with CannonDesign and ceased to operate as an independent firm.
Architectural style
Dworsky belongs to the generation of post-World War II modernists that took its cues from the 1920s German Bauhaus and the French-Swiss master Le Corbusier. In 1988, Dworsky noted: "I am most intrigued by the essential mystery of architecture. For me, built space will always be a kind of theater, a stage on which life is played, and played out. That's why I keep on being an architect." Asked what inspires his architecture, Dworsky said he draws from the "solid, resolved concepts" of modern designers such as Le Corbusier and Marcel Breuer, while being encouraged on occasion to experiment by such "new wave" designers as Frank Gehry and Eric Owen Moss.
Crisler Arena and the Block "M"
Dworsky's first major commission was to design a basketball arena for his alma mater, the University of Michigan. The members of the 1947 Michigan Wolverines football team had reunions with Fritz Crisler every five years in Ann Arbor, and it was at one of those reunions that Crisler (by then the school's athletic director) gave Dworsky one of his big breaks, asking him to design the arena. Built in 1967, the arena was named Crisler Arena, as a tribute to the coach. Dworsky's design of the arena was well received and was said to demonstrate "his ability to combine majesty of scale with human accessibility". The roof of Crisler Arena is made of two plates, each weighing approximately 160 tons. The bridge-like construction allows them to expand or contract given the change of seasons or the weight of the snow. Crisler Arena remains the home of Michigan's basketball team and houses memorabilia and trophies from all Wolverine varsity athletic teams.
In 1965, the wooden benches at Michigan Stadium were replaced with blue fiberglass benches. Dworsky designed a yellow "Block M" for the stands on the eastern side of the stadium, just above the tunnel.
Drake Stadium at UCLA
After his work on Crisler Arena, Dworsky was commissioned by UCLA to design a track and field stadium on the university's central campus. Dworsky designed the stadium, known as Drake Stadium. Since its inaugural meet on February 22, 1969, the stadium has been the site of numerous championship meets, including the National AAU track & field championships in 1976, 1977, and 1978. It is also used each year for special campus events, such as the annual UCLA Commencement Exercises in June.
Walt Disney Concert Hall controversy
In February 1989, the Walt Disney Concert Hall Committee selected Dworsky as executive architect to work with designated architect Frank Gehry in designing the future home of the Los Angeles Philharmonic. Dworsky was selected to translate Gehry's conceptual designs into working drawings that would meet building code specifications. By 1994, the cost of the project had skyrocketed to $160 million (it eventually reached $274 million), and controversy halted the project. By 1996, a major donor was sought to complete the project by 2001 (four years behind schedule). Gehry and his design came under fire, and some considered him a spoiled, impractical artist.
Gehry publicly blamed Dworsky: "The executive architect was incapable of doing drawings that had this complexity. We helped select that firm. I went to Daniel, supposedly a friend, and I said, 'This is going to fail and we now have the capability to do it, so let us ghost-write it.'" Dworsky refused. Gehry was also quoted in the Los Angeles Times as saying: "We had the wrong executive architect doing the drawings. I helped pick him, I'm partly responsible. It brought us to a stop." Gehry told Los Angeles magazine in 1996 that he "no longer speaks to his former friend (Dworsky)". Gehry continued his public attacks on Dworsky: "He (Dworsky) made a lot of money. He begged me for the job. I'd like to shoot him."
Dworsky was eventually told to stop working on the drawings before he completed them, but he defended himself against Gehry's criticism. "Knowledgeable people were supportive of us. They were saying it's a very complex and unusual design, and they can understand the difficulties in trying to achieve this within a limited budget and a limited schedule. It was unfortunate that Frank came out with his criticism, but he was the center of the storm, having designed the building, and he was just trying to lessen the blame on himself."
Dworsky also told the Los Angeles Times: "This is a one-of-a-kind building. You just don't simply open up the plans and understand them quickly." Dworsky's allies refer to Gehry's work as "confusing". Disney Hall official Frederick M. Nicholas also defended Dworsky's work against Gehry's attacks, denying that there were any problems with the Dworsky drawings not attributable to fast-tracking. Nicholas said: "They were not 'bad' drawings. It was a question of the subs not understanding them."
Personal life and death
Dworsky died in Los Angeles on January 19, 2022, at the age of 94.
Major works
The major works credited to Dworsky and his firm include the following:
The Jerry Lewis Neuromuscular Research Center at UCLA (1979).
The Tom Bradley International Terminal at Los Angeles International Airport (1984).
A planned community complex for the California School for the Blind in Fremont, California. The design won a merit award from the California AIA.
The Theater Arts Building at California State University Dominguez Hills. Dworsky cited the theater as one of his favorite projects. Photograph of Building
The Angelus Plaza residential complex in the Bunker Hill area of downtown Los Angeles (1982) Photograph of Building
The Ventura County Jail.
The Los Angeles Branch of the Federal Reserve Bank located at Grand Avenue and Olympic Boulevard in downtown Los Angeles (1987). Dworsky Associates won several awards for its design of the $50 million building. Photograph of Building
The Northrop Electronics Division Headquarters in Hawthorne, California. Dworsky Associates received a Gold Nugget Grand Award for Best Commercial Office Building and top honors in the Crescent Architecture Awards competition for the design.
The Kilroy Airport Center in Long Beach, California, a complex of office buildings fronting the 405 Freeway with direct runway access to the Long Beach Airport for private aircraft (1987). Photograph of Building
The Westwood Terrace building on Sepulveda Boulevard in West Los Angeles, California occupied by New World Entertainment.Photograph of Building
The 20-story City Tower in Orange, California near the intersection of the Garden Grove (22) and Santa Ana (5) freeways in Orange County. Photograph of Building
The Home Savings building on Ventura Boulevard in Sherman Oaks, California.
The Metropolitan, a 14-story upscale rental complex in downtown Los Angeles’ South Park area.
The Van Nuys Municipal Court building in Van Nuys, California. Dworsky Associates received the Kaufman & Broad Award for Outstanding New Public or Civic Project for the design.
The Federal Office Building in Long Beach, California. Dworsky Associates was awarded a 1992 Design Award from the General Services Administration for its design of the federal building.
The renovation of the Carnation Building at 5055 Wilshire Boulevard in Hollywood. The renovated building was occupied by The Hollywood Reporter, Billboard, and other entertainment industry companies.
The Beverly Hills Main Post Office in Beverly Hills, California. Dworsky Associates received a Beautification Award from the Los Angeles Business Council for the design.
The San Joaquin County Jail in French Camp, California. Shortly after the prison opened, six prisoners escaped after cutting through a one-inch bar in the dayroom with a hacksaw. The prison break led to finger-pointing among the construction firm, the architect, and the prison guards over who was responsible for the lapse in security.
The UC Riverside Alumni and Visitors Center (1996). Photographs
The Thousand Oaks Civic Arts Plaza, a project on which Dworsky Associates teamed with New Mexico architect Antoine Predock. The New Mexico chapter of the AIA gave Predock and Dworsky Associates an award in 1996 for their work on the Civic Arts Plaza.
The Calexico Port of Entry building in Calexico, California. The innovative design won the highest award from the California AIA, and it won a Presidential Design Award from President Bill Clinton. Photos and Drawings of Award Winning Calexico Port of Entry
Beckman Hall at Chapman University in Orange, California (1999). Photograph of Building
The Lloyd D. George Federal Courthouse in Las Vegas, Nevada (2000). Photographs of Courthouse
The Hollywood-Highland station on the Metro B Line in the heart of Hollywood. Photograph of Station
Awards and honors
Dworsky has received numerous national, regional and community awards for design excellence, including the following:
Dworsky's numerous award-winning projects in his first 14 years of practice led to his election to the American Institute of Architects College of Fellows at the early age of 41.
Gold Medal Award from the Los Angeles Chapter of the American Institute of Architects
Lifetime Achievement Award for Distinguished Service from the American Institute of Architects, California Council, awarded in 2004. In granting the award, the Council noted that Dworsky had "made a major, positive impact on California architecture" and his "strong, simple sculpted work has provided a compelling statement for California architecture the past half century".
He was voted one of the twelve most distinguished architects in Los Angeles.
Dworsky Associates won the 1984 Firm of the Year Award from the American Institute of Architects, California Council, for "excellence in design of distinguished architecture" and reaching for a livelier style beyond the boundaries of conventional modernism.
He was honored by the Southern California Institute of Architecture in May 1986 for his professional accomplishments and his efforts on behalf of the school's scholarship program.
Dworsky was awarded a $3.5 million grant by the California Board of Corrections in 1982 to study the idea of the modular jail.
Dworsky served on the Architectural Evaluation Board for the County of Los Angeles.
Dworsky also served on the board of directors and the "directors circle" of the Southern California Institute of Architecture.
Notes
External links
Photo of Soboleski, Dworsky, Wistert, Elliott and Crisler at 1948 Rose Bowl
Photo of 1948 Rose Bowl Team - Dworsky 3rd from left in back row
1927 births
2022 deaths
20th-century American architects
21st-century American Jews
American football centers
Architects from California
Los Angeles Dons players
Michigan Wolverines football players
Players of American football from Minneapolis
Sportspeople from Sioux Falls, South Dakota
Jewish American sportspeople
Jewish architects
Taubman College of Architecture and Urban Planning alumni
American wrestlers
Players of American football from South Dakota
Players of American football from Los Angeles
|
37015750
|
https://en.wikipedia.org/wiki/Oduduwa%20University
|
Oduduwa University
|
Oduduwa University, established in 2009, is a private higher education institution located in Ile Ife, Osun State, Nigeria. The university is officially accredited and recognized by the National Universities Commission of Nigeria as a higher education institution. It is ranked 104th in the country and 10617th in the world. Oduduwa University offers courses and programs leading to officially recognized higher education degrees in several areas of study.
Oduduwa University is located in Ipetumodu, Ile Ife, Osun State, Nigeria. It was named after Oduduwa, the progenitor of the Yoruba people.
Colleges
The University is made up of four colleges:
College of Management and Social Sciences (CMSS) which consists of eight departments
Economics
Accounting
Banking & Finance
Business Administration
Mass Communication / Media Technology
Public Administration
International Relations
Political Science
College of Natural and Applied Sciences (CNAS) which consists of six departments
Physics
Chemical Sciences (Industrial Chemistry, Chemistry, and Biochemistry)
Biological Sciences
Mathematics and Statistics
Computer Science
Microbiology
College of Environmental Design and Management (CEDM) which consists of three departments
Estate Management
Quantity Surveying
Architecture
College of Engineering and Technology (CET) which consists of three departments
Computer Engineering
Electronic/Electrical Engineering
Mechanical Engineering
College of Management and Social Sciences (CMSS)
Only bachelor's degree programs are offered at present.
Business Administration
Mass Communication & Media Technology
Economics
Banking & Finance
Accounting
Public Administration
Political Science
International Relations
College of Natural and Applied Sciences (CNAS)
Only bachelor's degree programs are offered at present.
Mathematics/Statistics
Mathematics/Computer Science
Computer Science
Physics (Electronics)
Chemistry
Biochemistry
Microbiology/Pre-medicine
Industrial Chemistry
College of Environmental Design and Management (CEDM)
Only bachelor's degree programs are offered at present.
Architecture
Estate Management
Quantity Surveying
College of Engineering and Technology (CET)
Only bachelor's degree programs are offered at present.
Computer Engineering
Electronic/Electrical Engineering
Mechanical Engineering
A combined program with Computer Science is also available.
Centres
The following centres are to complement academic and research activities of the University:
Centre for Information and Communications Technology (CICT)
Centre for Entrepreneurial and Vocational Training (CEV)
Centre for Professional Studies (CPS)
Centre for Cultural Studies (CCS)
Centre for Foundation and Extra-mural Studies (CFES)
Centre for International Studies/Exchange Programmes
Centre for Communication and Leadership Training (CCL)
All undergraduate students of the University pass through the Centre for Information and Communications Technology and the Centre for Entrepreneurial and Vocational Training.
References
External links
Oduduwa University website
Educational institutions established in 2009
Universities and colleges in Nigeria
2009 establishments in Nigeria
|
490316
|
https://en.wikipedia.org/wiki/Yggdrasil%20Linux/GNU/X
|
Yggdrasil Linux/GNU/X
|
Yggdrasil Linux/GNU/X, or LGX (pronounced igg-drah-sill), is a discontinued early Linux distribution developed by Yggdrasil Computing, Incorporated, a company founded by Adam J. Richter in Berkeley, California.
Yggdrasil was the first company to create a live CD Linux distribution. Yggdrasil Linux described itself as a "Plug-and-Play" Linux distribution, automatically configuring itself for the hardware.
The last release of Yggdrasil was in 1995.
Yggdrasil is the World Tree of Norse mythology. The name was chosen because Yggdrasil took disparate pieces of software and assembled them into a complete product. Yggdrasil's company motto was "Free Software For The Rest of Us".
Yggdrasil is compliant with the Unix Filesystem Hierarchy Standard.
History and releases
Yggdrasil announced their ‘bootable Linux/GNU/X-based UNIX(R) clone for PC compatibles’ on 24 November 1992 and made the first release on 8 December 1992.
This alpha release contained the 0.98.1 version of the Linux kernel, the X11R5 version of the X Window System supporting up to 1024x768 with 256 colours, various GNU utilities such as their C/C++ compiler, the GNU Debugger, bison, flex, and make, TeX, groff, Ghostscript, the elvis and Emacs editors, and various other software. Yggdrasil's alpha release required a 386 computer with 8 MB RAM and a 100 MB hard disk. The alpha release was missing the source code for some of the packages, such as elvis.
A beta release was made on 18 February 1993. The beta's cost was US$60. LGX's beta release in 1993 contained the 0.99.5 version of the Linux kernel, along with other software from GNU and X. By 22 August 1993, the Yggdrasil company had sold over 3100 copies of the LGX beta distribution.
The production release carried a price tag of US$99. However, Yggdrasil was offered free to any developer whose software was included in the CD distribution. According to an email from the company's founder, the marginal cost of each subscription was $35.70.
Early Yggdrasil releases were also available from stores selling CD-ROM software.
Yggdrasil Computing, Incorporated
Adam J. Richter started the Yggdrasil company together with Bill Selmeier. Richter spoke to Michael Tiemann about setting up a business, but was not interested in joining forces with Cygnus.
Richter was a member of the League for Programming Freedom. Richter was using only a 200 MB hard disk when building the alpha release of LGX, which made it impractical to include the source code of some of the packages contained on the CD-ROM.
Yggdrasil Incorporated published some of the early Linux compilation books, such as The Linux Bible: The GNU Testament (), and contributed significantly to file system and X Window System functionality of Linux in the early days of their operation.
The company moved to San Jose, California in 1996. In 1996, Yggdrasil Incorporated released the Winter 1996 edition of Linux Internet Archives; six CDs of Linux software from Tsx-11 and Sunsite, the GNU archive on prep.ai.mit.edu, the X11R6 archives including the free contributed X11R6 software from ftp.x.org, the Internet RFC standards, and a total of nine non-Yggdrasil Linux distributions.
The company remained active until at least 2000, when it released the Linux Open Source DVD, but its website was later taken offline and the company has not released anything since.
The company's last corporate filing was in January 2004. The California Secretary of State lists it as suspended.
The company once made an offer to donate 60% of the Yggdrasil CDROM sales revenues to the Computer Systems Research Group, but founder Adam J. Richter later indicated that the company would lose too much money and changed the offer accordingly, while still maintaining donations to CSRG.
The company also had volume discount plans.
See also
Arena, a web browser once developed by Yggdrasil Computing
MCC Interim Linux
References
External links
Yggdrasil Linux/GNU/X operating system distribution from 1995 (images)
ibiblio's mirror of 1996's release of Yggdrasil Linux/GNU/X operating system distribution (docs)
DistroWatch on Yggdrasil
Discontinued Linux distributions
1992 software
Linux distributions
|
3271413
|
https://en.wikipedia.org/wiki/History%20of%20computer%20science
|
History of computer science
|
The history of computer science began long before the modern discipline of computer science, with precursors usually appearing in forms like mathematics or physics. Developments in previous centuries alluded to the discipline that we now know as computer science. This progression, from mechanical inventions and mathematical theories towards modern computer concepts and machines, led to the development of a major academic field, massive technological advancement across the Western world, and the basis of a massive worldwide trade and culture.
Prehistory
The earliest known tool for use in computation was the abacus, developed in the period between 2700 and 2300 BCE in Sumer. The Sumerians' abacus consisted of a table of successive columns which delimited the successive orders of magnitude of their sexagesimal number system. Its original style of usage was by lines drawn in sand with pebbles. Abaci of a more modern design are still used as calculation tools today, such as the Chinese abacus.
In the 5th century BC in ancient India, the grammarian Pāṇini formulated the grammar of Sanskrit in 3959 rules known as the Ashtadhyayi which was highly systematized and technical. Panini used metarules, transformations and recursions.
The Antikythera mechanism is believed to be an early mechanical analog computer. It was designed to calculate astronomical positions. It was discovered in 1901 in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to circa 100 BC.
Mechanical analog computer devices appeared again a thousand years later in the medieval Islamic world and were developed by Muslim astronomers, such as the mechanical geared astrolabe by Abū Rayhān al-Bīrūnī, and the torquetum by Jabir ibn Aflah. According to Simon Singh, Muslim mathematicians also made important advances in cryptography, such as the development of cryptanalysis and frequency analysis by Alkindus. Programmable machines were also invented by Muslim engineers, such as the automatic flute player by the Banū Mūsā brothers, and Al-Jazari's programmable humanoid automata and castle clock, which is considered to be the first programmable analog computer. Technological artifacts of similar complexity appeared in 14th century Europe, with mechanical astronomical clocks.
When John Napier discovered logarithms for computational purposes in the early 17th century, there followed a period of considerable progress by inventors and scientists in making calculating tools. In 1623 Wilhelm Schickard designed a calculating machine, but abandoned the project when the prototype he had started building was destroyed by a fire in 1624. Around 1640, Blaise Pascal, a leading French mathematician, constructed a mechanical adding device based on a design described by Greek mathematician Hero of Alexandria. Then in 1672 Gottfried Wilhelm Leibniz invented the Stepped Reckoner, which he completed in 1694.
In 1837 Charles Babbage first described his Analytical Engine, which is accepted as the first design for a modern computer. The analytical engine had expandable memory, an arithmetic unit, and logic processing capabilities able to interpret a programming language with loops and conditional branching. Although never built, the design has been studied extensively and is understood to be Turing equivalent. The analytical engine would have had a memory capacity of less than 1 kilobyte and a clock speed of less than 10 hertz.
Considerable advancement in mathematics and electronics theory was required before the first modern computers could be designed.
Binary logic
In 1702, Gottfried Wilhelm Leibniz developed logic in a formal, mathematical sense with his writings on the binary numeral system. In his system, the ones and zeros also represent true and false values or on and off states. But it took more than a century before George Boole published his Boolean algebra in 1854 with a complete system that allowed computational processes to be mathematically modeled.
By this time, the first mechanical devices driven by a binary pattern had been invented. The industrial revolution had driven forward the mechanization of many tasks, and this included weaving. Punched cards controlled Joseph Marie Jacquard's loom in 1801, where a hole punched in the card indicated a binary one and an unpunched spot indicated a binary zero. Jacquard's loom was far from being a computer, but it did illustrate that machines could be driven by binary systems.
Emergence of a discipline
Charles Babbage and Ada Lovelace
Charles Babbage is often regarded as one of the first pioneers of computing. Beginning in the 1810s, Babbage had a vision of mechanically computing numbers and tables. Putting this into reality, Babbage designed a calculator to compute numbers up to 8 decimal points long. Continuing with the success of this idea, Babbage worked to develop a machine that could compute numbers with up to 20 decimal places. By the 1830s, Babbage had devised a plan to develop a machine that could use punched cards to perform arithmetical operations. The machine would store numbers in memory units, and there would be a form of sequential control, meaning that one operation would be carried out before another in such a way that the machine would produce an answer and not fail. This machine was to be known as the “Analytical Engine”, the first true representation of the modern computer.
Ada Lovelace (Augusta Ada Byron) is credited as the pioneer of computer programming and is regarded as a mathematical genius. Lovelace began working with Charles Babbage as an assistant while Babbage was working on his “Analytical Engine”, the first mechanical computer. During her work with Babbage, Ada Lovelace became the designer of the first computer algorithm, which could compute Bernoulli numbers, although this is arguable, as Babbage was the first to design the difference engine and, consequently, its corresponding difference-based algorithms, making him arguably the first computer algorithm designer. Moreover, Lovelace's work with Babbage led her to predict that future computers would not only perform mathematical calculations but also manipulate symbols, mathematical or not. While she was never able to see the results of her work, as the “Analytical Engine” was not created in her lifetime, her efforts beginning in the 1840s did not go unnoticed.
Contributions to Babbage's Analytical Engine during the first half of the 20th century
Following Babbage, although at first unaware of his earlier work, was Percy Ludgate, a clerk to a corn merchant in Dublin, Ireland. He independently designed a programmable mechanical computer, which he described in a work that was published in 1909. Two other inventors, Leonardo Torres y Quevedo and Vannevar Bush, also did follow on research based on Babbage's work. In his Essays on Automatics (1913) Torres y Quevedo designed a Babbage type of calculating machine that used electromechanical parts which included floating point number representations and built a prototype in 1920. Bush's paper Instrumental Analysis (1936) discussed using existing IBM punch card machines to implement Babbage's design. In the same year he started the Rapid Arithmetical Machine project to investigate the problems of constructing an electronic digital computer.
Charles Sanders Peirce and electrical switching circuits
In an 1886 letter, Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. During 1880–81 he showed that NOR gates alone (or alternatively NAND gates alone) can be used to reproduce the functions of all the other logic gates, but this work remained unpublished until 1933. The first published proof was by Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called the Sheffer stroke; the logical NOR is sometimes called Peirce's arrow. Consequently, these gates are sometimes called universal logic gates.
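Peirce's result, that NAND (or NOR) alone suffices to build the other gates, can be checked mechanically; the short sketch below rebuilds NOT, AND, and OR from a single NAND function and prints their truth tables:

```c
#include <stdio.h>

/* A single NAND gate on 1-bit inputs. */
static int nand(int a, int b) { return !(a && b); }

/* The other basic gates rebuilt from NAND alone, as the universality result implies. */
static int not_(int a)        { return nand(a, a); }
static int and_(int a, int b) { return not_(nand(a, b)); }
static int or_(int a, int b)  { return nand(not_(a), not_(b)); }

int main(void)
{
    int a, b;
    for (a = 0; a <= 1; ++a)
        for (b = 0; b <= 1; ++b)
            printf("a=%d b=%d  NOT a=%d  a AND b=%d  a OR b=%d\n",
                   a, b, not_(a), and_(a, b), or_(a, b));
    return 0;
}
```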
Eventually, vacuum tubes replaced relays for logic operations. Lee De Forest's modification, in 1907, of the Fleming valve can be used as a logic gate. Ludwig Wittgenstein introduced a version of the 16-row truth table as proposition 5.101 of Tractatus Logico-Philosophicus (1921). Walther Bothe, inventor of the coincidence circuit, got part of the 1954 Nobel Prize in physics, for the first modern electronic AND gate in 1924. Konrad Zuse designed and built electromechanical logic gates for his computer Z1 (from 1935 to 1938).
Up to and during the 1930s, electrical engineers were able to build electronic circuits to solve mathematical and logic problems, but most did so in an ad hoc manner, lacking any theoretical rigor. This changed with switching circuit theory in the 1930s. From 1934 to 1936, Akira Nakashima, Claude Shannon, and Victor Shestakov published a series of papers showing that the two-valued Boolean algebra can describe the operation of switching circuits. This concept, of utilizing the properties of electrical switches to do logic, is the basic concept that underlies all electronic digital computers. Switching circuit theory provided the mathematical foundations and tools for digital system design in almost all areas of modern technology.
While taking an undergraduate philosophy class, Shannon had been exposed to Boole's work, and recognized that it could be used to arrange electromechanical relays (then used in telephone routing switches) to solve logic problems. His thesis became the foundation of practical digital circuit design when it became widely known among the electrical engineering community during and after World War II.
Alan Turing and the Turing machine
Before the 1920s, computers (sometimes computors) were human clerks that performed computations. They were usually under the lead of a physicist. Many thousands of computers were employed in commerce, government, and research establishments. Many of these clerks who served as human computers were women. Some performed astronomical calculations for calendars, others ballistic tables for the military.
After the 1920s, the expression computing machine referred to any machine that performed the work of a human computer, especially those in accordance with effective methods of the Church-Turing thesis. The thesis states that a mathematical method is effective if it could be set out as a list of instructions able to be followed by a human clerk with paper and pencil, for as long as necessary, and without ingenuity or insight.
Machines that computed with continuous values became known as the analog kind. They used machinery that represented continuous numeric quantities, like the angle of a shaft rotation or difference in electrical potential.
Digital machinery, in contrast to analog, was able to render the state of a numeric value and store each individual digit. Digital machinery used difference engines or relays before the invention of faster memory devices.
The phrase computing machine gradually gave way, after the late 1940s, to just computer as the onset of electronic digital machinery became common. These computers were able to perform the calculations that were performed by the previous human clerks.
Since the values stored by digital machines were not bound to physical properties like analog devices, a logical computer, based on digital equipment, was able to do anything that could be described "purely mechanical." The theoretical Turing Machine, created by Alan Turing, is a hypothetical device theorized in order to study the properties of such hardware.
The mathematical foundations of modern computer science began to be laid by Kurt Gödel with his incompleteness theorem (1931). In this theorem, he showed that there were limits to what could be proved and disproved within a formal system. This led to work by Gödel and others to define and describe these formal systems, including concepts such as mu-recursive functions and lambda-definable functions.
In 1936 Alan Turing and Alonzo Church independently, and also together, introduced the formalization of an algorithm, with limits on what can be computed, and a "purely mechanical" model for computing. This became the Church–Turing thesis, a hypothesis about the nature of mechanical calculation devices, such as electronic computers. The thesis states that any calculation that is possible can be performed by an algorithm running on a computer, provided that sufficient time and storage space are available.
In 1936, Alan Turing also published his seminal work on Turing machines, an abstract digital computing machine now simply referred to as the Universal Turing machine. This machine established the principle of the modern computer and was the birthplace of the stored-program concept that almost all modern-day computers use. These hypothetical machines were designed to formally determine, mathematically, what can be computed, taking into account limitations on computing ability. If a Turing machine can complete the task, it is considered Turing computable.
The Los Alamos physicist Stanley Frankel, has described John von Neumann's view of the fundamental importance of Turing's 1936 paper, in a letter:
Early computer hardware
The world's first electronic digital computer, the Atanasoff–Berry computer, was built on the Iowa State campus from 1939 through 1942 by John V. Atanasoff, a professor of physics and mathematics, and Clifford Berry, an engineering graduate student.
In 1941, Konrad Zuse developed the world's first functional program-controlled computer, the Z3. In 1998, it was shown to be Turing-complete in principle. Zuse also developed the S2 computing machine, considered the first process control computer. He founded one of the earliest computer businesses in 1941, producing the Z4, which became the world's first commercial computer. In 1946, he designed the first high-level programming language, Plankalkül.
In 1948, the Manchester Baby was completed; it was the world's first electronic digital computer that ran programs stored in its memory, like almost all modern computers. The influence on Max Newman of Turing's seminal 1936 paper on the Turing Machines and of his logico-mathematical contributions to the project, were both crucial to the successful development of the Baby.
In 1950, Britain's National Physical Laboratory completed Pilot ACE, a small scale programmable computer, based on Turing's philosophy. With an operating speed of 1 MHz, the Pilot Model ACE was for some time the fastest computer in the world. Turing's design for ACE had much in common with today's RISC architectures and it called for a high-speed memory of roughly the same capacity as an early Macintosh computer, which was enormous by the standards of his day. Had Turing's ACE been built as planned and in full, it would have been in a different league from the other early computers.
The first actual computer bug was a moth. It was stuck in between the relays on the Harvard Mark II.
While the invention of the term 'bug' is often but erroneously attributed to Grace Hopper, a future rear admiral in the U.S. Navy, who supposedly logged the "bug" on September 9, 1945, most other accounts conflict at least with these details. According to these accounts, the actual date was September 9, 1947 when operators filed this 'incident' — along with the insect and the notation "First actual case of bug being found" (see software bug for details).
Shannon and information theory
Claude Shannon went on to found the field of information theory with his 1948 paper titled A Mathematical Theory of Communication, which applied probability theory to the problem of how to best encode the information a sender wants to transmit. This work is one of the theoretical foundations for many areas of study, including data compression and cryptography.
Wiener and cybernetics
From experiments with anti-aircraft systems that interpreted radar images to detect enemy planes, Norbert Wiener coined the term cybernetics from the Greek word for "steersman." He published "Cybernetics" in 1948, which influenced artificial intelligence. Wiener also compared computation, computing machinery, memory devices, and other cognitive similarities with his analysis of brain waves.
John von Neumann and the von Neumann architecture
In 1946, a model for computer architecture was introduced and became known as Von Neumann architecture. Since 1950, the von Neumann model provided uniformity in subsequent computer designs. The von Neumann architecture was considered innovative as it introduced an idea of allowing machine instructions and data to share memory space. The von Neumann model is composed of three major parts, the arithmetic logic unit (ALU), the memory, and the instruction processing unit (IPU). In von Neumann machine design, the IPU passes addresses to memory, and memory, in turn, is routed either back to the IPU if an instruction is being fetched or to the ALU if data is being fetched.
Von Neumann's machine design uses a RISC (reduced instruction set computing) architecture, meaning the instruction set uses a total of 21 instructions to perform all tasks. (This is in contrast to CISC, complex instruction set computing, instruction sets which have more instructions from which to choose.) In the von Neumann architecture, main memory and the accumulator (the register that holds the result of logical operations) are the two memories that are addressed. Operations can be carried out as simple arithmetic (performed by the ALU: addition, subtraction, multiplication and division), conditional branches (more commonly seen now as if statements or while loops; the branches serve as go to statements), and logical moves between the different components of the machine, i.e., a move from the accumulator to memory or vice versa. Von Neumann architecture accepts fractions and instructions as data types. Finally, as the von Neumann architecture is a simple one, its register management is also simple. The architecture uses a set of seven registers to manipulate and interpret fetched data and instructions. These registers include the "IR" (instruction register), "IBR" (instruction buffer register), "MQ" (multiplier quotient register), "MAR" (memory address register), and "MDR" (memory data register). The architecture also uses a program counter ("PC") to keep track of where in the program the machine is.
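The shared-memory fetch-decode-execute cycle described above can be illustrated with a toy accumulator machine. The sketch below is illustrative only: its three-instruction encoding is invented for the example and is not von Neumann's actual instruction set.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy von Neumann-style machine: instructions and data share one memory array,
   a program counter fetches instructions, and an accumulator holds ALU results. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };   /* invented encoding */

int main(void)
{
    /* Each cell: high byte = opcode, low byte = operand address.  Data lives in
       the same array (cells 8 and 9), exactly as the shared-memory idea demands. */
    uint16_t mem[16] = {
        (OP_LOAD  << 8) | 8,     /* 0: AC <- mem[8]      */
        (OP_ADD   << 8) | 9,     /* 1: AC <- AC + mem[9] */
        (OP_STORE << 8) | 10,    /* 2: mem[10] <- AC     */
        (OP_HALT  << 8),         /* 3: stop              */
        0, 0, 0, 0,
        2, 3, 0, 0, 0, 0, 0, 0   /* 8: data = 2, 9: data = 3 */
    };
    uint16_t pc = 0, ac = 0;

    for (;;) {                                  /* fetch-decode-execute loop */
        uint16_t ir   = mem[pc++];              /* fetch into the "IR"       */
        uint16_t op   = ir >> 8;
        uint16_t addr = ir & 0xFF;
        if (op == OP_HALT) break;
        else if (op == OP_LOAD)  ac = mem[addr];
        else if (op == OP_ADD)   ac = (uint16_t)(ac + mem[addr]);
        else if (op == OP_STORE) mem[addr] = ac;
    }
    printf("mem[10] = %u\n", (unsigned)mem[10]); /* prints 5 */
    return 0;
}
```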
John McCarthy, Marvin Minsky and artificial intelligence
The term artificial intelligence was coined by John McCarthy to describe the research proposed for the Dartmouth Summer Research Project. The naming of artificial intelligence also led to the birth of a new field in computer science. On August 31, 1955, a research project was proposed by John McCarthy, Marvin L. Minsky, Nathaniel Rochester, and Claude E. Shannon. The official project began in 1956 and consisted of several significant parts they felt would help them better understand artificial intelligence's makeup.
McCarthy and his colleagues' idea behind automatic computers was that if a machine is capable of completing a task, then a computer should be able to do the same by compiling a program to produce the desired results. They also concluded that the human brain was too complex to replicate, not by the machine itself but by the program; the knowledge needed to produce a program that sophisticated did not yet exist.
The concept behind this was to look at how humans understand our own language and the structure of how we form sentences, with their different meanings and rule sets, and to compare them to a machine process. What computers can understand is at the hardware level: machine language written in binary (1s and 0s), in a specific format that gives the computer the rule set for driving a particular piece of hardware.
Minsky's work examined how artificial neural networks could be arranged so as to have qualities similar to those of the human brain. However, he could only produce partial results and needed to take the research into this idea further.
McCarthy and Shannon's idea behind this theory was to develop a way to use complex problems to determine and measure the machine's efficiency through mathematical theory and computations. However, they were only able to obtain partial test results.
The idea behind self-improvement is how a machine would use self-modifying code to make itself smarter. This would allow for a machine to grow in intelligence and increase calculation speeds. The group believed they could study this if a machine could improve upon the process of completing a task in the abstractions part of their research.
The group thought that research in this category could be broken down into smaller groups, consisting of sensory and other forms of information about artificial intelligence. Abstractions in computer science can refer to mathematics and programming languages.
Their idea of computational creativity was to ask how a program or a machine can be seen as having ways of thinking similar to humans. They wanted to see if a machine could take a piece of incomplete information and improve upon it to fill in the missing details as the human mind can do. If the machine could do this, they needed to think about how the machine determined the outcome.
See also
Computer Museum
List of computer term etymologies, the origins of computer science words
List of pioneers in computer science
History of computing
History of computing hardware
History of software
History of personal computers
Timeline of algorithms
Timeline of women in computing
Timeline of computing 2020–2029
References
Sources
Further reading
Kak, Subhash: Computing Science in Ancient India; Munshiram Manoharlal Publishers Pvt. Ltd (2001)
The Development of Computer Science: A Sociocultural Perspective Matti Tedre's Ph.D. Thesis, University of Joensuu (2006)
External links
Computer History Museum
Computers: From the Past to the Present
The First "Computer Bug" at the Naval History and Heritage Command Photo Archives.
Bitsavers, an effort to capture, salvage, and archive historical computer software and manuals from minicomputers and mainframes of the 1950s, 1960s, 1970s, and 1980s
Oral history interviews
Computer science
History of computing
|
551786
|
https://en.wikipedia.org/wiki/Hacker%20Culture
|
Hacker Culture
|
Hacker Culture is a cultural criticism book written by Douglas Thomas that deals with hacker ethics and hackers.
Reception
Publishers Weekly reviewed Hacker Culture as "an intelligent and approachable book on one of the most widely discussed and least understood subcultures in recent decades."
San Francisco Chronicle reviewed Hacker Culture as "an unusually balanced history of the computer underground and its sensational representation in movies and newspapers."
References
External links
University of Minnesota Press
2002 non-fiction books
Computer books
Hacker culture
Books about computer hacking
Works about computer hacking
|
670916
|
https://en.wikipedia.org/wiki/HP%20Autonomy
|
HP Autonomy
|
HP Autonomy, previously Autonomy Corporation PLC, was an enterprise software company which was merged with Micro Focus in 2017. It was founded in Cambridge, United Kingdom in 1996.
Autonomy was acquired by Hewlett-Packard (HP) in October 2011. The deal valued Autonomy at $11.7 billion (£7.4 billion). Within a year, HP had written off $8.8 billion of Autonomy's value. HP claimed this resulted from "serious accounting improprieties" and "outright misrepresentations" by the previous management. The former CEO, Mike Lynch, alleged that the problems were due to HP's running of Autonomy.
HP recruited Robert Youngjohns, ex-Microsoft president of North America, to take over HP Autonomy in September 2012. In 2017, HP sold its Autonomy assets, as part of a wider deal, to the British software company Micro Focus.
History
Inception and expansion
Autonomy was founded in Cambridge, England by Michael Lynch, David Tabizel and Richard Gaunt in 1996 as a spin-off from Cambridge Neurodynamics, a firm specializing in computer-based fingerprint recognition. It used a combination of technologies born out of research at the University of Cambridge and developed a variety of enterprise search and knowledge management applications using adaptive pattern recognition techniques centered on Bayesian inference in conjunction with traditional methods. It maintained an aggressively entrepreneurial marketing approach, and sales controls described as a "rod of iron" - allegedly firing the weakest 5% of its sales force each quarter whilst cosseting the best sales staff "like rock stars".
Autonomy floated in 1998 on the NASDAQ exchange at a share price of approximately £0.30. At the height of the "dot-com bubble", the peak share price was £30.
December 2005: Autonomy acquired Verity, Inc., one of its main competitors, for approximately US$500 million. In 2005 Autonomy also acquired Neurodynamics.
May 2007: After exercising an option to buy a stake in technology start up Blinkx Inc, and combining it with its consumer division, Autonomy floated Blinkx on a valuation of $250 million.
July 2007: Autonomy acquired Zantaz, an email archiving and litigation support company, for $375 million.
October 2007: Autonomy acquired Meridio Holdings Ltd, a UK company based in Northern Ireland that specialised in Records Management software, for £20 million.
28 May 2008: Kainos extended its partnership with Autonomy for high-end information processing and Information Risk Management (IRM) to deliver information governance solutions to its customer base.
January 2009: Autonomy acquired Interwoven, a niche provider of enterprise content management software, for $775 million. Interwoven became Autonomy Interwoven and Autonomy iManage.
In 2009 Paul Morland, a leading analyst, started raising concerns about Autonomy's exaggerated performance claims.
June 2010: Autonomy announced that it was to acquire the Information Governance business of CA Technologies. Terms of the sale were not disclosed.
5 May 2011: The Mercedes Formula One team announced an $8 million sponsorship deal with Autonomy, and on 8 July 2010 Tottenham Hotspur FC announced a two-year sponsorship deal with Autonomy for their Premier League kit. For the 2011–12 season Spurs' Premier League shirt featured Autonomy's Augmented Reality technology Aurasma.
16 May 2011: Autonomy acquired Iron Mountain Digital, a pioneer in E-discovery and online backup solutions provider, for $380 million from Iron Mountain Incorporated.
Hewlett-Packard
18 August 2011: Hewlett-Packard announced that it would purchase Autonomy for US$42.11 per share, a premium of around 79% over the market price, in a deal that was widely criticized as "absurdly high", a "botched strategy shift" and a "chaotic" attempt to rapidly reposition HP and enhance earnings by expanding the high-margin software services sector. The transaction was unanimously approved by the boards of directors of both HP and Autonomy, and the Autonomy board recommended that its shareholders accept the offer. On 3 October 2011 HP closed the deal, announcing that it had acquired around 87.3% of the shares for around $10.2 billion, valuing the company at around $11.7 billion in total.
May 2012: Mike Lynch left his role as Autonomy CEO after a significant drop in revenue in the previous quarter.
September 2012: Robert Youngjohns was appointed SVP & GM of Autonomy/Information Management Business Unit.
November 2012: Hewlett-Packard announced that it was taking an $8.8 billion accounting charge after claiming "serious accounting improprieties" and "outright misrepresentations" at Autonomy; its share price fell to a decades' low on the news. Previous management in turn accused HP of a "textbook example of defensive stalling" to conceal evidence of its own prior knowledge and gross mismanagement and undermining of the company, noting public awareness since 2009 of its financial reporting issues and that even HP's CFO disagreed with the price paid. External observers stated that only a small part of the write-off appeared to be due to accounting mis-statements, and that HP had overpaid for businesses previously. Lynch alleged that the problems were due to HP's running of Autonomy, citing "internecine warfare" within the organization. Major culture clashes had been reported in the press.
The Serious Fraud Office (United Kingdom), and the U.S. Securities and Exchange Commission joined the FBI in investigating the potential anomalies. However, in January 2015 the SFO closed its investigation as the chance of successful prosecution was low.
Three lawsuits were brought by shareholders against HP, for the fall in value of HP shares. In August 2014 a United States district court judge threw out a proposed settlement involving a fee of up to $48 million: Autonomy's previous management had argued the settlement would be collusive and was intended to divert scrutiny of HP executives' own responsibility and knowledge.
November 2013: the HP Exstream customer communication management (CCM) business, formerly part of the HP LaserJet and Enterprise Solutions (LES) business, joined the HP Autonomy organization.
30 January 2014: the company announced that one of its partners, Kainos, had integrated HP IDOL 10.5, the new version of HP Autonomy's information analytics engine, into Kainos's electronic medical record platform, Evolve.
31 October 2015: Autonomy's software products were divided between HP Inc (HPQ) and Hewlett Packard Enterprise (HPE) as a result of the Hewlett-Packard Company separation. HP Inc was assigned ownership largely consisting of Autonomy's content management software components including TeamSite, Qfiniti, Qfiniti Managed Services, MediaBin, Optimost, and Explore. Hewlett Packard Enterprise retained ownership of the remaining software.
2 May 2016: OpenText acquired HP TeamSite, HP MediaBin, HP Qfiniti, HP Explore, HP Aurasma, and HP Optimost from HP Inc for $170 million.
In 2017, HP sold its Autonomy assets, as part of a wider deal valued at $8.8 billion, to the British software company Micro Focus.
In April 2018 Autonomy's ex-CFO Sushovan Hussain was charged in the US and found guilty of accounting fraud, and was subsequently allowed out on bail after his appeal raised a "substantial question over his conviction." Hussain's appeal failed in August 2020.
Based on Hussain's evidence, Lynch was charged with fraud in November 2018. Lynch said he would contest extradition and that he "vigorously rejects all the allegations against him."
In March 2019, HP brought a civil action in the UK courts. The case was heard in a trial lasting 93 days, with Lynch present in the witness box for 22 days, making it one of the longest cross-examinations in British legal history. In January 2022, the High Court in London ruled that HP had "substantially won" its civil case against Lynch and Hussain in which HP claimed that the two individuals had "artificially inflated Autonomy's reported revenues, revenue growth and gross margins".
In September 2020, Deloitte, which audited Autonomy between 2009 and 2011, was fined £15m for audits that contained “serious and serial failures”.
Products and services
HP Autonomy products include Intelligent Data Operating Layer (IDOL), which allows for search and processing of text taken from both structured data and unstructured human information—including e-mail and mobile data—whether it originates in a database, audio, video, text files or streams. The processing of such information by IDOL is referred to by Autonomy as Meaning-Based Computing.
HP Autonomy's offerings include:
Marketing Optimization
Web Experience Management, Web Optimization, Search Engine Marketing, Marketing Analytics, Contact Center Management, Rich Media Management
Information Analytics
Voice of the Customer, Media Intelligence, Video Surveillance, Big Data Analytics, SFA Intelligence
Unified Information Access
Enterprise Search, Knowledge Management, Content Access & Extraction
Information Archiving
Compliance Archiving, Litigation Readiness Archiving, Storage Optimization Archiving, Database & Application Archiving, Supervision & Policy Management
eDiscovery
Legal Hold, Early Case Assessment, Review & Analytics, Investigations, Post-Review
Enterprise Content Management
Policy-driven Information Management, Records Management, Legal Content Management, Business Process Management, Document and Email Management
Data Protection
Server Data Protection, Virtual Server Data Protection, Remote & Branch Office Data Protection, Endpoint Device Data Protection
Customer Communications Management
Healthcare Communications, Transactional Communications, State, Local & Federal Communications, Utility & Smart Meter Communications, High Volume Communications
Automated Information Capture
Multichannel automated information capture, Intelligent document recognition, Intelligent document classification, Remote capture, Validation
Haven OnDemand
The API platform for building data-rich applications. Haven OnDemand features a wide range of APIs for indexing and performing analytics on a range of information, from plain text and office documents through to audio and video.
Haven Search OnDemand
An easy-to-use and quick-to-deploy enterprise search solution built on top of the Haven OnDemand API platform.
Offices
The Autonomy business has primary offices in Cambridge and Sunnyvale, California, as well as other major offices in the UK, the US, Canada, France, Japan, Australia, Singapore, Germany, and smaller offices in India and throughout Europe and Latin America.
See also
List of enterprise search vendors
References
External links
"The Quest for Meaning" WIRED
Michael Lynch On Meaning-Based Computing
Accounting scandals
Hewlett-Packard acquisitions
Cloud computing providers
Defunct software companies of the United Kingdom
Software companies established in 1996
1996 establishments in England
Companies based in Cambridge
2011 mergers and acquisitions
2017 mergers and acquisitions
Micro Focus International
Enterprise search
|
24462958
|
https://en.wikipedia.org/wiki/Risk
|
Risk
|
In simple terms, risk is the possibility of something bad happening. Risk involves uncertainty about the effects/implications of an activity with respect to something that humans value (such as health, well-being, wealth, property or the environment), often focusing on negative, undesirable consequences. Many different definitions have been proposed. The international standard definition of risk for common understanding in different applications is “effect of uncertainty on objectives”.
The understanding of risk, the methods of assessment and management, the descriptions of risk and even the definitions of risk differ in different practice areas (business, economics, environment, finance, information technology, health, insurance, safety, security etc). This article provides links to more detailed articles on these areas. The international standard for risk management, ISO 31000, provides principles and generic guidelines on managing risks faced by organizations.
Definitions of risk
Oxford English Dictionary
The Oxford English Dictionary (OED) cites the earliest use of the word in English (in the spelling of risque from its French original, 'risque') as of 1621, and the spelling as risk from 1655. While including several other definitions, the OED 3rd edition defines risk as:
(Exposure to) the possibility of loss, injury, or other adverse or unwelcome circumstance; a chance or situation involving such a possibility.
The Cambridge Advanced Learner’s Dictionary gives a simple summary, defining risk as “the possibility of something bad happening”.
International Organization for Standardization
The International Organization for Standardization (ISO) Guide 73 provides basic vocabulary to develop common understanding on risk management concepts and terms across different applications. ISO Guide 73:2009 defines risk as:
effect of uncertainty on objectives
Note 1: An effect is a deviation from the expected – positive or negative.
Note 2: Objectives can have different aspects (such as financial, health and safety, and environmental goals) and can apply at different levels (such as strategic, organization-wide, project, product and process).
Note 3: Risk is often characterized by reference to potential events and consequences or a combination of these.
Note 4: Risk is often expressed in terms of a combination of the consequences of an event (including changes in circumstances) and the associated likelihood of occurrence.
Note 5: Uncertainty is the state, even partial, of deficiency of information related to, understanding or knowledge of, an event, its consequence, or likelihood.
This definition was developed by an international committee representing over 30 countries and is based on the input of several thousand subject matter experts. It was first adopted in 2002. Its complexity reflects the difficulty of satisfying fields that use the term risk in different ways. Some restrict the term to negative impacts (“downside risks”), while others include positive impacts (“upside risks”).
ISO 31000:2018 “Risk management — Guidelines” uses the same definition with a simpler set of notes.
Other
Many other definitions of risk have been influential:
“Source of harm”. The earliest use of the word “risk” was as a synonym for the much older word “hazard”, meaning a potential source of harm. This definition comes from Blount’s “Glossographia” (1661) and was the main definition in the OED 1st (1914) and 2nd (1989) editions. Modern equivalents refer to “unwanted events” or “something bad that might happen”.
“Chance of harm”. This definition comes from Johnson’s “Dictionary of the English Language” (1755), and has been widely paraphrased, including “possibility of loss” or “probability of unwanted events”.
“Uncertainty about loss”. This definition comes from Willett’s “Economic Theory of Risk and Insurance” (1901). This links “risk” to “uncertainty”, which is a broader term than chance or probability.
“Measurable uncertainty”. This definition comes from Knight’s “Risk, Uncertainty and Profit” (1921). It allows “risk” to be used equally for positive and negative outcomes. In insurance, risk involves situations with unknown outcomes but known probability distributions.
“Volatility of return”. Equivalence between risk and variance of return was first identified in Markowitz’s “Portfolio Selection” (1952). In finance, volatility of return is often equated to risk.
“Statistically expected loss”. The expected value of loss was used to define risk by Wald (1939) in what is now known as decision theory. The probability of an event multiplied by its magnitude was proposed as a definition of risk for the planning of the Delta Works in 1953, a flood protection program in the Netherlands. It was adopted by the US Nuclear Regulatory Commission (1975), and remains widely used.
“Likelihood and severity of events”. The “triplet” definition of risk as “scenarios, probabilities and consequences” was proposed by Kaplan & Garrick (1981). Many definitions refer to the likelihood/probability of events/effects/losses of different severity/consequence, e.g. ISO Guide 73 Note 4.
“Consequences and associated uncertainty”. This was proposed by Kaplan & Garrick (1981). This definition is preferred in Bayesian analysis, which sees risk as the combination of events and uncertainties about them.
“Uncertain events affecting objectives”. This definition was adopted by the Association for Project Management (1997). With slight rewording it became the definition in ISO Guide 73.
“Uncertainty of outcome”. This definition was adopted by the UK Cabinet Office (2002) to encourage innovation to improve public services. It allowed “risk” to describe either “positive opportunity or negative threat of actions and events”.
“Asset, threat and vulnerability”. This definition comes from the Threat Analysis Group (2010) in the context of computer security.
“Human interaction with uncertainty”. This definition comes from Cline (2015) in the context of adventure education.
Some resolve these differences by arguing that the definition of risk is subjective. For example:
No definition is advanced as the correct one, because there is no one definition that is suitable for all problems. Rather, the choice of definition is a political one, expressing someone’s views regarding the importance of different adverse effects in a particular situation.
The Society for Risk Analysis concludes that “experience has shown that to agree on one unified set of definitions is not realistic”. The solution is “to allow for different perspectives on fundamental concepts and make a distinction between overall qualitative definitions and their associated measurements.”
Practice areas
The understanding of risk, the common methods of management, the measurements of risk and even the definition of risk differ in different practice areas. This section provides links to more detailed articles on these areas.
Business risk
Business risks arise from uncertainty about the profit of a commercial business due to unwanted events such as changes in tastes, changing preferences of consumers, strikes, increased competition, changes in government policy, obsolescence etc.
Business risks are controlled using techniques of risk management. In many cases they may be managed by intuitive steps to prevent or mitigate risks, by following regulations or standards of good practice, or by insurance. Enterprise risk management includes the methods and processes used by organizations to manage risks and seize opportunities related to the achievement of their objectives.
Economic risk
Economics is concerned with the production, distribution and consumption of goods and services. Economic risk arises from uncertainty about economic outcomes. For example, economic risk may be the chance that macroeconomic conditions like exchange rates, government regulation, or political stability will affect an investment or a company’s prospects.
In economics, as in finance, risk is often defined as quantifiable uncertainty about gains and losses.
Environmental risk
Environmental risk arises from environmental hazards or environmental issues.
In the environmental context, risk is defined as “The chance of harmful effects to human health or to ecological systems”.
Environmental risk assessment aims to assess the effects of stressors, often chemicals, on the local environment.
Financial risk
Finance is concerned with money management and acquiring funds. Financial risk arises from uncertainty about financial returns. It includes market risk, credit risk, liquidity risk and operational risk.
In finance, risk is the possibility that the actual return on an investment will be different from its expected return. This includes not only "downside risk" (returns below expectations, including the possibility of losing some or all of the original investment) but also "upside risk" (returns that exceed expectations). In Knight’s definition, risk is often defined as quantifiable uncertainty about gains and losses. This contrasts with Knightian uncertainty, which cannot be quantified.
Financial risk modeling determines the aggregate risk in a financial portfolio. Modern portfolio theory measures risk using the variance (or standard deviation) of asset prices. More recent risk measures include value at risk.
Because investors are generally risk averse, investments with greater inherent risk must promise higher expected returns.
Financial risk management uses financial instruments to manage exposure to risk. It includes the use of a hedge to offset risks by adopting a position in an opposing market or investment.
In financial audit, audit risk refers to the potential that an audit report may fail to detect material misstatement either due to error or fraud.
Health risk
Health risks arise from disease and other biological hazards.
Epidemiology is the study and analysis of the distribution, patterns and determinants of health and disease. It is a cornerstone of public health, and shapes policy decisions by identifying risk factors for disease and targets for preventive healthcare.
In the context of public health, risk assessment is the process of characterizing the nature and likelihood of a harmful effect to individuals or populations from certain human activities. Health risk assessment can be mostly qualitative or can include statistical estimates of probabilities for specific populations.
A health risk assessment (also referred to as a health risk appraisal and health & well-being assessment) is a questionnaire screening tool used to provide individuals with an evaluation of their health risks and quality of life.
Health, safety, and environment risks
Health, safety, and environment (HSE) are separate practice areas; however, they are often linked. The reason is typically to do with organizational management structures; however, there are strong links among these disciplines. One of the strongest links is that a single risk event may have impacts in all three areas, albeit over differing timescales. For example, the uncontrolled release of radiation or a toxic chemical may have immediate short-term safety consequences, more protracted health impacts, and much longer-term environmental impacts. Events such as Chernobyl, for example, caused immediate deaths, and in the longer term, deaths from cancers, and left a lasting environmental impact leading to birth defects, impacts on wildlife, etc.
Information technology risk
Information technology (IT) is the use of computers to store, retrieve, transmit, and manipulate data. IT risk (or cyber risk) arises from the potential that a threat may exploit a vulnerability to breach security and cause harm. IT risk management applies risk management methods to IT to manage IT risks. Computer security is the protection of IT systems by managing IT risks.
Information security is the practice of protecting information by mitigating information risks. While IT risk is narrowly focused on computer security, information risks extend to other forms of information (paper, microfilm).
Insurance risk
Insurance is a risk treatment option which involves risk sharing. It can be considered as a form of contingent capital and is akin to purchasing an option in which the buyer pays a small premium to be protected from a potential large loss.
Insurance risk is often taken by insurance companies, who then bear a pool of risks including market risk, credit risk, operational risk, interest rate risk, mortality risk, longevity risks, etc.
The term “risk” has a long history in insurance and has acquired several specialised definitions, including “the subject-matter of an insurance contract”, “an insured peril” as well as the more common “possibility of an event occurring which causes injury or loss”.
Occupational risk
Occupational health and safety is concerned with occupational hazards experienced in the workplace.
The Occupational Health and Safety Assessment Series (OHSAS) standard OHSAS 18001 in 1999 defined risk as the “combination of the likelihood and consequence(s) of a specified hazardous event occurring”. In 2018 this was replaced by ISO 45001 “Occupational health and safety management systems”, which uses the ISO Guide 73 definition.
Project risk
A project is an individual or collaborative undertaking planned to achieve a specific aim. Project risk is defined as "an uncertain event or condition that, if it occurs, has a positive or negative effect on a project’s objectives". Project risk management aims to increase the likelihood and impact of positive events and decrease the likelihood and impact of negative events in the project.
Safety risk
Safety is concerned with a variety of hazards that may result in accidents causing harm to people, property and the environment. In the safety field, risk is typically defined as the “likelihood and severity of hazardous events”. Safety risks are controlled using techniques of risk management.
A high reliability organisation (HRO) involves complex operations in environments where catastrophic accidents could occur. Examples include aircraft carriers, air traffic control, aerospace and nuclear power stations. Some HROs manage risk in a highly quantified way. The technique is usually referred to as Probabilistic Risk Assessment (PRA). See WASH-1400 for an example of this approach. The incidence rate can also be reduced due to the provision of better occupational health and safety programmes.
Security risk
Security is freedom from, or resilience against, potential harm caused by others.
A security risk is "any event that could result in the compromise of organizational assets i.e. the unauthorized use, loss, damage, disclosure or modification of organizational assets for the profit, personal interest or political interests of individuals, groups or other entities."
Security risk management involves protection of assets from harm caused by deliberate acts.
Assessment and management of risk
Risk management
Risk is ubiquitous in all areas of life and we all manage these risks, consciously or intuitively, whether we are managing a large organization or simply crossing the road. Intuitive risk management is addressed under the psychology of risk below.
Risk management refers to a systematic approach to managing risks, and sometimes to the profession that does this. A general definition is that risk management consists of “coordinated activities to direct and control an organization with regard to risk".
ISO 31000, the international standard for risk management, describes a risk management process that consists of the following elements:
Communicating and consulting
Establishing the scope, context and criteria
Risk assessment - recognising and characterising risks, and evaluating their significance to support decision-making. This includes risk identification, risk analysis and risk evaluation.
Risk treatment - selecting and implementing options for addressing risk.
Monitoring and reviewing
Recording and reporting
In general, the aim of risk management is to assist organizations in “setting strategy, achieving objectives and making informed decisions”. The outcomes should be “scientifically sound, cost-effective, integrated actions that [treat] risks while taking into account social, cultural, ethical, political, and legal considerations”.
In contexts where risks are always harmful, risk management aims to “reduce or prevent risks”. In the safety field it aims “to protect employees, the general public, the environment, and company assets, while avoiding business interruptions”.
For organizations whose definition of risk includes “upside” as well as “downside” risks, risk management is “as much about identifying opportunities as avoiding or mitigating losses”. It then involves “getting the right balance between innovation and change on the one hand, and avoidance of shocks and crises on the other”.
Risk assessment
Risk assessment is a systematic approach to recognising and characterising risks, and evaluating their significance, in order to support decisions about how to manage them. ISO 31000 defines it in terms of its components as “the overall process of risk identification, risk analysis and risk evaluation”.
Risk assessment can be qualitative, semi-quantitative or quantitative:
Qualitative approaches are based on qualitative descriptions of risks and rely on judgement to evaluate their significance.
Semi-quantitative approaches use numerical rating scales to group the consequences and probabilities of events into bands such as “high”, “medium” and “low”. They may use a risk matrix to evaluate the significance of particular combinations of probability and consequence.
Quantitative approaches, including Quantitative risk assessment (QRA) and probabilistic risk assessment (PRA), estimate probabilities and consequences in appropriate units, combine them into risk metrics, and evaluate them using numerical risk criteria.
The specific steps vary widely in different practice areas.
Risk identification
Risk identification is “the process of finding, recognizing and recording risks”. It “involves the identification of risk sources, events, their causes and their potential consequences.”
ISO 31000 describes it as the first step in a risk assessment process, preceding risk analysis and risk evaluation. In safety contexts, where risk sources are known as hazards, this step is known as “hazard identification”.
There are many different methods for identifying risks, including:
Checklists or taxonomies based on past data or theoretical models.
Evidence-based methods, such as literature reviews and analysis of historical data.
Team-based methods that systematically consider possible deviations from normal operations, e.g. HAZOP, FMEA and SWIFT.
Empirical methods, such as testing and modelling to identify what might happen under particular circumstances.
Techniques encouraging imaginative thinking about possibilities of the future, such as scenario analysis.
Expert-elicitation methods such as brainstorming, interviews and audits.
Sometimes, risk identification methods are limited to finding and documenting risks that are to be analysed and evaluated elsewhere. However, many risk identification methods also consider whether control measures are sufficient and recommend improvements. Hence they function as stand-alone qualitative risk assessment techniques.
Risk analysis
Risk analysis is about developing an understanding of the risk. ISO defines it as “the process to comprehend the nature of risk and to determine the level of risk”. In the ISO 31000 risk assessment process, risk analysis follows risk identification and precedes risk evaluation. However, these distinctions are not always followed.
Risk analysis may include:
Determining the sources, causes and drivers of risk
Investigating the effectiveness of existing controls
Analysing possible consequences and their likelihood
Understanding interactions and dependencies between risks
Determining measures of risk
Verifying and validating results
Uncertainty and sensitivity analysis
Risk analysis often uses data on the probabilities and consequences of previous events. Where there have been few such events, or in the context of systems that are not yet operational and therefore have no previous experience, various analytical methods may be used to estimate the probabilities and consequences:
Proxy or analogue data from other contexts, presumed to be similar in some aspects of risk.
Theoretical models, such as Monte Carlo simulation and Quantitative risk assessment software.
Logical models, such as Bayesian networks, fault tree analysis and event tree analysis
Expert judgement, such as absolute probability judgement or the Delphi method.
Risk evaluation and risk criteria
Risk evaluation involves comparing estimated levels of risk against risk criteria to determine the significance of the risk and make decisions about risk treatment actions.
In most activities, risks can be reduced by adding further controls or other treatment options, but typically this increases cost or inconvenience. It is rarely possible to eliminate risks altogether without discontinuing the activity. Sometimes it is desirable to increase risks to secure valued benefits. Risk criteria are intended to guide decisions on these issues.
Types of criteria include:
Criteria that define the level of risk that can be accepted in pursuit of objectives, sometimes known as risk appetite, and evaluated by risk/reward analysis.
Criteria that determine whether further controls are needed, such as benefit-cost ratio.
Criteria that decide between different risk management options, such as multiple-criteria decision analysis.
The simplest framework for risk criteria is a single level which divides acceptable risks from those that need treatment. This gives attractively simple results but does not reflect the uncertainties involved both in estimating risks and in defining the criteria.
The tolerability of risk framework, developed by the UK Health and Safety Executive, divides risks into three bands:
Unacceptable risks – only permitted in exceptional circumstances.
Tolerable risks – to be kept as low as reasonably practicable (ALARP), taking into account the costs and benefits of further risk reduction.
Broadly acceptable risks – not normally requiring further reduction.
Descriptions of risk
There are many different risk metrics that can be used to describe or “measure” risk.
Triplets
Risk is often considered to be a set of triplets (also described as a vector):
R = {⟨s_i, p_i, c_i⟩}   for i = 1, 2, ..., N
where:
s_i is a scenario describing a possible event
p_i is the probability of the scenario
c_i is the consequence of the scenario
N is the number of scenarios chosen to describe the risk
These are the answers to the three fundamental questions asked by a risk analysis:
What can happen?
How likely is it to happen?
If it does happen, what would the consequences be?
Risks expressed in this way can be shown in a table or risk register. They may be quantitative or qualitative, and can include positive as well as negative consequences.
The scenarios can be plotted in a consequence/likelihood matrix (or risk matrix). These typically divide consequences and likelihoods into 3 to 5 bands. Different scales can be used for different types of consequences (e.g. finance, safety, environment etc.), and can include positive as well as negative consequences.
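A minimal sketch of a risk register built from such triplets, rated with a simple three-band consequence/likelihood matrix, is shown below; all scenario values, band boundaries and ratings are invented for illustration.

# Risk register as (scenario, probability, consequence) triplets, rated with a 3x3 risk matrix.
register = [
    ("server outage",   0.20, 50_000),    # illustrative values only
    ("data breach",     0.02, 500_000),
    ("minor data loss", 0.30, 5_000),
]

def band(value, thresholds):
    # Map a number to "low"/"medium"/"high" using two ascending thresholds.
    low, high = thresholds
    return "low" if value < low else "medium" if value < high else "high"

RATING = {  # (likelihood band, consequence band) -> overall rating, illustrative
    ("high", "high"): "unacceptable", ("high", "medium"): "unacceptable", ("medium", "high"): "unacceptable",
    ("high", "low"): "tolerable", ("medium", "medium"): "tolerable", ("low", "high"): "tolerable",
    ("medium", "low"): "acceptable", ("low", "medium"): "acceptable", ("low", "low"): "acceptable",
}

for scenario, p, c in register:
    cell = (band(p, (0.05, 0.25)), band(c, (10_000, 100_000)))
    print(scenario, "->", cell, RATING[cell])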
An updated version recommends the following general description of risk:
Risk = (A, C, U, P, K)
where:
A is an event that might occur
C is the consequences of the event
U is an assessment of uncertainties
P is a knowledge-based probability of the event
K is the background knowledge that U and P are based on
Probability distributions
If all the consequences are expressed in the same units (or can be converted into a consistent loss function), the risk can be expressed as a probability density function over the outcome, describing the “uncertainty about outcome”.
This can also be expressed as a cumulative distribution function (CDF) (or S curve).
One way of highlighting the tail of this distribution is by showing the probability of exceeding given losses, known as a complementary cumulative distribution function, plotted on logarithmic scales. Examples include frequency-number (FN) diagrams, showing the annual frequency of exceeding given numbers of fatalities.
A simple way of summarising the size of the distribution’s tail is the loss with a certain probability of exceedance, such as the Value at Risk.
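The sketch below illustrates these ideas on a handful of sampled losses: it computes the empirical probability of exceeding given thresholds (points on the complementary cumulative curve) and a crude, sample-based stand-in for a value-at-risk style summary. The loss figures are invented for illustration.

# Exceedance probabilities and a percentile-style loss summary from sampled losses.
losses = sorted([12, 5, 40, 7, 3, 90, 22, 15, 60, 8])   # illustrative loss samples

def prob_exceeding(threshold):
    # Empirical probability that a loss exceeds the given threshold.
    return sum(1 for x in losses if x > threshold) / len(losses)

def loss_at_exceedance(p):
    # Smallest sampled loss whose exceedance probability is at most p.
    for x in losses:
        if prob_exceeding(x) <= p:
            return x

print([(t, prob_exceeding(t)) for t in (10, 25, 50)])   # points on the exceedance curve
print(loss_at_exceedance(0.10))   # loss exceeded with at most 10% probability -> 60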
Expected values
Risk is often measured as the expected value of the loss. This combines the probabilities and consequences into a single value. See also Expected utility. The simplest case is a binary possibility of Accident or No accident. The associated formula for calculating risk is then:
Risk = (probability of the accident occurring) × (expected loss in case of the accident)
For example, if there is a probability of 0.01 of suffering an accident with a loss of $1000, then total risk is a loss of $10, the product of 0.01 and $1000.
In a situation with several possible accident scenarios, total risk is the sum of the risks for each scenario, provided that the outcomes are comparable:
Risk = Σ_i (p_i × c_i)   (terms defined above)
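A minimal sketch of this calculation, reusing the triplet notation above; the scenario probabilities and losses are invented for illustration.

# Expected-value risk: the sum of probability x consequence over all scenarios.
scenarios = [
    ("minor accident", 0.01, 1_000),     # (description, p_i, loss c_i in $), illustrative
    ("major accident", 0.001, 50_000),
]
total_risk = sum(p * c for _, p, c in scenarios)
print(total_risk)   # 0.01*1000 + 0.001*50000 = 60.0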
In statistical decision theory, the risk function is defined as the expected value of a given loss function as a function of the decision rule used to make decisions in the face of uncertainty.
A disadvantage of defining risk as the product of impact and probability is that it presumes, unrealistically, that decision-makers are risk-neutral. A risk-neutral person's utility is proportional to the expected value of the payoff. For example, a risk-neutral person would consider 20% chance of winning $1 million exactly as desirable as getting a certain $200,000. However, most decision-makers are not actually risk-neutral and would not consider these equivalent choices.
Volatility
In finance, volatility is the degree of variation of a trading price over time, usually measured by the standard deviation of logarithmic returns. Modern portfolio theory measures risk using the variance (or standard deviation) of asset prices. The risk is then:
Risk = σ², the variance of the return (or its square root, the standard deviation σ)
The beta coefficient measures the volatility of an individual asset to overall market changes. This is the asset’s contribution to systematic risk, which cannot be eliminated by portfolio diversification. It is the covariance between the asset’s return r_i and the market return r_m, expressed as a fraction of the market variance:
β_i = Cov(r_i, r_m) / Var(r_m)
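Both measures can be sketched directly from a series of returns; the return figures below are invented for illustration.

import statistics as st

# Illustrative daily returns for an asset and for the market.
r_asset  = [0.010, -0.005, 0.007, 0.002, -0.012]
r_market = [0.008, -0.004, 0.005, 0.001, -0.010]

def sample_cov(x, y):
    # Sample covariance, consistent with statistics.variance (n - 1 denominator).
    mx, my = st.mean(x), st.mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)

volatility = st.stdev(r_asset)                                 # standard deviation of the asset's returns
beta = sample_cov(r_asset, r_market) / st.variance(r_market)   # Cov(r_i, r_m) / Var(r_m)
print(round(volatility, 4), round(beta, 2))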
Outcome frequencies
Risks of discrete events such as accidents are often measured as outcome frequencies, or expected rates of specific loss events per unit time. When small, frequencies are numerically similar to probabilities, but have dimensions of [1/time] and can sum to more than 1. Typical outcomes expressed this way include:
Individual risk - the frequency of a given level of harm to an individual. It often refers to the expected annual probability of death. Where risk criteria refer to the individual risk, the risk assessment must use this metric.
Group (or societal risk) – the relationship between the frequency and the number of people suffering harm.
Frequencies of property damage or total loss.
Frequencies of environmental damage such as oil spills.
Relative risk
In health, the relative risk is the ratio of the probability of an outcome in an exposed group to the probability of an outcome in an unexposed group.
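A minimal sketch of the calculation from illustrative exposure and outcome counts:

# Relative risk = incidence among the exposed / incidence among the unexposed.
exposed_cases, exposed_total = 30, 1000       # illustrative counts
unexposed_cases, unexposed_total = 10, 1000
relative_risk = (exposed_cases / exposed_total) / (unexposed_cases / unexposed_total)
print(relative_risk)   # 3.0 -> the outcome is three times as likely in the exposed group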
Psychology of risk
Fear as intuitive risk assessment
People may rely on their fear and hesitation to keep them out of the most profoundly unknown circumstances. Fear is a response to perceived danger. Risk could be said to be the way we collectively measure and share this "true fear"—a fusion of rational doubt, irrational fear, and a set of unquantified biases from our own experience.
The field of behavioural finance focuses on human risk-aversion, asymmetric regret, and other ways that human financial behaviour varies from what analysts call "rational". Risk in that case is the degree of uncertainty associated with a return on an asset. Recognizing and respecting the irrational influences on human decision making may do much to reduce disasters caused by naive risk assessments that presume rationality but in fact merely fuse many shared biases.
Fear, anxiety and risk
According to one set of definitions, fear is a fleeting emotion ascribed to a particular object, while anxiety is a trait of fear (this is referring to "trait anxiety", as distinct from how the term "anxiety" is generally used) that lasts longer and is not attributed to a specific stimulus (these particular definitions are not used by all authors cited on this page). Some studies show a link between anxious behaviour and risk (the chance that an outcome will have an unfavorable result). Joseph Forgas introduced valence based research where emotions are grouped as either positive or negative (Lerner and Keltner, 2000). Positive emotions, such as happiness, are believed to have more optimistic risk assessments and negative emotions, such as anger, have pessimistic risk assessments. As an emotion with a negative valence, fear, and therefore anxiety, has long been associated with negative risk perceptions. Under the more recent appraisal tendency framework of Jennifer Lerner et al., which refutes Forgas' notion of valence and promotes the idea that specific emotions have distinctive influences on judgments, fear is still related to pessimistic expectations.
Psychologists have demonstrated that increases in anxiety and increases in risk perception are related and people who are habituated to anxiety experience this awareness of risk more intensely than normal individuals. In decision-making, anxiety promotes the use of biases and quick thinking to evaluate risk. This is referred to as affect-as-information according to Clore, 1983. However, the accuracy of these risk perceptions when making choices is not known.
Consequences of anxiety
Experimental studies show that brief surges in anxiety are correlated with surges in general risk perception. Anxiety exists when the presence of threat is perceived (Maner and Schmidt, 2006). As risk perception increases, it stays related to the particular source impacting the mood change as opposed to spreading to unrelated risk factors. This increased awareness of a threat is significantly more emphasised in people who are conditioned to anxiety. For example, anxious individuals who are predisposed to generating reasons for negative results tend to exhibit pessimism. Also, findings suggest that the perception of a lack of control and a lower inclination to participate in risky decision-making (across various behavioural circumstances) is associated with individuals experiencing relatively high levels of trait anxiety. In the previous instance, there is supporting clinical research that links emotional evaluation (of control), the anxiety that is felt and the option of risk avoidance.
There are various views presented that anxious/fearful emotions cause people to access involuntary responses and judgments when making decisions that involve risk. Joshua A. Hemmerich et al. probes deeper into anxiety and its impact on choices by exploring "risk-as-feelings" which are quick, automatic, and natural reactions to danger that are based on emotions. This notion is supported by an experiment that engages physicians in a simulated perilous surgical procedure. It was demonstrated that a measurable amount of the participants' anxiety about patient outcomes was related to previous (experimentally created) regret and worry and ultimately caused the physicians to be led by their feelings over any information or guidelines provided during the mock surgery. Additionally, their emotional levels, adjusted along with the simulated patient status, suggest that anxiety level and the respective decision made are correlated with the type of bad outcome that was experienced in the earlier part of the experiment. Similarly, another view of anxiety and decision-making is dispositional anxiety where emotional states, or moods, are cognitive and provide information about future pitfalls and rewards (Maner and Schmidt, 2006). When experiencing anxiety, individuals draw from personal judgments referred to as pessimistic outcome appraisals. These emotions promote biases for risk avoidance and promote risk tolerance in decision-making.
Dread risk
It is common for people to dread some risks but not others: They tend to be very afraid of epidemic diseases, nuclear power plant failures, and plane accidents but are relatively unconcerned about some highly frequent and deadly events, such as traffic crashes, household accidents, and medical errors. One key distinction of dreadful risks seems to be their potential for catastrophic consequences, threatening to kill a large number of people within a short period of time. For example, immediately after the 11 September attacks, many Americans were afraid to fly and took their car instead, a decision that led to a significant increase in the number of fatal crashes in the time period following the 9/11 event compared with the same time period before the attacks.
Different hypotheses have been proposed to explain why people fear dread risks. First, the psychometric paradigm suggests that high lack of control, high catastrophic potential, and severe consequences account for the increased risk perception and anxiety associated with dread risks. Second, because people estimate the frequency of a risk by recalling instances of its occurrence from their social circle or the media, they may overvalue relatively rare but dramatic risks because of their overpresence and undervalue frequent, less dramatic risks. Third, according to the preparedness hypothesis, people are prone to fear events that have been particularly threatening to survival in human evolutionary history. Given that in most of human evolutionary history people lived in relatively small groups, rarely exceeding 100 people, a dread risk, which kills many people at once, could potentially wipe out one's whole group. Indeed, research found that people's fear peaks for risks killing around 100 people but does not increase if larger groups are killed. Fourth, fearing dread risks can be an ecologically rational strategy. Besides killing a large number of people at a single point in time, dread risks reduce the number of children and young adults who would have potentially produced offspring. Accordingly, people are more concerned about risks killing younger, and hence more fertile, groups.
Anxiety and judgmental accuracy
The relationship between higher levels of risk perception and "judgmental accuracy" in anxious individuals remains unclear (Joseph I. Constans, 2001). There is a chance that "judgmental accuracy" is correlated with heightened anxiety. Constans conducted a study to examine how worry propensity (and current mood and trait anxiety) might influence college student's estimation of their performance on an upcoming exam, and the study found that worry propensity predicted subjective risk bias (errors in their risk assessments), even after variance attributable to current mood and trait anxiety had been removed. Another experiment suggests that trait anxiety is associated with pessimistic risk appraisals (heightened perceptions of the probability and degree of suffering associated with a negative experience), while controlling for depression.
Human factors
One of the growing areas of focus in risk management is the field of human factors where behavioural and organizational psychology underpin our understanding of risk based decision making. This field considers questions such as "how do we make risk based decisions?", "why are we irrationally more scared of sharks and terrorists than we are of motor vehicles and medications?"
In decision theory, regret (and anticipation of regret) can play a significant part in decision-making, distinct from risk aversion (preferring the status quo in case one becomes worse off).
Framing is a fundamental problem with all forms of risk assessment. In particular, because of bounded rationality (our brains get overloaded, so we take mental shortcuts), the risk of extreme events is discounted because the probability is too low to evaluate intuitively. As an example, one of the leading causes of death is road accidents caused by drunk driving – partly because any given driver frames the problem by largely or totally ignoring the risk of a serious or fatal accident.
For instance, an extremely disturbing event (an attack by hijacking, or moral hazards) may be ignored in analysis despite the fact it has occurred and has a nonzero probability. Or, an event that everyone agrees is inevitable may be ruled out of analysis due to greed or an unwillingness to admit that it is believed to be inevitable. These human tendencies for error and wishful thinking often affect even the most rigorous applications of the scientific method and are a major concern of the philosophy of science.
All decision-making under uncertainty must consider cognitive bias, cultural bias, and notational bias: No group of people assessing risk is immune to "groupthink": acceptance of obviously wrong answers simply because it is socially painful to disagree, where there are conflicts of interest.
Framing involves other information that affects the outcome of a risky decision. The right prefrontal cortex has been shown to take a more global perspective while greater left prefrontal activity relates to local or focal processing.
From the Theory of Leaky Modules McElroy and Seta proposed that they could predictably alter the framing effect by the selective manipulation of regional prefrontal activity with finger tapping or monaural listening. The result was as expected. Rightward tapping or listening had the effect of narrowing attention such that the frame was ignored. This is a practical way of manipulating regional cortical activation to affect risky decisions, especially because directed tapping or listening is easily done.
Psychology of risk taking
A growing area of research has been to examine various psychological aspects of risk taking. Researchers typically run randomised experiments with a treatment and control group to ascertain the effect of different psychological factors that may be associated with risk taking. Thus, positive and negative feedback about past risk taking can affect future risk taking. In an experiment, people who were led to believe they are very competent at decision making saw more opportunities in a risky choice and took more risks, while those led to believe they were not very competent saw more threats and took fewer risks.
Other considerations
Risk and uncertainty
In his seminal work Risk, Uncertainty, and Profit, Frank Knight (1921) established the distinction between risk and uncertainty.
Thus, Knightian uncertainty is immeasurable, not possible to calculate, while in the Knightian sense risk is measurable.
Another distinction between risk and uncertainty is proposed by Douglas Hubbard:
Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The "true" outcome/state/result/value is not known.
Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. Example: "There is a 60% chance this market will double in five years"
Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
Measurement of risk: A set of possibilities each with quantified probabilities and quantified losses. Example: "There is a 40% chance the proposed oil well will be dry with a loss of $12 million in exploratory drilling costs".
In this sense, one may have uncertainty without risk but not risk without uncertainty. We can be uncertain about the winner of a contest, but unless we have some personal stake in it, we have no risk. If we bet money on the outcome of the contest, then we have a risk. In both cases there are more than one outcome. The measure of uncertainty refers only to the probabilities assigned to outcomes, while the measure of risk requires both probabilities for outcomes and losses quantified for outcomes.
Mild Versus Wild Risk
Benoit Mandelbrot distinguished between "mild" and "wild" risk and argued that risk assessment and analysis must be fundamentally different for the two types of risk. Mild risk follows normal or near-normal probability distributions, is subject to regression to the mean and the law of large numbers, and is therefore relatively predictable. Wild risk follows fat-tailed distributions, e.g., Pareto or power-law distributions, is subject to regression to the tail (infinite mean or variance, rendering the law of large numbers invalid or ineffective), and is therefore difficult or impossible to predict. A common error in risk assessment and analysis is to underestimate the wildness of risk, assuming risk to be mild when in fact it is wild, which must be avoided if risk assessment and analysis are to be valid and reliable, according to Mandelbrot.
Risk attitude, appetite and tolerance
The terms risk attitude, appetite, and tolerance are often used similarly to describe an organisation's or individual's attitude towards risk-taking. One's attitude may be described as risk-averse, risk-neutral, or risk-seeking. Risk tolerance looks at acceptable/unacceptable deviations from what is expected. Risk appetite looks at how much risk one is willing to accept. There can still be deviations that are within a risk appetite. For example, recent research finds that insured individuals are significantly likely to divest from risky asset holdings in response to a decline in health, controlling for variables such as income, age, and out-of-pocket medical expenses.
Gambling is a risk-increasing investment, wherein money on hand is risked for a possible large return, but with the possibility of losing it all. Purchasing a lottery ticket is a very risky investment with a high chance of no return and a small chance of a very high return. In contrast, putting money in a bank at a defined rate of interest is a risk-averse action that gives a guaranteed return of a small gain and precludes other investments with possibly higher gain. The possibility of getting no return on an investment is also known as the rate of ruin.
Risk compensation is a theory which suggests that people typically adjust their behavior in response to the perceived level of risk, becoming more careful where they sense greater risk and less careful if they feel more protected. By way of example, it has been observed that motorists drove faster when wearing seatbelts and closer to the vehicle in front when the vehicles were fitted with anti-lock brakes.
Risk and autonomy
The experience of many people who rely on human services for support is that 'risk' is often used as a reason to prevent them from gaining further independence or fully accessing the community, and that these services are often unnecessarily risk averse. "People's autonomy used to be compromised by institution walls, now it's too often our risk management practices", according to John O'Brien. Michael Fischer and Ewan Ferlie (2013) find that contradictions between formal risk controls and the role of subjective factors in human services (such as the role of emotions and ideology) can undermine service values, so producing tensions and even intractable and 'heated' conflict.
List of related books
This is a list of books about risk issues.
See also
Ambiguity aversion
Audit risk
Benefit shortfall
Civil defence
Countermeasure
Early case assessment
External risk
Enterprise risk
Event chain methodology
Financial risk
Fuel price risk management
Global catastrophic risk
Hazard (risk)
Identity resolution
Information assurance
Inherent risk
Inherent risk (accounting)
International Risk Governance Council
ISO/PAS 28000
IT risk
Legal risk
Life-critical system
Liquidity risk
Loss aversion
Moral hazard
Operational risk
Preventive maintenance
Probabilistic risk assessment
Process risk
Reputational risk
Reliability engineering
Risk analysis
Risk assessment
Risk compensation
Peltzman effect
Risk management
Risk-neutral measure
Risk perception
Risk register
Sampling risk
Systemic risk
Systematic risk
Uncertainty
Vulnerability
References
Bibliography
Referred literature
James Franklin, 2001: The Science of Conjecture: Evidence and Probability Before Pascal, Baltimore: Johns Hopkins University Press.
Niklas Luhmann, 1996: Modern Society Shocked by its Risks (= University of Hong Kong, Department of Sociology Occasional Papers 17), Hong Kong, available via HKU Scholars HUB
Books
Historian David A. Moss' book When All Else Fails explains the US government's historical role as risk manager of last resort.
Bernstein P. L. Against the Gods. Risk explained and its appreciation by man traced from earliest times through all the major figures of their ages in mathematical circles.
Gardner D. Risk: The Science and Politics of Fear, Random House Inc. (2008) .
Novak S.Y. Extreme value methods with applications to finance. London: CRC. (2011) .
Hopkin P. Fundamentals of Risk Management. 2nd Edition. Kogan-Page (2012)
Articles and papers
Hansson, Sven Ove. (2007). "Risk", The Stanford Encyclopedia of Philosophy (Summer 2007 Edition), Edward N. Zalta (ed.), forthcoming.
Holton, Glyn A. (2004). "Defining Risk", Financial Analysts Journal, 60 (6), 19–25. A paper exploring the foundations of risk. (PDF file).
Knight, F. H. (1921) Risk, Uncertainty and Profit, Chicago: Houghton Mifflin Company. (Cited at: , § I.I.26.).
Kruger, Daniel J., Wang, X.T., & Wilke, Andreas (2007) "Towards the development of an evolutionarily valid domain-specific risk-taking scale" Evolutionary Psychology (PDF file).
Neill, M. Allen, J. Woodhead, N. Reid, S. Irwin, L. Sanderson, H. 2008 "A Positive Approach to Risk Requires Person Centred Thinking" London, CSIP Personalisation Network, Department of Health. Available from: https://web.archive.org/web/20090218231745/http://networks.csip.org.uk/Personalisation/Topics/Browse/Risk/ [Accessed 21 July 2008].
External links
Risk – The entry of the Stanford Encyclopedia of Philosophy
Actuarial science
Environmental social science concepts
|
21641559
|
https://en.wikipedia.org/wiki/Workspace.com
|
Workspace.com
|
Workspace.com is a provider of an online collaborative workspace for information technology teams. The workspace includes traditional project management software elements such as task management, Gantt charts, resource management, issue tracking, and document management, as well as application lifecycle management features such as change management, requirements management, test management, and bug tracking.
History
In March 2001, Citrix Systems agreed to purchase Sequoia Software for $185 million. The following year, former Sequoia CEO Mark Wesker, along with other ex-Sequoia employees, created Artifact Software. Artifact's first product, CodeJack, was a code-sharing gateway for development teams to collaborate around their software artifacts and projects. CodeJack was eventually abandoned. In 2004, Artifact Software began developing its flagship product, Lighthouse, aided in part by a $5 million Series A financing in July 2005. The first version of Lighthouse was launched in February 2007 in both a free and a for-fee version.
On September 21, 2009, Artifact Software changed both its company name and product name to workspace.com to better align its name with its product offering.
Development
Workspace.com is a project management tool offered as a service. It does not enforce any particular software development methodology. Unlike many other similar tools, it does not integrate into version control systems or integrated development environments; instead it acts as standalone software.
References
Software companies based in Maryland
Companies based in Columbia, Maryland
Collaborative software
Project management software
Software companies of the United States
|
21786641
|
https://en.wikipedia.org/wiki/UNESCO
|
UNESCO
|
The United Nations Educational, Scientific and Cultural Organization (UNESCO) () is a specialised agency of the United Nations (UN) aimed at promoting world peace and security through international cooperation in education, arts, sciences, and culture. It has 193 member states and 11 associate members, as well as partners in the non-governmental, intergovernmental, and private sector. Headquartered at the World Heritage Centre in Paris, France, UNESCO has 53 regional field offices and 199 national commissions that facilitate its global mandate.
UNESCO was founded in 1945 as the successor to the League of Nations' International Committee on Intellectual Cooperation. Its constitution establishes the agency's goals, governing structure, and operating framework. UNESCO's founding mission, which was shaped by the Second World War, is to advance peace, sustainable development and human rights by facilitating collaboration and dialogue among nations. It pursues this objective through five major program areas: education, natural sciences, social/human sciences, culture and communication/information. UNESCO sponsors projects that improve literacy, provide technical training and education, advance science, protect independent media and press freedom, preserve regional and cultural history, and promote cultural diversity.
As a focal point for world culture and science, UNESCO's activities have broadened over the years; it assists in the translation and dissemination of world literature, helps establish and secure World Heritage Sites of cultural and natural importance, works to bridge the worldwide digital divide, and creates inclusive knowledge societies through information and communication. UNESCO has launched several initiatives and global movements, such as Education For All, to further advance its core objectives.
UNESCO is governed by the General Conference, composed of member states and associate members, which meets every two years to set the agency's programmes and budget. It also elects members of the Executive Board, which manages UNESCO's work, and appoints a Director-General every four years to serve as UNESCO's chief administrator. UNESCO is a member of the United Nations Sustainable Development Group, a coalition of UN agencies and organisations aimed at fulfilling the Sustainable Development Goals.
History
Origins
UNESCO and its mandate for international cooperation can be traced back to a League of Nations resolution on 21 September 1921, to elect a Commission to study the feasibility of having nations freely share cultural, educational and scientific achievements. This new body, the International Committee on Intellectual Cooperation (ICIC), was created in 1922 and counted such figures as Henri Bergson, Albert Einstein, Marie Curie, Robert A. Millikan, and Gonzague de Reynold among its members (being thus a small commission of the League of Nations essentially centered on Western Europe). The International Institute for Intellectual Cooperation (IIIC) was then created in Paris in September 1924, to act as the executing agency for the ICIC. However, the onset of World War II largely interrupted the work of these predecessor organizations. As for private initiatives, the International Bureau of Education (IBE) began working as a non-governmental organization in the service of international educational development in December 1925 and joined UNESCO in 1969, after having established a joint commission in 1952.
Creation
After the signing of the Atlantic Charter and the Declaration of the United Nations, the Conference of Allied Ministers of Education (CAME) began meetings in London which continued from 16 November 1942 to 5 December 1945. On 30 October 1943, the necessity for an international organization was expressed in the Moscow Declaration, agreed upon by China, the United Kingdom, the United States and the USSR. This was followed by the Dumbarton Oaks Conference proposals of 9 October 1944. Upon the proposal of CAME and in accordance with the recommendations of the United Nations Conference on International Organization (UNCIO), held in San Francisco in April–June 1945, a United Nations Conference for the establishment of an educational and cultural organization (ECO/CONF) was convened in London 1–16 November 1945 with 44 governments represented. The idea of UNESCO was largely developed by Rab Butler, the Minister of Education for the United Kingdom, who had a great deal of influence in its development. At the ECO/CONF, the Constitution of UNESCO was introduced and signed by 37 countries, and a Preparatory Commission was established. The Preparatory Commission operated between 16 November 1945, and 4 November 1946—the date when UNESCO's Constitution came into force with the deposit of the twentieth ratification by a member state.
The first General Conference took place from 19 November to 10 December 1946, and elected Dr. Julian Huxley as Director-General. U.S. Colonel, university president and civil rights advocate Dr. Blake R. Van Leer also joined as a member. The Constitution was amended in November 1954 when the General Conference resolved that members of the Executive Board would be representatives of the governments of the States of which they are nationals and would not, as before, act in their personal capacity. This change in governance distinguished UNESCO from its predecessor, the ICIC, in how member states would work together in the organization's fields of competence. As member states worked together over time to realize UNESCO's mandate, political and historical factors have shaped the organization's operations, in particular during the Cold War, the decolonization process, and the dissolution of the USSR.
Development
Among the major achievements of the organization is its work against racism, for example through influential statements on race starting with a declaration of anthropologists (among them was Claude Lévi-Strauss) and other scientists in 1950 and concluding with the 1978 Declaration on Race and Racial Prejudice.
In 1956, member Blake R. Van Leer was president of Georgia Tech and fought to allow the first African American to play in the 1956 Sugar Bowl. Later in 1956, the Republic of South Africa withdrew from UNESCO, saying that some of the organization's publications amounted to "interference" in the country's "racial problems". South Africa rejoined the organization in 1994 under the leadership of Nelson Mandela.
UNESCO's early work in the field of education included the pilot project on fundamental education in the Marbial Valley, Haiti, started in 1947.
This project was followed by expert missions to other countries, including, for example, a mission to Afghanistan in 1949.
In 1948, UNESCO recommended that Member States should make free primary education compulsory and universal. In 1990, the World Conference on Education for All, in Jomtien, Thailand, launched a global movement to provide basic education for all children, youths and adults. Ten years later, the 2000 World Education Forum held in Dakar, Senegal, led member governments to commit to achieving basic education for all by 2015.
UNESCO's early activities in culture included the Nubia Campaign, launched in 1960.
The purpose of the campaign was to move the Great Temple of Abu Simbel to keep it from being swamped by the Nile after the construction of the Aswan Dam. During the 20-year campaign, 22 monuments and architectural complexes were relocated. This was the first and largest in a series of campaigns including Mohenjo-daro (Pakistan), Fes (Morocco), Kathmandu (Nepal), Borobudur (Indonesia) and the Acropolis (Greece).
The organization's work on heritage led to the adoption, in 1972, of the Convention concerning the Protection of the World Cultural and Natural Heritage.
The World Heritage Committee was established in 1976 and the first sites inscribed on the World Heritage List in 1978.
Since then important legal instruments on cultural heritage and diversity have been adopted by UNESCO member states in 2003 (Convention for the Safeguarding of the Intangible Cultural Heritage) and 2005 (Convention on the Protection and Promotion of the Diversity of Cultural Expressions).
An intergovernmental meeting of UNESCO in Paris in December 1951 led to the creation of the European Council for Nuclear Research, which was responsible for establishing the European Organization for Nuclear Research (CERN) later on, in 1954.
Arid Zone programming, 1948–1966, is another example of an early major UNESCO project in the field of natural sciences.
In 1968, UNESCO organized the first intergovernmental conference aimed at reconciling the environment and development, a problem that continues to be addressed in the field of sustainable development. The main outcome of the 1968 conference was the creation of UNESCO's Man and the Biosphere Programme.
UNESCO has been credited with the diffusion of national science bureaucracies.
In the field of communication, the "free flow of ideas by word and image" has been in UNESCO's constitution from its beginnings, following the experience of the Second World War when control of information was a factor in indoctrinating populations for aggression. In the years immediately following World War II, efforts were concentrated on reconstruction and on the identification of needs for means of mass communication around the world. UNESCO started organizing training and education for journalists in the 1950s. In response to calls for a "New World Information and Communication Order" in the late 1970s, UNESCO established the International Commission for the Study of Communication Problems, which produced the 1980 MacBride report (named after the chair of the commission, the Nobel Peace Prize laureate Seán MacBride). The same year, UNESCO created the International Programme for the Development of Communication (IPDC), a multilateral forum designed to promote media development in developing countries. In 1991, UNESCO's General Conference endorsed the Windhoek Declaration on media independence and pluralism, which led the UN General Assembly to declare the date of its adoption, 3 May, as World Press Freedom Day. Since 1997, UNESCO has awarded the UNESCO / Guillermo Cano World Press Freedom Prize every 3 May. In the lead up to the World Summit on the Information Society in 2003 (Geneva) and 2005 (Tunis), UNESCO introduced the Information for All Programme.
21st century
UNESCO admitted Palestine as a member in 2011.
Laws passed in the United States after Palestine applied for UNESCO and WHO membership in April 1989 mean that the US cannot contribute financially to any UN organisation that accepts Palestine as a full member. As a result, the US withdrew its funding, which had accounted for about 22% of UNESCO's budget. Israel also reacted to Palestine's admittance to UNESCO by freezing Israeli payments to UNESCO and imposing sanctions on the Palestinian Authority, stating that Palestine's admittance would be detrimental "to potential peace talks". Two years after they stopped paying their dues to UNESCO, the US and Israel lost UNESCO voting rights in 2013 without losing the right to be elected; thus, the US was elected as a member of the Executive Board for the period 2016–19. In 2019, Israel left UNESCO after 69 years of membership, with Israel's ambassador to the UN Danny Danon writing: "UNESCO is the body that continually rewrites history, including by erasing the Jewish connection to Jerusalem... it is corrupted and manipulated by Israel's enemies... we are not going to be a member of an organisation that deliberately acts against us".
Activities
UNESCO implements its activities through the five program areas: education, natural sciences, social and human sciences, culture, and communication and information.
UNESCO supports research in comparative education, provides expertise and fosters partnerships to strengthen national educational leadership and the capacity of countries to offer quality education for all. This includes:
UNESCO Chairs, an international network of 644 UNESCO Chairs, involving over 770 institutions in 126 countries
Environmental Conservation Organisation
Convention against Discrimination in Education adopted in 1960
Organization of the International Conference on Adult Education (CONFINTEA) in an interval of 12 years
Publication of the Education for All Global Monitoring Report
Publication of the Four Pillars of Learning seminal document
UNESCO ASPNet, an international network of 8,000 schools in 170 countries
UNESCO does not accredit institutions of higher learning.
UNESCO also issues public statements to educate the public:
Seville Statement on Violence: A statement adopted by UNESCO in 1989 to refute the notion that humans are biologically predisposed to organised violence.
Designating projects and places of cultural and scientific significance, such as:
Global Geoparks Network
Biosphere reserves, through the Programme on Man and the Biosphere (MAB), since 1971
City of Literature; in 2007, the first city to be given this title was Edinburgh, the site of Scotland's first circulating library. In 2008, Iowa City, Iowa, became the City of Literature.
Endangered languages and linguistic diversity projects
Masterpieces of the Oral and Intangible Heritage of Humanity
Memory of the World International Register, since 1997
Water resources management, through the International Hydrological Programme (IHP), since 1965
World Heritage Sites
World Digital Library
Encouraging the "free flow of ideas by images and words" by:
Promoting freedom of expression, including freedom of the press and freedom of information legislation, through the Division of Freedom of Expression and Media Development, including the International Programme for the Development of Communication
Promoting the safety of journalists and combatting impunity for those who attack them, through coordination of the UN Plan of Action on the Safety of Journalists and the Issue of Impunity
Promoting universal access to and preservation of information and open solutions for sustainable development through the Knowledge Societies Division, including the Memory of the World Programme and Information for All Programme
Promoting pluralism, gender equality and cultural diversity in the media
Promoting Internet Universality and its principles, that the Internet should be (i) human rights-based, (ii) open, (iii) accessible to all, and (iv) nurtured by multi-stakeholder participation (summarized as the acronym R.O.A.M.)
Generating knowledge through publications such as World Trends in Freedom of Expression and Media Development, the UNESCO Series on Internet Freedom, and the Media Development Indicators, as well as other indicator-based studies.
Promoting events, such as:
International Decade for the Promotion of a Culture of Peace and Non-Violence for the Children of the World: 2001–2010, proclaimed by the UN in 1998
World Press Freedom Day, 3 May each year, to promote freedom of expression and freedom of the press as a basic human right and as crucial components of any healthy, democratic and free society.
Criança Esperança in Brazil, in partnership with Rede Globo, to raise funds for community-based projects that foster social integration and violence prevention.
International Literacy Day
International Year for the Culture of Peace
Health Education for Behavior Change program, in partnership with the Ministry of Education of Kenya and financially supported by the Government of Azerbaijan, to promote health education among 10–19-year-old young people living in an informal camp in Kibera, Nairobi. The project was carried out between September 2014 and December 2016.
Founding and funding projects, such as:
Migration Museums Initiative: Promoting the establishment of museums for cultural dialogue with migrant populations.
UNESCO-CEPES, the European Centre for Higher Education: established in 1972 in Bucharest, Romania, as a de-centralized office to promote international co-operation in higher education in Europe as well as Canada, USA and Israel. Higher Education in Europe is its official journal.
Free Software Directory: since 1998 UNESCO and the Free Software Foundation have jointly funded this project cataloguing free software.
FRESH, Focusing Resources on Effective School Health
OANA, Organization of Asia-Pacific News Agencies
International Council of Science
UNESCO Goodwill Ambassadors
ASOMPS, Asian Symposium on Medicinal Plants and Spices, a series of scientific conferences held in Asia
Botany 2000, a programme supporting taxonomy, and biological and cultural diversity of medicinal and ornamental plants, and their protection against environmental pollution
The UNESCO Collection of Representative Works, translating works of world literature both to and from multiple languages, from 1948 to 2005
GoUNESCO, an umbrella of initiatives to make heritage fun supported by UNESCO, New Delhi Office
The UNESCO transparency portal has been designed to enable public access to information regarding the Organization's activities, such as its aggregate budget for a biennium, as well as links to relevant programmatic and financial documents. These two distinct sets of information are published on the IATI registry, respectively based on the IATI Activity Standard and the IATI Organization Standard.
There have been proposals to establish two new UNESCO lists. The first proposed list will focus on movable cultural heritage such as artifacts, paintings, and biofacts. The list may include cultural objects, such as the Jōmon Venus of Japan, the Mona Lisa of France, the Gebel el-Arak Knife of Egypt, The Ninth Wave of Russia, the Seated Woman of Çatalhöyük of Turkey, the David (Michelangelo) of Italy, the Mathura Herakles of India, the Manunggul Jar of the Philippines, the Crown of Baekje of South Korea, The Hay Wain of the United Kingdom and the Benin Bronzes of Nigeria. The second proposed list will focus on the world's living species, such as the komodo dragon of Indonesia, the panda of China, the bald eagle of North American countries, the aye-aye of Madagascar, the Asiatic lion of India, the kakapo of New Zealand, and the mountain tapir of Colombia, Ecuador and Peru.
Media
UNESCO and its specialized institutions issue a number of magazines.
The UNESCO Courier magazine states its mission to "promote UNESCO's ideals, maintain a platform for the dialogue between cultures and provide a forum for international debate". Since March 2006 it has been available online, with limited printed issues. Its articles express the opinions of the authors which are not necessarily the opinions of UNESCO. There was a hiatus in publishing between 2012 and 2017.
In 1950, UNESCO initiated the quarterly review Impact of Science on Society (also known as Impact) to discuss the influence of science on society. The journal ceased publication in 1992. UNESCO also published Museum International Quarterly from the year 1948.
Official UNESCO NGOs
UNESCO has official relations with 322 international non-governmental organizations (NGOs). Most of these are what UNESCO calls "operational"; a select few are "formal".
The highest form of affiliation to UNESCO is "formal associate", and the 22 NGOs with formal associate (ASC) relations occupying offices at UNESCO are:
Institutes and centers
The institutes are specialized departments of the organization that support UNESCO's programme, providing specialized support for cluster and national offices.
Prizes
UNESCO awards 22 prizes in education, science, culture and peace:
Félix Houphouët-Boigny Peace Prize
L'Oréal-UNESCO Awards for Women in Science
UNESCO/King Sejong Literacy Prize
UNESCO/Confucius Prize for Literacy
UNESCO/Emir Jaber al-Ahmad al-Jaber al-Sabah Prize to promote Quality Education for Persons with Intellectual Disabilities
UNESCO King Hamad Bin Isa Al-Khalifa Prize for the Use of Information and Communication Technologies in Education
UNESCO/Hamdan Bin Rashid Al-Maktoum Prize for Outstanding Practice and Performance in Enhancing the Effectiveness of Teachers
UNESCO/Kalinga Prize for the Popularization of Science
UNESCO/Institut Pasteur Medal for an outstanding contribution to the development of scientific knowledge that has a beneficial impact on human health
UNESCO/Sultan Qaboos Prize for Environmental Preservation
Great Man-Made River International Water Prize for Water Resources in Arid Zones presented by UNESCO (title to be reconsidered)
Michel Batisse Award for Biosphere Reserve Management
UNESCO/Bilbao Prize for the Promotion of a Culture of Human Rights
UNESCO Prize for Peace Education
UNESCO-Madanjeet Singh Prize for the Promotion of Tolerance and Non-Violence
UNESCO/International José Martí Prize
UNESCO/Avicenna Prize for Ethics in Science
UNESCO/Juan Bosch Prize for the Promotion of Social Science Research in Latin America and the Caribbean
Sharjah Prize for Arab Culture
Melina Mercouri International Prize for the Safeguarding and Management of Cultural Landscapes (UNESCO-Greece)
IPDC-UNESCO Prize for Rural Communication
UNESCO/Guillermo Cano World Press Freedom Prize
UNESCO/Jikji Memory of the World Prize
UNESCO-Equatorial Guinea International Prize for Research in the Life Sciences
Carlos J. Finlay Prize for Microbiology
Inactive prizes
International Simón Bolívar Prize (inactive since 2004)
UNESCO Prize for Human Rights Education
UNESCO/Obiang Nguema Mbasogo International Prize for Research in the Life Sciences (inactive since 2010)
UNESCO Prize for the Promotion of the Arts
International Days observed at UNESCO
The International Days observed at UNESCO are listed in the table below:
Member states
As of January 2019, UNESCO has 193 member states and 11 associate members. Some members are not independent states and some members have additional National Organizing Committees from some of their dependent territories. UNESCO state parties are the United Nations member states (except Liechtenstein, United States and Israel), as well as Cook Islands, Niue and Palestine. The United States and Israel left UNESCO on 31 December 2018.
Governing bodies
Director-General
As of 2022, there have been 11 Directors-General of UNESCO since its inception: nine men and two women. The 11 Directors-General of UNESCO have come from six regions within the organization: West Europe (5), Central America (1), North America (2), West Africa (1), East Asia (1), and East Europe (1).
To date, there has been no elected Director-General from the remaining ten regions within UNESCO: Southeast Asia, South Asia, Central and North Asia, Middle East, North Africa, East Africa, Central Africa, South Africa, Australia-Oceania, and South America.
The list of the Directors-General of UNESCO since its establishment in 1946 is as follows:
General Conference
This is the list of the sessions of the UNESCO General Conference held since 1946:
Executive Board
Offices and headquarters
The UNESCO headquarters, the World Heritage Centre, is located at Place de Fontenoy in Paris, France. Its architect was Marcel Breuer. It includes a Garden of Peace which was donated by the Government of Japan. This garden was designed by American-Japanese sculptor artist Isamu Noguchi in 1958 and installed by Japanese gardener Toemon Sano. In 1994–1995, in memory of the 50th anniversary of UNESCO, a meditation room was built by Tadao Ando.
UNESCO's field offices across the globe are categorized into four primary office types based upon their function and geographic coverage: cluster offices, national offices, regional bureaus and liaison offices.
Field offices by region
The following list of all UNESCO Field Offices is organized geographically by UNESCO Region and identifies the members states and associate members of UNESCO which are served by each office.
Africa
Abidjan – National Office to Côte d'Ivoire
Abuja – National Office to Nigeria
Accra – Cluster Office for Benin, Côte d'Ivoire, Ghana, Liberia, Nigeria, Sierra Leone and Togo
Addis Ababa – Liaison Office with the African Union and with the Economic Commission for Africa
Bamako – Cluster Office for Burkina Faso, Guinea, Mali and Niger
Brazzaville – National Office to the Republic of the Congo
Bujumbura – National Office to Burundi
Dakar – Regional Bureau for Education in Africa and Cluster Office for Cape Verde, Gambia, Guinea-Bissau, and Senegal
Dar es Salaam – Cluster Office for Comoros, Madagascar, Mauritius, Seychelles and Tanzania
Harare – Cluster Office for Botswana, Malawi, Mozambique, Zambia and Zimbabwe
Juba – National Office to South Sudan
Kinshasa – National Office to the Democratic Republic of the Congo
Libreville – Cluster Office for the Republic of the Congo, Democratic Republic of the Congo, Equatorial Guinea, Gabon and Sao Tome and Principe
Maputo – National Office to Mozambique
Nairobi – Regional Bureau for Sciences in Africa and Cluster Office for Burundi, Djibouti, Eritrea, Kenya, Rwanda, Somalia, South Sudan and Uganda
Windhoek – National Office to Namibia
Yaoundé – Cluster Office to Cameroon, Central African Republic and Chad
Arab States
Amman – National Office to Jordan
Beirut – Regional Bureau for Education in the Arab States and Cluster Office to Lebanon, Syria, Jordan, Iraq and Palestine
Cairo – Regional Bureau for Sciences in the Arab States and Cluster Office for Egypt and Sudan
Doha – Cluster Office to Bahrain, Kuwait, Oman, Qatar, Saudi Arabia, United Arab Emirates and Yemen
Iraq – National Office for Iraq (currently located in Amman, Jordan)
Khartoum – National Office to Sudan
Manama – Arab Regional Centre for World Heritage
Rabat – Cluster Office to Algeria, Libya, Mauritania, Morocco and Tunisia
Ramallah – National Office to the Palestinian Territories
Asia and Pacific
Apia – Cluster Office to Australia, Cook Islands, Fiji, Kiribati, Marshall Islands, Federated States of Micronesia, Nauru, New Zealand, Niue, Palau, Papua New Guinea, Samoa, Solomon Islands, Tonga, Tuvalu, Vanuatu and Tokelau (Associate Member)
Bangkok – Regional Bureau for Education in Asia and the Pacific and Cluster Office to Thailand, Burma, Laos, Singapore and Vietnam
Beijing – Cluster Office to North Korea, Japan, Mongolia, the People's Republic of China and South Korea
Dhaka – National Office to Bangladesh
Hanoi – National Office to Vietnam
Islamabad – National Office to Pakistan
Jakarta – Regional Bureau for Sciences in Asia and the Pacific and Cluster Office to the Philippines, Brunei, Indonesia, Malaysia, and East Timor
Manila – National Office to the Philippines
Kabul – National Office to Afghanistan
Kathmandu – National Office to Nepal
New Delhi – Cluster Office to Bangladesh, Bhutan, India, Maldives and Sri Lanka
Phnom Penh – National Office to Cambodia
Tashkent – National Office to Uzbekistan
Tehran – Cluster Office to Afghanistan, Iran, Pakistan and Turkmenistan
Europe and North America
Almaty – Cluster Office to Kazakhstan, Kyrgyzstan, Tajikistan and Uzbekistan
Brussels – Liaison Office to the European Union and its subsidiary bodies in Brussels
Geneva – Liaison Office to the United Nations in Geneva
New York City – Liaison Office to the United Nations in New York
Venice – Regional Bureau for Sciences and Culture in Europe
Latin America and the Caribbean
Brasilia – National Office to Brazil
Guatemala City – National Office to Guatemala
Havana – Regional Bureau for Culture in Latin America and the Caribbean and Cluster Office to Cuba, Dominican Republic, Haiti and Aruba
Kingston – Cluster Office to Antigua and Barbuda, Bahamas, Barbados, Belize, Dominica, Grenada, Guyana, Jamaica, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, Suriname and Trinidad and Tobago as well as the associate member states of British Virgin Islands, Cayman Islands, Curaçao and Sint Maarten
Lima – National Office to Peru
Mexico City – National Office to Mexico
Montevideo – Regional Bureau for Sciences in Latin America and the Caribbean and Cluster Office to Argentina, Brazil, Chile, Paraguay and Uruguay
Port-au-Prince – National Office to Haiti
Quito – Cluster Office to Bolivia, Colombia, Ecuador and Venezuela
San José – Cluster Office to Costa Rica, El Salvador, Guatemala, Honduras, Mexico, Nicaragua and Panama
Santiago de Chile – Regional Bureau for Education in Latin America and the Caribbean and National Office to Chile
Partner Organisations
International Committee of the Red Cross (ICRC)
Blue Shield International (BSI)
International Council of Museums (ICOM)
International Council on Monuments and Sites (ICOMOS)
International Institute of Humanitarian Law (IIHL)
Controversies
New World Information and Communication Order
UNESCO has been the centre of controversy in the past, particularly in its relationships with the United States, the United Kingdom, Singapore and the former Soviet Union. During the 1970s and 1980s, UNESCO's support for a "New World Information and Communication Order" and its MacBride report calling for democratization of the media and more egalitarian access to information was condemned in these countries as attempts to curb freedom of the press. UNESCO was perceived as a platform for communists and Third World dictators to attack the West, in contrast to accusations made by the USSR in the late 1940s and early 1950s. In 1984, the United States withheld its contributions and withdrew from the organization in protest, followed by the United Kingdom in 1985. Singapore also withdrew at the end of 1985, citing rising membership fees. Following a change of government in 1997, the UK rejoined. The United States rejoined in 2003, followed by Singapore on 8 October 2007.
Israel
Israel was admitted to UNESCO in 1949, one year after its creation, and maintained its membership until it withdrew in 2019.
In 2010, Israel designated the Cave of the Patriarchs, Hebron and Rachel's Tomb, Bethlehem as National Heritage Sites and announced restoration work, prompting criticism from the Obama administration and protests from Palestinians. In October 2010, UNESCO's Executive Board voted to declare the sites as "al-Haram al-Ibrahimi/Tomb of the Patriarchs" and "Bilal bin Rabah Mosque/Rachel's Tomb" and stated that they were "an integral part of the occupied Palestinian Territories" and any unilateral Israeli action was a violation of international law.
UNESCO described the sites as significant to "people of the Muslim, Christian and Jewish traditions", and accused Israel of highlighting only the Jewish character of the sites.
Israel in turn accused UNESCO of "detach[ing] the Nation of Israel from its heritage", and accused it of being politically motivated.
The Rabbi of the Western Wall said that Rachel's tomb had not previously been declared a holy Muslim site. Israel partially suspended ties with UNESCO. Israeli Deputy Foreign Minister Danny Ayalon declared that the resolution was a "part of Palestinian escalation".
Zevulun Orlev, chairman of the Knesset Education and Culture Committee, referred to the resolutions as an attempt to undermine the mission of UNESCO as a scientific and cultural organization that promotes cooperation throughout the world.
On 28 June 2011, UNESCO's World Heritage Committee, at Jordan's insistence, censured Israel's decision to demolish and rebuild the Mughrabi Gate Bridge in Jerusalem for safety reasons. Israel stated that Jordan had signed an agreement with Israel stipulating that the existing bridge must be dismantled for safety reasons; Jordan disputed the agreement, saying that it was only signed under U.S. pressure. Israel was also unable to address the UNESCO committee over objections from Egypt.
In January 2014, days before it was scheduled to open, UNESCO Director-General, Irina Bokova, "indefinitely postponed" and effectively cancelled an exhibit created by the Simon Wiesenthal Center entitled "The People, The Book, The Land: The 3,500-year relationship between the Jewish people and the Land of Israel". The event was scheduled to run from 21 January through 30 January in Paris. Bokova cancelled the event after representatives of Arab states at UNESCO argued that its display would "harm the peace process". The author of the exhibition, Professor Robert Wistrich of the Hebrew University's Vidal Sassoon International Center for the Study of Anti-Semitism, called the cancellation an "appalling act", and characterized Bokova's decision as "an arbitrary act of total cynicism and, really, contempt for the Jewish people and its history". UNESCO amended the decision to cancel the exhibit within the year, and it quickly achieved popularity and was viewed as a great success.
On 1 January 2019, Israel formally left UNESCO in pursuance of the US withdrawal over the perceived continuous anti-Israel bias.
Occupied Palestine Resolution
On 13 October 2016, UNESCO passed a resolution on East Jerusalem that condemned Israel for "aggressions" by Israeli police and soldiers and "illegal measures" against the freedom of worship and Muslims' access to their holy sites, while also recognizing Israel as the occupying power. Palestinian leaders welcomed the decision. While the text acknowledged the "importance of the Old City of Jerusalem and its walls for the three monotheistic religions", it referred to the sacred hilltop compound in Jerusalem's Old City only by its Muslim name "Al-Haram al-Sharif", Arabic for Noble Sanctuary. In response, Israel denounced the UNESCO resolution for its omission of the words "Temple Mount" or "Har HaBayit", stating that it denies Jewish ties to the key holy site. After receiving criticism from numerous Israeli politicians and diplomats, including Benjamin Netanyahu and Ayelet Shaked, Israel froze all ties with the organization. The resolution was condemned by Ban Ki-moon and the Director-General of UNESCO, Irina Bokova, who said that Judaism, Islam and Christianity have clear historical connections to Jerusalem and "to deny, conceal or erase any of the Jewish, Christian or Muslim traditions undermines the integrity of the site. "Al-Aqsa Mosque [or] Al-Haram al-Sharif" is also Temple Mount, whose Western Wall is the holiest place in Judaism." It was also rejected by the Czech Parliament which said the resolution reflects a "hateful anti-Israel sentiment", and hundreds of Italian Jews demonstrated in Rome over Italy's abstention. On 26 October, UNESCO approved a reviewed version of the resolution, which also criticized Israel for its continuous "refusal to let the body's experts access Jerusalem's holy sites to determine their conservation status". Despite containing some softening of language following Israeli protests over a previous version, Israel continued to denounce the text. The resolution refers to the site Jews and Christians refer to as the Temple Mount, or Har HaBayit in Hebrew, only by its Arab name — a significant semantic decision also adopted by UNESCO's executive board, triggering condemnation from Israel and its allies. U.S. Ambassador Crystal Nix Hines stated: "This item should have been defeated. These politicized and one-sided resolutions are damaging the credibility of UNESCO."
In October 2017, the United States and Israel announced they would withdraw from the organization, citing in-part anti-Israel bias.
Palestine
Palestinian youth magazine controversy
In February 2011, an article was published in a Palestinian youth magazine in which a teenage girl described one of her four role models as Adolf Hitler. In December 2011, UNESCO, which partly funded the magazine, condemned the material and subsequently withdrew support.
Islamic University of Gaza controversy
In 2012, UNESCO decided to establish a chair at the Islamic University of Gaza in the field of astronomy, astrophysics, and space sciences, fueling controversy and criticism. Israel bombed the school in 2008 stating that they develop and store weapons there, which Israel restated in criticizing UNESCO's move.
The head, Kamalain Shaath, defended UNESCO, stating that "the Islamic University is a purely academic university that is interested only in education and its development". Israeli ambassador to UNESCO Nimrod Barkan planned to submit a letter of protest with information about the university's ties to Hamas, especially angry that this was the first Palestinian university that UNESCO chose to cooperate with. The Jewish organization B'nai B'rith criticized the move as well.
Che Guevara
In 2013, UNESCO announced that the collection "The Life and Works of Ernesto Che Guevara" became part of the Memory of the World Register. US Congresswoman Ileana Ros-Lehtinen condemned this decision, saying that the organization acts against its own ideals:
UN Watch also condemned this selection by UNESCO.
Listing Nanjing Massacre documents
In 2015, Japan threatened to halt funding for UNESCO over the organization's decision to include documents relating to the 1937 Nanjing massacre in the latest listing for its "Memory of the World" program. In October 2016, Japanese Foreign Minister Fumio Kishida confirmed that Japan's 2016 annual funding of ¥4.4 billion had been suspended, although he denied any direct link with the Nanjing document controversy.
US withdrawals
The United States withdrew from UNESCO in 1984, citing the "highly politicized" nature of the organisation, its ostensible "hostility toward the basic institutions of a free society, especially a free market and a free press", as well as its "unrestrained budgetary expansion", and poor management under then Director-General Amadou-Mahtar M'Bow of Senegal.
On 19 September 1989, former U.S. Congressman Jim Leach stated before a Congressional subcommittee:
Leach concluded that the record showed Israel bashing, a call for a new world information order, money management, and arms control policy to be the impetus behind the withdrawal; he asserted that before departing from UNESCO, a withdrawal from the IAEA had been pushed on him. On 1 October 2003, the U.S. rejoined UNESCO.
On 12 October 2017, the United States notified UNESCO that it will again withdraw from the organization on 31 December 2018 and will seek to establish a permanent observer mission beginning in 2019. The Department of State cited "mounting arrears at UNESCO, the need for fundamental reform in the organization, and continuing anti-Israel bias at UNESCO". Israel praised the withdrawal decision as "brave" and "moral".
The United States has not paid over $600 million in dues since it stopped paying its $80 million annual UNESCO dues when Palestine became a full member in 2011. Israel and the US were among the 14 votes against the membership out of 194 member countries.
Kurdish-Turkish conflict
On 25 May 2016, the noted Turkish poet and human rights activist Zülfü Livaneli resigned as Turkey's only UNESCO goodwill ambassador. He highlighted the human rights situation in Turkey and the destruction of historical Sur district of Diyarbakir, the largest city in Kurdish-majority southeast Turkey, during fighting between the Turkish army and Kurdish militants as the main reasons for his resignation. Livaneli said: "To pontificate on peace while remaining silent against such violations is a contradiction of the fundamental ideals of UNESCO."
Campaigns against illicit art trading
UNESCO has drawn criticism for aspects of its 2020 celebration of the 50th anniversary of the 1970 convention against the illicit trade of cultural property.
The UNESCO 1970 Convention marked a move towards cultural nationalism. The April 1863 Lieber 'codes of conduct' for warfare and cultural property (backed by The Hague Convention's 'all mankind' mantra) followed an international approach, where cultural objects were 'fair game' so long as they were not destroyed, for the benefit of the global knowledge pool. In 1970, UNESCO pioneered and documented a new national approach, under which the importation of illicit cultural objects, for example the results of plundered territories or invaded land (see James Cook & The Gweagal Shield; Elgin Marbles), should be prevented. Furthermore, the Articles demand the repatriation of objects that are still in the possession of those who accessed them illicitly.
These two approaches are neatly defined as cultural internationalism and cultural nationalism. Neither has prevailed persuasively in academia, though cultural nationalism is campaigned for most prominently. Merryman, a pioneering academic in art and cultural law, notes the benefit to society of debating the two paradigms, given that neither has prevailed in history.
In 2020 UNESCO stated that the size of the illicit trade in cultural property amounted to 10 billion dollars a year. A report that same year by the RAND Corporation suggested the actual market is "not likely to be larger than a few hundred million dollars each year". An expert cited by UNESCO as the source of the 10 billion figure denied making the claim and said he had "no idea" where the figure came from. Art dealers were particularly critical of the UNESCO figure, because it amounted to 15% of the total world art market.
In November 2020 part of a UNESCO advertising campaign intended to highlight international trafficking in looted artefacts had to be withdrawn, after it falsely presented a series of museum-held artworks with known provenances as recently looted objects held in private collections. The adverts claimed that a head of Buddha in the Metropolitan Museum's collection since 1930 had been looted from Kabul Museum in 2001 and then smuggled into the US art market; that a funerary monument from Palmyra that the MET had acquired in 1901 had been recently looted from the Palmyra Museum by Islamic State militants and then smuggled into the European antiquities market, and that an Ivory Coast mask with a provenance that indicates it was in the US by 1954 was looted during armed clashes in 2010–2011. After complaints from the MET, the adverts were withdrawn.
Products and services
UNESDOC Database – Contains over 146,000 UNESCO documents in full text published since 1945 as well as metadata from the collections of the UNESCO Library and documentation centres in field offices and institutes.
Information processing tools
UNESCO develops, maintains and disseminates, free of charge, two interrelated software packages for database management (CDS/ISIS [not to be confused with UK police software package ISIS]) and data mining/statistical analysis (IDAMS).
CDS/ISIS – a generalised information storage and retrieval system. The Windows version may run on a single computer or in a local area network. The JavaISIS client/server components allow remote database management over the Internet and are available for Windows, Linux and Macintosh. Furthermore, GenISIS allows the user to produce HTML Web forms for CDS/ISIS database searching. The ISIS_DLL provides an API for developing CDS/ISIS based applications.
OpenIDAMS – a software package for processing and analysing numerical data developed, maintained and disseminated by UNESCO. The original package was proprietary but UNESCO has initiated a project to provide it as open-source.
IDIS – a tool for direct data exchange between CDS/ISIS and IDAMS
See also
Academic Mobility Network
League of Nations archives
Total Digital Access to the League of Nations Archives Project (LONTAD)
UNESCO Intangible Cultural Heritage Lists
UNESCO Reclining Figure 1957–58, sculpture by Henry Moore
UniRef
Further reading
Finnemore, Martha. 1993. "International Organizations as Teachers of Norms: The United Nations Educational, Scientific, and Cultural Organization and Science Policy." International Organization Vol. 47, No. 4 (Autumn, 1993), pp. 565–597
References
External links
Organizations established in 1945
Conservation and restoration organizations
Heritage organizations
International cultural organizations
International educational organizations
International scientific organizations
International organizations based in France
Organizations based in Paris
United Nations Development Group
United Nations specialized agencies
France and the United Nations
1945 establishments in France
Peace organizations
|
3980464
|
https://en.wikipedia.org/wiki/Macosquin
|
Macosquin
|
Macosquin () is a small village, townland, and civil parish in County Londonderry, Northern Ireland. It is south-west of Coleraine, on the road to Limavady. In the 2011 Census it had a population of 614 people. The area is known for its caves and springs. It is situated within Causeway Coast and Glens district.
History
Following fast growth in the 1950s and 1960s the village had a peak population of over 800 in the 1970s, but this has shrunk to a 2011 population of 614.
Churches
Nearest Religious buildings in Macosquin Village/Macosquin District:
St. Mary's Church of Ireland Parish Church
Macosquin Presbyterian Church
Coleraine Gospel Hall
2011 Census
Macosquin is classified as a small village or hamlet by the NI Statistics and Research Agency (NISRA) (i.e. with population between 500 and 1,000 people). On Census day 2011 there were 614 people living in Macosquin. Of these:
36.97% were aged under 16 years and 18.89% were aged 60 and over
48.53% of the population were male and 51.46% were female
3.09% were from a Catholic background, 82.57% were from a Protestant background, and 9.6% considered themselves as having ‘no religion’
2.93% of people aged 16–74 were unemployed
For more details see: NI Neighbourhood Information
See also
List of civil parishes of County Londonderry
References
External links
BBC - Plantation of Ulster - Macosquin
Culture Northern Ireland
Macosquin Primary School
Villages in County Londonderry
Townlands of County Londonderry
Civil parishes of County Londonderry
Causeway Coast and Glens district
|
5290466
|
https://en.wikipedia.org/wiki/University%20of%20Pannonia
|
University of Pannonia
|
The University of Pannonia (University of Veszprém until March 1, 2006; Hungarian Pannon Egyetem, formerly known as Veszprémi Egyetem) is a university located in Veszprém, Hungary. It was founded in 1949 and is organized in five faculties: Arts and Humanities, Engineering, Agriculture, Economics and Information Technology.
History and profile
The university was founded in 1949. In the beginning, it operated as a regional faculty of the Technical University of Budapest. In 1951, it became independent under the name of Veszprém University of Chemical Engineering. From 1991, the university was called the University of Veszprém.
The university first offered courses in four areas of Chemical Technology: Oil and Coal Technology, Electrochemical Industry, Inorganic Chemical Technology, Silicate Chemistry. From the mid-1960s two courses — Nuclear Chemistry and Technology, Process Control and System Engineering — became part of the Chemical Engineering education in Veszprém.
The changing and increasing requirements set for the graduates persuaded the university to continually reform and restructure its education activity. As a result, new courses were introduced: agrochemistry in 1970, Chemical Engineering Management in 1973, higher level foreign language teaching in 1983 and Instrumentation and Measurement Techniques in 1984.
The restructuring process has accelerated in recent years, resulting in the renewal and expansion of the university's education profile.
To respond to society's growing demand for computer professionals, the university created the educational infrastructure for its Information Technology and Automation courses, with the help of external financial support and its own scientific expertise.
As a result of the increasing openness of Hungary, the need for teachers of foreign languages increased considerably. Having recognized this, the university introduced teacher training courses for teachers of English, and later for teachers of German and French, along with the education of philologists in specialties such as Hungarian language and literature and theatre sciences. In the meantime, the education of Catholic theologians started in the form of a regional faculty of the Theologic College. Simultaneously, the Faculty of Teacher Training (now: Faculty of Arts) and the Faculty of Engineering were established, and the name of the university was changed to University of Veszprém.
The centre of scientific and cultural life, the University of Veszprém with the 200-year-old Georgikon Faculty of Agriculture turned into a three-faculty university on 1 January 2000. On 1 September 2003, two new faculties were created: the Faculty of Economics and the Faculty of Information Technology.
Every year the University of Pannonia hosts national and international research conferences, which strengthen its international reputation. In the near future, the offer will include new faculties and new schools. The leaders of the institution strive to turn the university into the educational, intellectual, and research centre of the Transdanubian region and to help find its place in Europe.
Organization
These are the five faculties:
Faculty of Economic Sciences
Faculty of Engineering
Georgikon Faculty of Agriculture (in Keszthely)
Faculty of Information Technology
Faculty of Modern Philology and Social Sciences (former Faculty of Arts)
Rectors
Károly Polinszky
Endre Bereczky
Ernő Nemecz
Károly Polinszky
Antal László
Pál Káldi
Ernő Nemecz
János Inczédy
Bálint Heil
János Liszi
István Győri
Zoltán Gaál
Ákos Rédey
Ferenc Friedler
András Gelencsér (current)
See also
List of colleges and universities
Veszprém
Sándor Dominich
External links
University of Pannonia Website
University of Pannonia
Educational institutions established in 1949
Education in Veszprém County
Buildings and structures in Veszprém County
1949 establishments in Hungary
|
44275267
|
https://en.wikipedia.org/wiki/Bangalore
|
Bangalore
|
Bangalore (), officially known as Bengaluru (), is the capital and the largest city of the Indian state of Karnataka. It has a population of more than and a metropolitan population of around , making it the third most populous city and fifth most populous urban agglomeration in India. Located in southern India on the Deccan Plateau, at a height of over above sea level, Bangalore is known for its pleasant climate throughout the year. Its elevation is the highest among the major cities of India.
The city's recorded history dates back to around 890 CE, based on a stone inscription found at the Nageshwara Temple in Begur, Bangalore. The Begur inscription is written in Halegannada (ancient Kannada) and mentions 'Bengaluru Kalaga' (battle of Bengaluru). It was a significant turning point in the history of Bangalore as it bears the earliest reference to the name 'Bengaluru'. In 1537 CE, Kempé Gowdā – a feudal ruler under the Vijayanagara Empire – established a mud fort considered to be the foundation of modern Bangalore and its oldest areas, or petes, which exist to the present day. After the fall of the Vijayanagara Empire in the 16th century, the Mughals sold Bangalore to Chikkadevaraja Wodeyar (1673–1704), the then ruler of the Kingdom of Mysore, for three lakh rupees. When Haider Ali seized control of the Kingdom of Mysore, the administration of Bangalore passed into his hands.
The city was captured by the British East India Company after its victory in the Fourth Anglo-Mysore War (1799), and the Company returned administrative control of the city to the Maharaja of Mysore. The old city developed in the dominions of the Maharaja of Mysore and was made capital of the Princely State of Mysore, which existed as a nominally sovereign entity of the British Raj. In 1809, the British shifted their cantonment to Bangalore, outside the old city, and a town grew up around it, which was governed as part of British India. Following India's independence in 1947, Bangalore became the capital of Mysore State, and remained the capital when the new Indian state of Karnataka was formed in 1956. The two urban settlements of Bangalore – city and cantonment – which had developed as independent entities, merged into a single urban centre in 1949. The existing Kannada name, Bengalūru, was declared the official name of the city in 2006.
Bangalore is widely regarded as the "Silicon Valley of India" (or "IT capital of India") because of its role as the nation's leading information technology (IT) exporter. Indian technological organisations are headquartered in the city. A demographically diverse city, Bangalore is the second fastest-growing major metropolis in India. Recent estimates of the metro economy of its urban area have ranked Bangalore either the fourth- or fifth-most productive metro area of India. , Bangalore was home to 7,700 millionaires and 8 billionaires with a total wealth of $320 billion. It is home to many educational and research institutions. Numerous state-owned aerospace and defence organisations are located in the city. The city also houses the Kannada film industry. It was ranked the most liveable Indian city with a population of over a million under the Ease of Living Index 2020.
Etymology
The name "Bangalore" represents an anglicised version of the city's Kannada name Bengalūru (). It was the name of a village near Kodigehalli in Bangalore city today and was used by Kempegowda to christen the city as Bangalore at the time of its foundation. The earliest reference to the name "Bengalūru" was found in a ninth-century Western Ganga dynasty stone inscription on a vīra gallu (; , a rock edict extolling the virtues of a warrior). In an inscription found in Begur, "Bengalūrū" is referred to as a place in which a battle was fought in 890 CE.
An apocryphal story recounts that the twelfth century Hoysala king Veera Ballala II, while on a hunting expedition, lost his way in the forest. Tired and hungry, he came across a poor old woman who served him boiled beans. The grateful king named the place "benda-kaal-uru" (literally, "town of boiled beans"), which eventually evolved into "Bengalūru". Suryanath Kamath has put forward an explanation of a possible floral origin of the name, being derived from benga, the Kannada term for Pterocarpus marsupium (also known as the Indian Kino Tree), a species of dry and moist deciduous trees that grew abundantly in the region.
On 11 December 2005, the Government of Karnataka announced that it had accepted a proposal by Jnanpith Award winner U. R. Ananthamurthy to rename Bangalore to Bengalūru. On 27 September 2006, the Bruhat Bengaluru Mahanagara Palike (BBMP) passed a resolution to implement the proposed name change. The government of Karnataka accepted the proposal, and it was decided to officially implement the name change from 1 November 2006. The Union government approved this request, along with name changes for 11 other Karnataka cities, in October 2014. Hence, Bangalore was renamed to "Bengaluru" on 1 November 2014.
History
Early and medieval history
A discovery of Stone Age artefacts during the 2001 census of India at Jalahalli, Sidhapura and Jadigenahalli, all of which are located on Bangalore's outskirts today, suggests probable human settlement around 4000 BCE. Around 1,000 BCE (Iron Age), burial grounds were established at Koramangala and Chikkajala on the outskirts of Bangalore. Coins of the Roman emperors Augustus, Tiberius and Claudius found at Yeswanthpur and HAL indicate that the region was involved in trans-oceanic trade with the Romans and other civilisations in 27 BCE.
The region of modern-day Bangalore was part of several successive South Indian kingdoms. Between the fourth and the tenth centuries, the Bangalore region was ruled by the Western Ganga dynasty of Karnataka, the first dynasty to set up effective control over the region. According to Edgar Thurston there were twenty-eight kings who ruled Gangavadi from the start of the Christian era until its conquest by the Cholas. These kings belonged to two distinct dynasties: the earlier line of the Solar race which had a succession of seven kings of the Ratti or Reddi tribe, and the later line of the Ganga race. The Western Gangas ruled the region initially as a sovereign power (350–550), and later as feudatories of the Chalukyas of Badami, followed by the Rashtrakutas until the tenth century. The Begur Nageshwara Temple was commissioned around 860, during the reign of the Western Ganga King Ereganga Nitimarga I and extended by his successor Nitimarga II. Around 1004, during the reign of Raja Raja Chola I, the Cholas defeated the Western Gangas under the command of the crown prince Rajendra Chola I, and captured Bangalore. During this period, the Bangalore region witnessed the migration of many groups — warriors, administrators, traders, artisans, pastorals, cultivators, and religious personnel from Tamil Nadu and other Kannada-speaking regions. The Chokkanathaswamy temple at Domlur, the Aigandapura complex near Hesaraghatta, Mukthi Natheshwara Temple at Binnamangala, Choleshwara Temple at Begur, Someshwara Temple at Ulsoor, date from the Chola era.
In 1117, the Hoysala king Vishnuvardhana defeated the Cholas in the Battle of Talakad in south Karnataka, and extended Hoysala rule over the region. Vishnuvardhana expelled the Cholas from all parts of Mysore state. By the end of the 13th century, Bangalore became a source of contention between two warring cousins, the Hoysala ruler Veera Ballala III of Halebidu and Ramanatha, who administered from the Hoysala-held territory in Tamil Nadu. Veera Ballala III had appointed a civic head at Hudi (now within Bangalore Municipal Corporation limits), thus promoting the village to the status of a town. After Veera Ballala III's death in 1343, the next empire to rule the region was the Vijayanagara Empire, which itself saw the rise of four dynasties, the Sangamas (1336–1485), the Saluvas (1485–1491), the Tuluvas (1491–1565), and the Aravidu (1565–1646). During the reign of the Vijayanagara Empire, Achyuta Deva Raya of the Tuluva dynasty raised the Shivasamudra Dam across the Arkavati river at Hesaraghatta, whose reservoir is the present city's supply of regular piped water.
Foundation and early modern history
Modern Bangalore was begun in 1537 by a vassal of the Vijayanagara Empire, Kempe Gowda I, who aligned with the Vijayanagara empire to campaign against Gangaraja (whom he defeated and expelled to Kanchi), and who built a mud-brick fort for the people at the site that would become the central part of modern Bangalore. Kempe Gowda was restricted by rules made by Achuta Deva Raya, who feared the potential power of Kempe Gowda and did not allow a formidable stone fort. Kempe Gowda referred to the new town as his "gandubhūmi" or "Land of Heroes". Within the fort, the town was divided into smaller divisions—each called a "pete" (). The town had two main streets—Chikkapeté Street, which ran east–west, and Doddapeté Street, which ran north–south. Their intersection formed the Doddapeté Square—the heart of Bangalore. Kempe Gowda I's successor, Kempe Gowda II, built four towers that marked Bangalore's boundary. During the Vijayanagara rule, many saints and poets referred to Bangalore as "Devarāyanagara" and "Kalyānapura" or "Kalyānapuri" ("Auspicious City").
After the fall of the Vijayanagara Empire in 1565 in the Battle of Talikota, Bangalore's rule changed hands several times. Kempe Gowda declared independence, then in 1638, a large Adil Shahi Bijapur army led by Ranadulla Khan and accompanied by his second in command Shāhji Bhōnslé defeated Kempe Gowda III, and Bangalore was given to Shāhji as a jagir (feudal estate). In 1687, the Mughal general Kasim Khan, under orders from Aurangzeb, defeated Ekoji I, son of Shāhji, and sold Bangalore to Chikkadevaraja Wodeyar (1673–1704), the then ruler of the Kingdom of Mysore for three lakh rupees. After the death of Krishnaraja Wodeyar II in 1759, Hyder Ali, Commander-in-Chief of the Mysore Army, proclaimed himself the de facto ruler of the Kingdom of Mysore. Hyder Ali is credited with building the Delhi and Mysore gates at the northern and southern ends of the city in 1760. The kingdom later passed to Hyder Ali's son Tipu Sultan. Hyder and Tipu contributed towards the beautification of the city by building Lal Bagh Botanical Gardens in 1760. Under them, Bangalore developed into a commercial and military centre of strategic importance.
The Bangalore fort was captured by the British armies under Lord Cornwallis on 21 March 1791 during the Third Anglo-Mysore War and formed a centre for British resistance against Tipu Sultan. Following Tipu's death in the Fourth Anglo-Mysore War (1799), the British returned administrative control of the Bangalore "pētē" to the Maharaja of Mysore, and it was incorporated into the Princely State of Mysore, which existed as a nominally sovereign entity of the British Raj. The old city ("pētē") developed in the dominions of the Maharaja of Mysore. The Residency of Mysore State was first established in Mysore City in 1799 and later shifted to Bangalore in 1804. It was abolished in 1843, only to be revived in 1881 at Bangalore and closed down permanently in 1947 with Indian independence. The British found Bangalore to be a pleasant and appropriate place to station their garrison and therefore moved their cantonment to Bangalore from Seringapatam in 1809 near Ulsoor, about northeast of the city. A town grew up around the cantonment by absorbing several villages in the area. The new centre had its own municipal and administrative apparatus, though technically it was a British enclave within the territory of the Wodeyar Kings of the Princely State of Mysore. Two important developments which contributed to the rapid growth of the city were the introduction of telegraph connections to all major Indian cities in 1853 and a rail connection to Madras (now Chennai) in 1864.
Later modern and contemporary history
In the 19th century, Bangalore essentially became a twin city, with the "pētē", whose residents were predominantly Kannadigas, and the cantonment created by the British. Throughout the 19th century, the Cantonment gradually expanded and acquired a distinct cultural and political salience, as it was governed directly by the British and was known as the Civil and Military Station of Bangalore. While it remained in the princely territory of Mysore, the Cantonment had a large military presence and a cosmopolitan civilian population that came from outside the princely state of Mysore, including British and Anglo-Indian army officers.
Bangalore was hit by a plague epidemic in 1898 that claimed nearly 3,500 lives. The crisis caused by the outbreak catalysed the city's sanitation process. Telephone lines were laid to help co-ordinate anti-plague operations. Regulations for building new houses with proper sanitation facilities came into effect. A health officer was appointed and the city divided into four wards for better co-ordination. Victoria Hospital was inaugurated in 1900 by Lord Curzon, the then Governor-General of British India. New extensions in Malleswaram and Basavanagudi were developed in the north and south of the pētē. In 1903, motor vehicles came to be introduced in Bangalore. In 1906, Bangalore became one of the first cities in India to have electricity from hydro power, powered by the hydroelectric plant situated in Shivanasamudra. The Indian Institute of Science was established in 1909, which subsequently played a major role in developing the city as a science research hub. In 1912, the Bangalore torpedo, an offensive explosive weapon widely used in World War I and World War II, was devised in Bangalore by British army officer Captain McClintock of the Madras Sappers and Miners.
Bangalore's reputation as the "Garden City of India" began in 1927 with the silver jubilee celebrations of the rule of Krishnaraja Wodeyar IV. Several projects such as the construction of parks, public buildings and hospitals were instituted to improve the city. Bangalore played an important role during the Indian independence movement. Mahatma Gandhi visited the city in 1927 and 1934 and addressed public meetings here. In 1926, labour unrest in Binny Mills over the textile workers' demand for payment of a bonus led to a lathi charge and police firing, resulting in the death of four workers and several injuries. In July 1928, there were notable communal disturbances in Bangalore, when a Ganesh idol was removed from a school compound in the Sultanpet area of Bangalore. In 1940, the first flight between Bangalore and Bombay took off, which placed the city on India's urban map.
After India's independence in August 1947, Bangalore remained in the newly carved Mysore State of which the Maharaja of Mysore was the Rajapramukh (appointed governor). The "City Improvement Trust" was formed in 1945, and in 1949, the "City" and the "Cantonment" merged to form the Bangalore City Corporation. The government of Karnataka later constituted the Bangalore Development Authority in 1976 to co-ordinate the activities of these two bodies. Public sector employment and education provided opportunities for Kannadigas from the rest of the state to migrate to the city. Bangalore experienced rapid growth in the decades 1941–51 and 1971–81, which saw the arrival of many immigrants from northern Karnataka. By 1961, Bangalore had become the sixth largest city in India, with a population of 1,207,000. In the decades that followed, Bangalore's manufacturing base continued to expand with the establishment of private companies such as MICO (Motor Industries Company), which set up its manufacturing plant in the city.
By the 1980s, it was clear that urbanisation had spilled over the current boundaries, and in 1986, the Bangalore Metropolitan Region Development Authority, was established to co-ordinate the development of the entire region as a single unit. On 8 February 1981, a major fire broke out at Venus Circus in Bangalore, where more than 92 lives were lost, the majority of them being children. Bangalore experienced a growth in its real estate market in the 1980s and 1990s, spurred by capital investors from other parts of the country who converted Bangalore's large plots and colonial bungalows into multi-storied apartments. In 1985, Texas Instruments became the first multinational corporation to set up base in Bangalore. Other information technology companies followed suit and by the end of the 20th century, Bangalore had established itself as the Silicon Valley of India. Today, Bangalore is India's third most populous city. During the 21st century, Bangalore has suffered major terrorist attacks in 2008, 2010, and 2013.
Geography
Bangalore lies in the southeast of the South Indian state of Karnataka. It is in the heart of the Mysore Plateau (a region of the larger Cretaceous Deccan Plateau) at an average elevation of . It is located at and covers an area of . The majority of the city of Bangalore lies in the Bangalore Urban district of Karnataka and the surrounding rural areas are a part of the Bangalore Rural district. The Government of Karnataka has carved out the new district of Ramanagara from the old Bangalore Rural district.
The topography of Bangalore is generally flat, though the western parts of the city are hilly. The highest point is Vidyaranyapura Doddabettahalli, which is and is situated to the north-west of the city. No major rivers run through the city, although the Arkavathi and South Pennar cross paths at the Nandi Hills, to the north. The river Vrishabhavathi, a minor tributary of the Arkavathi, arises within the city at Basavanagudi and flows through the city. The rivers Arkavathi and Vrishabhavathi together carry much of Bangalore's sewage. A sewerage system, constructed in 1922, covers of the city and connects with five sewage treatment centres located in the periphery of Bangalore.
In the 16th century, Kempe Gowda I constructed many lakes to meet the town's water requirements. The Kempambudhi Kere, since overrun by modern development, was prominent among those lakes. In the earlier half of the 20th century, the Nandi Hills waterworks was commissioned by Sir Mirza Ismail (Diwan of Mysore, 1926–41 CE) to provide a water supply to the city. The river Kaveri provides around 80% of the total water supply to the city, with the remaining 20% obtained from the Thippagondanahalli and Hesaraghatta reservoirs of the Arkavathi river. Bangalore receives 800 million litres (211 million US gallons) of water a day, more than any other Indian city. However, Bangalore sometimes faces water shortages, especially during summer, more so in years of low rainfall. A random sampling study of the air quality index (AQI) of twenty stations within the city indicated scores that ranged from 76 to 314, suggesting heavy to severe air pollution around areas of traffic concentration.
Bangalore has a handful of freshwater lakes and water tanks, the largest of which are Madivala tank, Hebbal Lake, Ulsoor Lake, Yediyur Lake and Sankey Tank. However, recently many lakes have been polluted, decreasing the quality of the water. The Government is making revival and conservation efforts. Groundwater occurs in silty to sandy layers of the alluvial sediments. The Peninsular Gneissic Complex (PGC) is the most dominant rock unit in the area and includes granites, gneisses and migmatites, while the soils of Bangalore consist of red laterite and red, fine loamy to clayey soils.
Vegetation in the city is primarily in the form of large deciduous canopy trees and a smaller number of coconut trees. Though Bangalore has been classified as a part of seismic zone II (a stable zone), it has experienced earthquakes of magnitude as high as 4.5 on the Richter scale.
Climate
Bangalore has a tropical savanna climate (Köppen climate classification Aw) with distinct wet and dry seasons. Due to its high elevation, Bangalore usually enjoys a more moderate climate throughout the year, although occasional heat waves can make summer somewhat uncomfortable. The coolest month is January with an average low temperature of and the hottest month is April with an average high temperature of . The highest temperature ever recorded in Bangalore is (recorded on 24 April 2016), when there was a strong El Niño. There were also unofficial records of on that day. The lowest temperature ever recorded is in January 1884. Winter temperatures rarely drop below , and summer temperatures seldom exceed . Bangalore receives rainfall from both the northeast and the southwest monsoons, and the wettest months are September, October and August, in that order. The summer heat is moderated by fairly frequent thunderstorms, which occasionally cause power outages and local flooding. Most of the rainfall occurs during late afternoon, evening or night, and rain before noon is infrequent. November 2015 (290.4 mm) was recorded as one of the wettest months in Bangalore, with heavy rains causing severe flooding in some areas and the closure of a number of organisations for over a couple of days. The heaviest rainfall recorded in a 24-hour period fell on 1 October 1997.
Demographics
Bangalore is a megacity with a population of 8,443,675 in the city and 10,456,000 in the urban agglomeration, up from 8.5 million at the 2011 census. This makes it the third-most-populous city in India and the 18th-most-populous city in the world. Bangalore was the fastest-growing Indian metropolis after New Delhi between 1991 and 2001, with a growth rate of 38% during the decade. Residents of Bangalore are referred to as "Bangaloreans" in English, Bengaloorinavaru or Bengaloorigaru in Kannada and Banglori in Hindi or Urdu. People from other states have also migrated to Bangalore to live, study or work there.
census of India, 78.9% of Bangalore's population is Hindu, a little less than the national average. Muslims comprise 13.9% of the population, roughly the same as their national average. Christians and Jains account for 5.6% and 1.0% of the population, respectively, double their national averages. The city has a literacy rate of 89%. Roughly 10% of Bangalore's population lives in slums, a relatively low proportion compared to other cities in the developing world such as Mumbai (50%) and Nairobi (60%). The 2008 National Crime Records Bureau statistics indicate that Bangalore accounts for 8.5% of the total crimes reported from 35 major cities in India, an increase in the crime rate compared to the number of crimes fifteen years earlier.
Bangalore suffers from the same major urbanisation problems seen in many fast-growing cities in developing countries: rapidly escalating social inequality, mass displacement and dispossession, proliferation of slum settlements, and epidemic public health crisis due to severe water shortage and sewage problems in poor and working-class neighbourhoods.
Language
The official language of Bangalore is Kannada, which is spoken by 44.5% of the population. The second-largest language is Tamil, spoken by 15.0% of the population. 14% speak Telugu, 12% Urdu, 6% Hindi, 3% Malayalam and 2.07% Marathi as their first language. The Kannada spoken in Bangalore is a form called 'Old Mysuru Kannada', which is also used in most of the southern part of Karnataka. A vernacular dialect of this, known as Bangalore Kannada, is spoken among the youth in Bangalore and the adjoining Mysore regions. English is extensively spoken and is the principal language of the professional and business class.
The major communities of Bangalore who share a long history in the city, other than the Kannadigas, are the Telugus and Tamilians, who migrated to Bangalore in search of a better livelihood, and the Dakhanis. Already in the 16th century, Bangalore had a few speakers of Tamil and Telugu, who spoke Kannada to carry out low-profile jobs. However, the Telugu-speaking Morasu Vokkaligas are native to Bangalore. Telugu-speaking people initially came to Bangalore on the invitation of the Mysore royalty (a few of them have lineage dating back to Krishnadevaraya).
Other native communities are the Tuluvas and the Konkanis of coastal Karnataka, and the Kodavas of the Kodagu district of Karnataka. The migrant communities are Maharashtrians, Punjabis, Rajasthanis, Gujaratis, Tamilians, Telugus, Malayalis, Odias, Sindhis, Biharis, Jharkhandis and Bengalis. Bangalore once had a large Anglo-Indian population, the second largest after Calcutta. Today, there are around 10,000 Anglo-Indians in Bangalore. Bangalorean Christians include Tamil Christians, Mangalorean Catholics, Kannadiga Christians, Malayali Syrian Christians and Northeast Indian Christians. Muslims form a very diverse population, consisting of Dakhini and Urdu-speaking Muslims, Kutchi Memons, Labbay and Mappilas.
Other languages with large numbers of speakers include Konkani, Bengali, Marwari, Tulu, Odia, Gujarati, Kodagu, Punjabi, Lambadi, Sindhi and Nepali. As in the rest of the state, Kannada is the most widely spoken language, but English is a commonly spoken second language in the city.
Civic administration
Management
The Bruhat Bengaluru Mahanagara Palike (BBMP, Greater Bangalore Municipal Corporation) is in charge of the civic administration of the city. It was formed in 2007 by merging 100 wards of the erstwhile Bangalore Mahanagara Palike, with seven neighbouring City Municipal Councils, one Town Municipal Council and 110 villages around Bangalore. The number of wards increased to 198 in 2009. The BBMP is run by a city council composed of 250 members, including 198 corporators representing each of the wards of the city and 52 other elected representatives, consisting of members of Parliament and the state legislature. Elections to the council are held once every five years, with results being decided by popular vote. Members contesting elections to the council usually represent one or more of the state's political parties. A mayor and deputy mayor are also elected from among the elected members of the council. Elections to the BBMP were held on 28 March 2010, after a gap of three and a half years since the expiry of the previous elected body's term, and the Bharatiya Janata Party was voted into power – the first time it had ever won a civic poll in the city. Indian National Congress councillor Sampath Raj became the city's mayor in September 2017, the vote having been boycotted by the BJP. In September 2018, Indian National Congress councillor Gangambike Mallikarjun was elected as the mayor of Bangalore and took charge from the outgoing mayor, Sampath Raj. In 2019 BJP’s M Goutham Kumar took charge as mayor. On 10 September 2020, the term of the BBMP council ended and Gaurav Gupta was appointed as the administrator of BBMP.
Bangalore's rapid growth has created several problems relating to traffic congestion and infrastructural obsolescence that the Bangalore Mahanagara Palike has found challenging to address. The unplanned nature of growth in the city resulted in massive traffic gridlocks that the municipality attempted to ease by constructing a flyover system and by imposing one-way traffic systems. Some of the flyovers and one-ways mitigated the traffic situation moderately but were unable to adequately address the disproportionate growth of city traffic. A 2003 Battelle Environmental Evaluation System (BEES) evaluation of Bangalore's physical, biological and socioeconomic parameters indicated that Bangalore's water quality and terrestrial and aquatic ecosystems were close to ideal, while the city's socioeconomic parameters (traffic, quality of life), air quality and noise pollution scored poorly. The BBMP works in conjunction with the Bangalore Development Authority (BDA) and the Agenda for Bangalore's Infrastructure and Development Task Force (ABIDe) to design and implement civic and infrastructural projects.
The Bangalore City Police (BCP) is organised into seven geographic zones and runs 86 police stations, including two all-women police stations. Its units include the Traffic Police, the City Armed Reserve (CAR), the City Special Branch (CSB), the City Crime Branch (CCB) and the City Crime Records Bureau (CCRB). As capital of the state of Karnataka, Bangalore houses important state government facilities such as the Karnataka High Court, the Vidhana Soudha (the home of the Karnataka state legislature) and Raj Bhavan (the residence of the governor of Karnataka). Bangalore contributes four members to the lower house of the Indian Parliament, the Lok Sabha, from its four constituencies: Bangalore Rural, Bangalore Central, Bangalore North, and Bangalore South, and 28 members to the Karnataka Legislative Assembly.
Electricity in Bangalore is regulated through the Bangalore Electricity Supply Company (BESCOM), while water supply and sanitation facilities are provided by the Bangalore Water Supply and Sewerage Board (BWSSB).
The city has offices of the Consulate General of Germany, France, Japan, Israel, British Deputy High Commission, along with honorary consulates of Ireland, Finland, Switzerland, Maldives, Mongolia, Sri Lanka and Peru. It also has a trade office of Canada and a virtual Consulate of the United States.
Pollution control
Bangalore generates about 3,000 tonnes of solid waste per day, of which about 1,139 tonnes are collected and sent to composting units such as the Karnataka Composting Development Corporation. The remaining solid waste collected by the municipality is dumped in open spaces or on roadsides outside the city. In 2008, Bangalore produced around 2,500 metric tonnes of solid waste, which increased to 5,000 metric tonnes in 2012; the waste is transported from collection units located near Hesaraghatta Lake to the garbage dumping sites. The city suffers significantly from dust pollution, hazardous waste disposal, and disorganised, unscientific waste retrieval. The IT hub, the Whitefield region, is the most polluted area in Bangalore. A recent study found that over 36% of diesel vehicles in the city exceed the national limit for emissions.
Anil Kumar, Commissioner Bruhat Bengaluru Mahanagara Palike BBMP, said: "The deteriorating air quality in cities and its impact on public health is an area of growing concern for city authorities. While much is already being done about collecting and monitoring air quality data, little focus has been given on managing the impacts that bad air quality is having on the health of citizens."
Slums
report submitted to the World Bank by the Karnataka Slum Clearance Board, Bangalore had 862 slums out of a total of around 2,000 slums in Karnataka. The families living in the slums were not ready to move into the temporary shelters. 42% of the households had migrated from different parts of India, such as Chennai, Hyderabad and most of North India, and 43% of the households had remained in the slums for over 10 years. The Karnataka Municipality works to shift 300 families annually to newly constructed buildings. One-third of these slum clearance projects lacked basic service connections; 60% of slum dwellers lacked complete water supply lines and shared the BWSSB water supply.
Waste management
In 2012, Bangalore generated 2.1 million tonnes of Municipal Solid Waste (195.4 kg/cap/yr). The waste management scenario in the state of Karnataka is regulated by the Karnataka State Pollution Control Board (KSPCB) under the aegis of the Central Pollution Control Board (CPCB) which is a Central Government entity. As part of their Waste Management Guidelines the government of Karnataka through the Karnataka State Pollution Control Board (KSPCB) has authorised a few well-established companies to manage the biomedical waste and hazardous waste in the state of Karnataka.
Economy
Bangalore is the second fastest-growing metropolis in India and contributes 38% of India's total IT exports. Its economy is primarily service-oriented and industrialised, driven by the information technology, telecommunication, biotechnology, manufacturing and industrial (electronics, machinery, electricals, automobiles, food and beverages) sectors. Major industrial areas around Bangalore include Adugodi, Bidadi, Bommanahalli, Bommasandra, Domlur, Hoodi, Whitefield, Doddaballapura, Hoskote, Bashettihalli, Yelahanka, Electronic City, Peenya, Krishnarajapuram, Bellandur, Narasapura, Rajajinagar and Mahadevapura. Bangalore is one of India's favoured business destinations and hosts the fifth-largest number of Fortune companies among Indian cities, after Mumbai, Delhi, Kolkata and Chennai.
The growth of IT has presented the city with unique challenges. Ideological clashes sometimes occur between the city's IT moguls, who demand an improvement in the city's infrastructure, and the state government, whose electoral base is primarily the people in rural Karnataka. The encouragement of high-tech industry in Bangalore, for example, has not favoured local employment development, but has instead increased land values and forced out small enterprise. The state has also resisted the massive investments required to reverse the rapid decline in city transport, which has already begun to drive new and expanding businesses to other centres across India. Bangalore is a hub for biotechnology-related industry in India; in the year 2005, around 47% of the 265 biotechnology companies in India were located here, including Biocon, India's largest biotechnology company. With an economic growth of 10.3%, Bangalore is the second fastest-growing major metropolis in India, and is also the country's fourth-largest fast-moving consumer goods (FMCG) market. Forbes considers Bangalore one of "The Next Decade's Fastest-Growing Cities". The city is the third-largest hub for high-net-worth individuals and is home to over 10,000 dollar millionaires and about 60,000 super-rich people who have an investment surplus of and respectively.
The city is widely regarded as the Silicon Valley of Asia, since Bangalore is the largest IT hub in India. Infosys, Wipro, Mindtree, Mphasis, Flipkart and Myntra are headquartered in Bangalore. A large number of information technology companies are located in the city, which contributed 33% of India's ₹1,442 billion (US$20 billion) IT exports in 2006–07. Bangalore's IT industry is divided into three main clusters – Software Technology Parks of India (STPI); International Tech Park, Bangalore (ITPB); and Electronic City. Most of the IT companies are located in Bommanahalli, Domlur, Whitefield, Electronic City, Krishnarajapuram, Bellandur and Mahadevapura. The city developed into an IT hub owing to the presence of institutions such as Bangalore University and the Indian Institute of Science. Bangalore is also known as the Biotech Capital of India, as it hosts the headquarters of India's largest biotechnology company, Biocon. Startup companies such as Swiggy, Ola Cabs, InMobi, Quickr and RedBus are also based in the city.
Bangalore is a favoured destination for industrial development. United Breweries Group is headquartered in Bangalore. The city is an automobile hub: Tata Hitachi Construction Machinery, Mahindra Electric, Bharat Earth Movers, Toyota Kirloskar Motor, Tesla India and Ather Energy have their headquarters or operations in Bangalore. Robert Bosch GmbH, Mercedes-Benz, Volvo, General Motors, Royal Enfield, Honda Motorcycle and Scooter India, Scania AB and Larsen & Toubro have plants and research and development (R&D) centres around Bangalore. ABB, General Electric and Tyco International have their research and development centres in Bangalore. Aerospace industries are also prominent around Bangalore, which has made it the aviation capital of India: Airbus, Boeing, Tata Advanced Systems, the Indian Space Research Organisation and Liebherr Aerospace have units in Bangalore. Bangalore has also emerged as an electronics and hardware manufacturing hub of India, housing Dell, Nokia, Philips and Wistron manufacturing and R&D units. Public sector undertakings (PSUs) based in the city include Bharat Electronics Limited (BEL), Hindustan Aeronautics Limited (HAL), National Aerospace Laboratories (NAL), Bharat Earth Movers Limited (BEML), Central Manufacturing Technology Institute (CMTI), HMT (formerly Hindustan Machine Tools) and Rail Wheel Factory (RWF). SKF also has a plant in Bangalore.
Transport
Air
Bangalore is served by Kempegowda International Airport, located at Devanahalli, about from the city centre. It was formerly called Bangalore International Airport. The airport began operations on 24 May 2008 and is a private airport managed by a consortium led by the GVK Group. The city was earlier served by the HAL Airport at Vimanapura, a residential locality in the eastern part of the city. The airport is the third-busiest in India after Delhi and Mumbai in terms of passenger traffic and the number of air traffic movements (ATMs). Taxis and air-conditioned Volvo buses operated by BMTC connect the airport with the city.
Namma Metro (Rail)
A rapid transit system called the Namma Metro is being built in stages. Initially opened with the stretch from Baiyappanahalli to MG Road in 2011, phase 1 covering a distance of for the north–south and east–west lines was made operational in June 2017. Phase 2 of the metro covering is under construction and includes two new lines along with the extension of the existing north–south and east–west lines. There are also plans to extend the north–south line to the airport, covering a distance of . It is expected to be operational by 2021. Bangalore is a divisional headquarters in the South Western Railway zone of the Indian Railways. There are four major railway stations in the city: Krantiveera Sangolli Rayanna Railway Station, Bangalore Cantonment railway station, Yeshwantapur Junction and Krishnarajapuram railway station, with railway lines towards Jolarpettai in the east, Guntakal in the north, Kadapa (only operational till Kolar) in the northeast, Tumkur in the northwest, Hassan and Mangalore in the west, Mysore in the southwest and Salem in the south. There is also a railway line from Baiyappanahalli to Vimanapura which is no longer in use. Though Bangalore has no commuter rail at present, there have been demands for a suburban rail service keeping in mind the large number of employees working in the IT corridor areas of Whitefield, Outer Ring Road and Electronics City. The Rail Wheel Factory is Asia's second-largest manufacturer of wheel and axle for railways and is headquartered in Yelahanka, Bangalore.
Road
Buses operated by the Bangalore Metropolitan Transport Corporation (BMTC) are an important and reliable means of public transport available in the city. While commuters can buy tickets on boarding these buses, BMTC also provides an option of a bus pass to frequent users. BMTC runs air-conditioned luxury buses on major routes, and also operates shuttle services from various parts of the city to Kempegowda International Airport. The BMTC also has a mobile app that provides the real-time location of a bus using the global positioning system of the user's mobile device. The Karnataka State Road Transport Corporation (KSRTC) operates 6,918 buses on 6,352 schedules, connecting Bangalore with other parts of Karnataka as well as with neighbouring states. The main bus depot that KSRTC maintains is the Kempegowda Bus Station, locally known as the "Majestic bus stand", from where most outstation buses operate. Some of the KSRTC buses to Tamil Nadu, Telangana and Andhra Pradesh ply from Shantinagar Bus Station, the Satellite Bus Station at Mysore Road and the Baiyappanahalli satellite bus station. BMTC and KSRTC were the first operators in India to introduce Volvo city buses and intracity coaches. Three-wheeled, yellow-and-black or yellow-and-green auto-rickshaws, referred to as autos, are a popular form of transport. They are metered and can accommodate up to three passengers. Taxis, commonly called City Taxis, are also available, but only on call or through online services. Taxis are metered and are generally more expensive than auto-rickshaws.
An average of 1,250 vehicles is registered daily at Bangalore's RTOs. The total number of vehicles as of 2020 was around 85 lakh (8.5 million), with a road length of .
Culture
Bangalore is known as the "Garden City of India" because of its greenery, broad streets and the presence of many public parks, such as Lal Bagh and Cubbon Park. Bangalore is sometimes called the "Pub Capital of India" and the "Rock/Metal Capital of India" because of its underground music scene, and it is one of the premier places to hold international rock concerts. In May 2012, Lonely Planet ranked Bangalore third among the world's top ten cities to visit.
Bangalore is also home to many vegan-friendly restaurants and vegan activism groups, and has been named as India's most vegan-friendly city by PETA India.
Biannual flower shows are held at the Lal Bagh Gardens during the week of Republic Day (26 January) and Independence Day (15 August). Bangalore Karaga or "Karaga Shaktyotsava" is one of the most important and oldest festivals of Bangalore dedicated to the Hindu Goddess Draupadi. It is celebrated annually by the Thigala community, over a period of nine days in the month of March or April. The Someshwara Car festival is an annual procession of the idol of the Halasuru Someshwara Temple (Ulsoor) led by the Vokkaligas, a major land holding community in the southern Karnataka, occurring in April. Karnataka Rajyotsava is widely celebrated on 1 November and is a public holiday in the city, to mark the formation of Karnataka state on 1 November 1956. Other popular festivals in Bangalore are Ugadi, Ram Navami, Eid ul-Fitr, Ganesh Chaturthi, St. Mary's feast, Dasara, Deepawali and Christmas.
The diversity of cuisine is reflective of the social and economic diversity of Bangalore. Bangalore has a wide and varied mix of restaurant types and cuisines and Bangaloreans deem eating out as an intrinsic part of their culture. Roadside vendors, tea stalls, and South Indian, North Indian, Chinese and Western fast food are all very popular in the city. Udupi restaurants are very popular and serve predominantly vegetarian, regional cuisine.
Art and literature
Compared to Delhi and Mumbai, Bangalore lacked an effective contemporary art scene until the 1990s, when several art galleries sprang up, notably the government-established National Gallery of Modern Art. Bangalore's international art festival, Art Bangalore, was established in 2010.
Kannada literature appears to have flourished in Bangalore even before Kempe Gowda laid the foundations of the city. During the 18th and 19th centuries, Kannada literature was enriched by the Vachanas (a form of rhythmic writing) composed by the heads of the Veerashaiva Mathas (monastery) in Bangalore. As a cosmopolitan city, Bangalore has also encouraged the growth of Telugu, Urdu, and English literatures. The headquarters of the Kannada Sahitya Parishat, a nonprofit organisation that promotes the Kannada language, is located in Bangalore. The city has its own literary festival, known as the "Bangalore Literature Festival", which was inaugurated in 2012.
Indian Cartoon Gallery
The Indian Cartoon Gallery, located in the heart of Bangalore and dedicated to the art of cartooning, is the first of its kind in India. Every month the gallery conducts a fresh exhibition of cartoons by various professional as well as amateur cartoonists. The gallery is organised by the Indian Institute of Cartoonists, based in Bangalore, which serves to promote and preserve the work of eminent cartoonists in India. The institute has organised more than one hundred exhibitions of cartoons.
Theatre, music, and dance
Bangalore is home to the Kannada film industry, which produces about 80 Kannada feature films each year. Bangalore also has a very active and vibrant theatre culture, with popular theatres being Ravindra Kalakshetra and Ranga Shankara. The city has a vibrant English and foreign-language theatre scene, with venues like Ranga Shankara and Chowdiah Memorial Hall leading the way in hosting performances, which has also contributed to the establishment of an amateur film industry.
Kannada theatre is very popular in Bangalore, and consists mostly of political satire and light comedy. Plays are organised mostly by community organisations, but there are some amateur groups which stage plays in Kannada. Drama companies touring India under the auspices of the British Council and Max Müller Bhavan also stage performances in the city frequently. The Alliance Française de Bangalore also hosts numerous plays through the year.
Bangalore is also a major centre of Indian classical music and dance. The cultural scene is very diverse due to Bangalore's mixed ethnic groups, which is reflected in its music concerts, dance performances and plays. Performances of Carnatic (South Indian) and Hindustani (North Indian) classical music, and dance forms like Bharat Natyam, Kuchipudi, Kathakali, Kathak, and Odissi are very popular. Yakshagana, a theatre art indigenous to coastal Karnataka is often played in town halls. The two main music seasons in Bangalore are in April–May during the Ram Navami festival, and in September–October during the Dusshera festival, when music activities by cultural organisations are at their peak. Though both classical and contemporary music are played in Bangalore, the dominant music genre in urban Bangalore is rock music. Bangalore has its own subgenre of music, "Bangalore Rock", which is an amalgamation of classic rock, hard rock and heavy metal, with a bit of jazz and blues in it. Notable bands from Bangalore include Raghu Dixit Project, Kryptos, Inner Sanctum, Agam, All the fat children, and Swaratma.
The city hosted the Miss World 1996 beauty pageant.
Education
Schools
Until the early 19th century, education in Bangalore was mainly run by religious leaders and restricted to students of that religion. The western system of education was introduced during the rule of Mummadi Krishnaraja Wodeyar. Subsequently, the British Wesleyan Mission established the first English school in 1832, known as the Wesleyan Canarese School. The fathers of the Paris Foreign Missions established the St. Joseph's European School in 1858. The Bangalore High School was started by the Mysore government in 1858, and the Bishop Cotton Boys' School was started in 1865. In 1945, when World War II came to an end, the King George Royal Indian Military College was started at Bangalore by King George VI; the school is popularly known as the Bangalore Military School.
In post-independence India, schools for young children (16 months–5 years) are called nursery, kindergarten or play school, and are broadly based on Montessori or multiple intelligence methodologies of education. Primary, middle school and secondary education in Bangalore is offered by various schools which are affiliated to one of the government or government-recognised private boards of education, such as the Secondary School Leaving Certificate (SSLC), Central Board of Secondary Education (CBSE), Council for the Indian School Certificate Examinations (CISCE), International Baccalaureate (IB), International General Certificate of Secondary Education (IGCSE) and National Institute of Open Schooling (NIOS). Schools in Bangalore are either government-run or private (both aided and unaided by the government). Bangalore has a significant number of international schools due to its expatriate and IT workforce. After completing their secondary education, students either attend a Pre-University Course or continue an equivalent high school course in one of three streams – arts, commerce or science – with various combinations. Alternatively, students may also enrol in diploma courses. Upon completing the required coursework, students enrol in general or professional degrees at universities through lateral entry.
Below are some of the historical schools in Bangalore and their year of establishment.
St John's High School (1854)
United Mission School (1832)
Goodwill's Girls School (1855)
St. Joseph's Boys' High School (1858)
Bishop Cotton Boys' School (1865)
Bishop Cotton Girls' School (1865)
Cathedral High School (1866)
Baldwin Boys' High School (1880)
Baldwin Girls' High School (1880)
St. Joseph's Indian High School (1904)
St Anthony's Boys' School (1913)
Clarence High School (1914)
National High School (1917)
St. Germain High School (1944)
Bangalore Military School (1946)
Sophia High School (1949)
Universities
The Central College of Bangalore, established in 1858, is the oldest college in the city. It was originally affiliated to the University of Mysore and subsequently to Bangalore University. Later, in 1882, priests from the Paris Foreign Missions Society established St. Joseph's College. Bangalore University was established in 1886; it provides affiliation to over 500 colleges, with a total student enrolment exceeding 300,000. The university has two campuses within Bangalore – Jnanabharathi and Central College. University Visvesvaraya College of Engineering was established in 1917 by M. Visvesvaraya; at present, UVCE is the only engineering college under Bangalore University. Bangalore also has many private engineering colleges affiliated to Visvesvaraya Technological University.
Some of the professional institutes in Bangalore are:
Bangalore Medical College and Research Institute
Garden City University
Indian Institute of Astrophysics
Indian Institute of Management Bangalore
Indian Institute of Science, which was established in 1909 in Bangalore
Indian Statistical Institute
Institute of Wood Science and Technology
International Institute of Information Technology, Bangalore
Jagdish Sheth School of Management
Jawaharlal Nehru Centre for Advanced Scientific Research
National Centre for Biological Sciences
National Institute of Design
National Institute of Fashion Technology
National Institute of Mental Health and Neurosciences
National Law School of India University
Raman Research Institute
Sri Jayadeva Institute of Cardiovascular Sciences and Research
University of Agricultural Sciences, Bangalore
Some private institutions in Bangalore include Symbiosis International University, SVKM's NMIMS, CMR University, Christ University, Jain University, PES University, Dayananda Sagar University and Ramaiah University of Applied Sciences. Private medical colleges include St. John's Medical College, M. S. Ramaiah Medical College, Kempegowda Institute of Medical Sciences, and Vydehi Institute of Medical Sciences and Research Centre. The M. P. Birla Institute of Fundamental Research has a branch located in Bangalore.
Media
The first printing press in Bangalore was established in 1840 in Kannada by the Wesleyan Christian Mission. In 1859, the Bangalore Herald became the first English bi-weekly newspaper to be published in Bangalore, and in 1860, Mysore Vrittanta Bodhini became the first Kannada newspaper to be circulated in Bangalore. Vijaya Karnataka and The Times of India are the most widely circulated Kannada and English newspapers in Bangalore respectively, closely followed by Prajavani and Deccan Herald, both owned by the Printers (Mysore) Limited – the largest print media house in Karnataka. Other newspapers in circulation, such as Vijayvani, Vishwavani, Kannadaprabha, Sanjevani, Bangalore Mirror and Udayavani, provide localised news updates. On the web, Explocity provides listings information in Bangalore.
Bangalore got its first radio station when All India Radio, the official broadcaster for the Indian Government, started broadcasting from its Bangalore station on 2 November 1955. Radio transmission was AM only until 2001, when Radio City became the first private channel in India to start transmitting FM radio from Bangalore. In recent years, a number of FM channels have started broadcasting from Bangalore. The city probably has India's oldest amateur (ham) radio club – the Bangalore Amateur Radio Club (VU2ARC), which was established in 1959.
Bangalore got its first look at television when Doordarshan established a relay centre here and started relaying programs from 1 November 1981. A production centre was established in the Doordarshan's Bangalore office in 1983, thereby allowing the introduction of a news program in Kannada on 19 November 1983. Doordarshan also launched a Kannada satellite channel on 15 August 1991 which is now named DD Chandana. The advent of private satellite channels in Bangalore started in September 1991 when Star TV started to broadcast its channels. Though the number of satellite TV channels available for viewing in Bangalore has grown over the years, the cable operators play a major role in the availability of these channels, which has led to occasional conflicts. Direct To Home (DTH) services also became available in Bangalore from around 2007.
The first Internet service provider in Bangalore was STPI, which started offering internet services in the early 1990s. This Internet service was, however, restricted to corporates until VSNL started offering dial-up internet services to the general public at the end of 1995. Bangalore has the largest number of broadband Internet connections in India.
Namma Wifi is a free municipal wireless network in Bangalore, the first free WiFi in India. It began operations on 24 January 2014. Service is available at M.G. Road, Brigade Road, and other locations. The service is operated by D-VoiS and is paid for by the State Government. Bangalore was the first city in India to have the 4th Generation Network (4G) for Mobile.
Sports
Cricket and football are by far the most popular sports in the city. Bangalore has many parks and gardens that provide excellent pitches for impromptu games. A significant number of national cricketers have come from Bangalore, including former captains Rahul Dravid and Anil Kumble. Some of the other notable players from the city who have represented India include Gundappa Viswanath, Syed Kirmani, E. A. S. Prasanna, B. S. Chandrasekhar, Roger Binny, Venkatesh Prasad, Sunil Joshi, Robin Uthappa, Vinay Kumar, KL Rahul, Karun Nair, Mayank Agarwal, Brijesh Patel and Stuart Binny. Bangalore's international cricket stadium is the M. Chinnaswamy Stadium, which has a seating capacity of 55,000 and has hosted matches during the 1987 Cricket World Cup, 1996 Cricket World Cup and the 2011 Cricket World Cup. The Chinnaswamy Stadium is the home of India's National Cricket Academy.
The Indian Premier League franchise Royal Challengers Bangalore and the Indian Super League club Bengaluru FC are based in the city. It hosted some games of the 2014 Unity World Cup. The I-League 2nd Division clubs FC Bengaluru United, Ozone FC and South United FC are also based in Bangalore.
The city hosts the Women's Tennis Association (WTA) Bangalore Open tournament annually. Beginning September 2008, Bangalore has also been hosting the Kingfisher Airlines Tennis Open ATP tournament annually.
Bangalore is home to the Bangalore rugby football club (BRFC). It has a number of elite clubs, like Century Club, The Bangalore Golf Club, the Bowring Institute and the exclusive Bangalore Club, which counts among its previous members Winston Churchill and the Maharaja of Mysore.
India's Davis Cup team members, Mahesh Bhupathi and Rohan Bopanna reside in Bangalore. Other sports personalities from Bangalore include national swimming champion Nisha Millet, world snooker champion Pankaj Advani and former All England Open badminton champion Prakash Padukone.
Bangalore's Kanteerava Indoor Stadium hosted the SABA Championship in 2015 and 2016. India's national basketball team won the gold medal on both occasions. Bangalore is home to Bengaluru Beast, 2017 vice champion of India's top professional basketball division, the UBA Pro Basketball League.
The Kanteerava Indoor Stadium and the Sheraton Grand have hosted various kabaddi matches, including the entire Pro Kabaddi League Season 8. Bengaluru Bulls is one of the teams in this league.
Sister cities
Minsk, Belarus (1973)
Cleveland, Ohio, United States (1992)
San Francisco, California, United States (2008)
Chengdu, Sichuan, China (2013)
See also
List of people from Bangalore
List of neighbourhoods in Bangalore
List of tallest buildings in Bangalore
List of tourist attractions in Bangalore
List of Chola temples in Bangalore
Taluks of Bangalore
Tourism in Karnataka
References
Works cited
Further reading
External links
Bruhat Bengaluru Mahanagara Palike
Official website of Bangalore Development Authority
"Bengaluru"—Encyclopædia Britannica entry
1537 establishments in India
Cities and towns in Bangalore Urban district
Cities in Karnataka
High-technology business districts in India
Indian capital cities
Metropolitan cities in India
Populated places established in 1537
|
1636344
|
https://en.wikipedia.org/wiki/Workspot
|
Workspot
|
Workspot was the first Linux desktop Web Service, i.e. it provided Open Source personal computing without computer ownership. Founded in 1999 by Greg Bryant, Gal Cohen, Kathy Giori, Curt Brune, Benny Soetarman, Bruce Robertson and Asao Kamei, it was the first application service to make use of Virtual Network Computing. Workspot also hosted a free Linux Desktop demo using VNC: 'one-click to Linux'. It eventually began to charge for a remote, web-accessible, persistent desktop and several desktop collaboration features. Workspot won Linux Journal's Best Web Application award for 2000. Badly hit by the dotcom crash, it ceased activity by 2005.
Workspot was based in downtown Palo Alto, California during the dotcom boom, and funded its free desktop service through wireless contracting: they may have been the first mobile web app shop, involved in creating the first mobile apps for Google, eBay, Barnes & Noble, Amazon, Metro Traffic etc., as well as client-server software for OmniSky and Palm.
Workspot released AES encryption patches for VNC.
Workspot's domain and name was sold in 2013 to Workspot, Inc.
References
Virtual Network Computing
|
8407820
|
https://en.wikipedia.org/wiki/Jeremy%20Hammond
|
Jeremy Hammond
|
Jeremy Hammond (born January 8, 1985) is an American activist and computer hacker from Chicago. He founded the computer security training website HackThisSite in 2003. He was first imprisoned over the Protest Warrior hack in 2005 and was later convicted of computer fraud in 2013 for hacking the private intelligence firm Stratfor and releasing data to WikiLeaks, and sentenced to 10 years in prison.
In 2019, he was summoned before a Virginia federal grand jury which was investigating WikiLeaks and its founder Julian Assange. He was held in civil contempt of court after refusing to testify.
He was released from prison in November 2020.
Early life
Hammond was raised in the Chicago suburb of Glendale Heights, Illinois, with his twin brother Jason. Hammond became interested in computers at an early age, programming video games in QBasic by age eight, and building databases by age thirteen. As a student at Glenbard East High School in the nearby suburb of Lombard, Hammond won first place in a district-wide science competition for a computer program he designed. Also in high school, he became a peace activist, organizing a student walkout on the day of the Iraq invasion and starting a student newspaper to oppose the Iraq War. His high school principal described Hammond as "old beyond his years".
Hammond attended the University of Illinois at Chicago. In the spring of 2004, during his freshman year, he exploited a security flaw on the computer science department's website and went to department administrators, offering to help fix the security flaws on the site and looking to get a job. For inserting a backdoor, Hammond was called before the department chair and ultimately banned from returning for his sophomore year.
Jeremy, along with his brother Jason, has had a lifelong interest in music, performing in numerous bands through the years. Before Jeremy's arrests, they were both actively performing in the Chicago ska band Dirty Surgeon Insurgency.
Hammond worked as a Mac technician in Villa Park, Illinois. He also worked as a web developer for Chicago-based Rome & Company. His boss at Rome & Company wrote in 2010 that Hammond is "friendly, courteous and polite and while we suspect he has a low tolerance for corporate posturing, he has never demonstrated any contempt for business in the workplace".
Computer hacking and activism
Hammond founded the computer security training website HackThisSite at age 18, during the summer after his high school graduation. The website describes itself as "a non-profit organization that strives to protect a good security culture and learning atmosphere". In its first two years the site received 2.5 million hits and acquired 110,000 members and a volunteer staff of 34.
During the 2004 DEF CON event in Las Vegas, Hammond delivered a talk that encouraged "electronic civil disobedience" as a means of protest against the 2004 Republican National Convention and its supporters.
In February 2005, Hammond, with others, hacked the website of pro-war counterprotesting group Protest Warrior and accessed thousands of credit card numbers, intending to use them to donate to left-wing groups. Although no charges were ever made against the cards, Hammond confessed and was sentenced to two years in federal prison for the crime. Freed after 18 months, Hammond was radicalized by the experience, although the terms of his probation prohibited him from associating with HackThisSite or anarchist groups for another three years.
Stratfor case
On March 5, 2012, Hammond was arrested by Federal Bureau of Investigation (FBI) agents in the Bridgeport neighborhood of Chicago for his involvement in the December 2011 cyberattack on the servers of Stratfor, a private intelligence firm. The intrusion compromised 60,000 credit card numbers, led to $700,000 in fraudulent charges, and involved the download of 5 million emails, some of which were subsequently published by WikiLeaks. The indictment was unsealed the following day in the Manhattan federal district court. He was one of six individuals from the United States, England and Ireland who were indicted.
The FBI were led to Hammond through information given by computer hacker Hector Xavier Monsegur ("Sabu"), who became a government informant immediately after his arrest in early 2011, and subsequently pleaded guilty in August 2011 to twelve counts of hacking, fraud, and identity theft. Although Monsegur could have received a sentence of more than 20 years in prison, prosecutors asked that he be sentenced to time served, which was seven months in prison. Information from Monsegur helped lead the authorities to at least eight co-conspirators, including Hammond, and helped to disrupt at least 300 cyberattacks.
The case was prosecuted by the office of Preet Bharara, the United States Attorney for the Southern District of New York. Hammond was represented by Elizabeth Fink.
Hammond was detained pending trial; in denying bail, Judge Loretta A. Preska described him as "a very substantial danger to the community." In February 2013, the defense filed a motion asking presiding Judge Preska to recuse herself from the case on the basis that Preska's husband, Thomas Kavaler, had an email address released in the Stratfor disclosure and worked with Stratfor clients that were affected by the hack. Hammond's legal team argued that this created an "appearance of partiality too strong to be disregarded". Preska denied the motion, claiming that the connections were inconsequential or unimportant.
In May 2013, Hammond pleaded guilty to one count of violating the Computer Fraud and Abuse Act (CFAA). Upon his guilty plea, Hammond issued a statement saying, "I did work with Anonymous to hack Stratfor, among other websites" and "I did what I believe is right." He maintained that he had no profit motive for the cyberattack. Hammond has insisted that he would not have carried out the breach of Stratfor's systems without the involvement of Sabu. Hammond was sentenced on November 15, 2013, to the maximum of ten years in prison, followed by three years of supervised release. He described his prosecution and sentence as a "vengeful, spiteful act." On November 17, 2020, Hammond was released from the Memphis Federal Correctional Institution and was transferred to a recovery house to serve the rest of his sentence.
In October 2019, Hammond was summoned before a Virginia federal grand jury which was investigating WikiLeaks and its founder Julian Assange. He was held in civil contempt of court by Judge Anthony Trenga after refusing to testify. Prosecutors granted Hammond immunity from prosecution based on any grand jury testimony, so Hammond could not refuse to testify on the ground of his right against self-incrimination. Like Chelsea Manning (who was also held in contempt for refusing to testify), Hammond said he was ideologically opposed to any grand jury probe which was not being conducted in "good faith" as the government already had the information it needed. In making his contempt ruling, Trenga stated that Hammond's arguments against testifying were "self-serving assertions … without support." Trenga ordered Hammond released in March 2020 after the conclusion of the grand jury, saying that prosecutors no longer required his testimony. Hammond was returned to federal prison to serve the balance of his 10-year sentence. Hammond might have received an early release in December 2019 had the grand jury not intervened.
Personal life
Hammond frequently identifies as an anarchist and has a shoulder tattoo of the anarchy symbol with the words: "Freedom, equality, anarchy." Writing after his arrest, Hammond said, "I have always made it clear that I am an anarchist-communist – as in I believe we need to abolish capitalism and the state in its entirety to realize a free, egalitarian society. I'm not into watering down or selling out the message or making it more marketable for the masses."
See also
Direct action
Hacktivism
References
Further reading
External links
Hammond, Jeremy. DEF CON 2004, Las Vegas. "Electronic Civil Disobedience"
Free Jeremy Hammond
1985 births
Living people
American anarchists
American anti-capitalists
American anti–Iraq War activists
American political activists
Anarcho-communists
Hackers
Hacktivists
Internet activists
People convicted of cybercrime
People from Chicago
People from Manchester, Kentucky
Prisoners and detainees of the United States federal government
|
667349
|
https://en.wikipedia.org/wiki/Dynamic%20systems%20development%20method
|
Dynamic systems development method
|
Dynamic systems development method (DSDM) is an agile project delivery framework, initially used as a software development method. First released in 1994, DSDM originally sought to provide some discipline to the rapid application development (RAD) method. In later versions the DSDM Agile Project Framework was revised and became a generic approach to project management and solution delivery rather than being focused specifically on software development and code creation and could be used for non-IT projects. The DSDM Agile Project Framework covers a wide range of activities across the whole project lifecycle and includes strong foundations and governance, which set it apart from some other Agile methods. The DSDM Agile Project Framework is an iterative and incremental approach that embraces principles of Agile development, including continuous user/customer involvement.
DSDM fixes cost, quality and time at the outset and uses the MoSCoW prioritisation of scope into musts, shoulds, coulds and will not haves to adjust the project deliverable to meet the stated time constraint. DSDM is one of a number of agile methods for developing software and non-IT solutions, and it forms a part of the Agile Alliance.
In 2014, DSDM released the latest version of the method in the 'DSDM Agile Project Framework'. At the same time the new DSDM manual recognised the need to operate alongside other frameworks for service delivery (especially ITIL), PRINCE2, Managing Successful Programmes, and PMI. The previous version (DSDM 4.2) had only contained guidance on how to use DSDM with extreme programming.
History of DSDM
In the early 1990s, rapid application development (RAD) was spreading across the IT industry. The user interfaces for software applications were moving from the old green screens to the graphical user interfaces that are used today. New application development tools were coming on the market, such as PowerBuilder. These enabled developers to share their proposed solutions much more easily with their customers – prototyping became a reality and the frustrations of the classical, sequential (waterfall) development methods could be put to one side.
However, the RAD movement was very unstructured: there was no commonly agreed definition of a suitable process and many organizations came up with their own definition and approach. Many major corporations were very interested in the possibilities, but they were also concerned about losing the level of quality in end deliverables that free-flow development could give rise to.
The DSDM Consortium was founded in 1994 by an association of vendors and experts in the field of software engineering and was created with the objective of "jointly developing and promoting an independent RAD framework" by combining their best practice experiences. The origins were an event organized by the Butler Group in London. People at that meeting all worked for blue-chip organizations such as British Airways, American Express, Oracle, and Logica (other companies such as Data Sciences and Allied Domecq have since been absorbed by other organizations).
In July 2006, DSDM Public Version 4.2 was made available for individuals to view and use; however, anyone reselling DSDM must still be a member of the not-for-profit consortium.
In 2014, the DSDM handbook was made available online and public. Additionally, templates for DSDM can be downloaded.
In October 2016 the DSDM Consortium rebranded as the Agile Business Consortium (ABC). The Agile Business Consortium is a not-for-profit, vendor-independent organisation which owns and administers the DSDM framework.
DSDM
DSDM is a vendor-independent approach that recognises that more projects fail because of people problems than technology. DSDM’s focus is on helping people to work effectively together to achieve the business goals. DSDM is also independent of tools and techniques enabling it to be used in any business and technical environment without tying the business to a particular vendor.
Principles
There are eight principles underpinning DSDM. These principles direct the team in the attitude they must take and the mindset they must adopt to deliver consistently.
Focus on the business need
Deliver on time
Collaborate
Never compromise quality
Build incrementally from firm foundations
Develop iteratively
Communicate continuously and clearly
Demonstrate control
Core techniques
Timeboxing: the approach of completing the project incrementally by breaking it down into portions, each with a fixed budget and a delivery date. For each portion a number of requirements are prioritised and selected. Because time and budget are fixed, the only remaining variable is the requirements. So if a project is running out of time or money, the requirements with the lowest priority are omitted. This does not mean that an unfinished product is delivered: following the Pareto principle that 80% of a project's value comes from 20% of the system requirements, as long as the most important 20% of requirements are implemented, the system meets the business needs; no system is built perfectly on the first try.
MoSCoW: is a technique for prioritising work items or requirements. It is an acronym that stands for:
Must have
Should have
Could have
Won't have
Prototyping: refers to the creation of prototypes of the system under development at an early stage of the project. It enables the early discovery of shortcomings in the system and allows future users to ‘test-drive’ the system. This way good user involvement is realised, one of the key success factors of DSDM, or any system development project for that matter.
Testing: to help ensure a solution of good quality, DSDM advocates testing throughout each iteration. Since DSDM is a tool and technique independent method, the project team is free to choose its own test management method.
Workshop: brings project stakeholders together to discuss requirements, functionalities and mutual understanding.
Modeling: helps visualise a business domain and improve understanding. Produces a diagrammatic representation of specific aspects of the system or business area that is being developed.
Configuration management: with multiple deliverables under development at the same time and being delivered incrementally at the end of each time-box, the deliverables need to be well managed towards completion.
Roles
DSDM introduces a number of roles within the project environment. Project members need to be appointed to these roles before the project commences. Each role has its own responsibility. The roles are:
Executive sponsor: Also called the project champion. An important role from the user organisation with the ability and responsibility to commit appropriate funds and resources. This role has the ultimate power to make decisions.
Visionary: Has the responsibility to initialise the project by ensuring that essential requirements are found early on. The visionary has the most accurate perception of the business objectives of the system and the project. Another task is to supervise the development process and keep it on the right track.
Ambassador user: Brings the knowledge of the user community into the project, ensures that the developers receive enough user feedback during the development process.
Advisor user: Can be any user that represents an important viewpoint and brings daily knowledge of the project.
Project manager: Can be anyone from the user community or IT staff who manages the project in general.
Technical co-ordinator: Responsible for designing the system architecture and controlling the technical quality of the project.
Team leader: Leads their team and ensures that the team works effectively as a whole.
Solution developer: Interprets the system requirements and models them, including developing the deliverable code and building the prototypes.
Solution tester: Checks technical correctness by performing tests, raising defects where necessary and retesting once they are fixed. The tester also provides comments and documentation.
Scribe: Responsible for gathering and recording the requirements, agreements, and decisions made in every workshop.
Facilitator: Responsible for managing the workshops' progress, acts as a motivator for preparation and communication.
Specialist roles: Business architect, quality manager, system integrator, etc.
Critical success factors
Within DSDM a number of factors are identified as being of great importance to ensure successful projects.
Factor 1: First there is the acceptance of DSDM by senior management and other employees. This ensures that the different actors of the project are motivated from the start and remain involved throughout the project.
Factor 2: Directly derived from factor 1: The commitment of the management to ensure end-user involvement. The prototyping approach requires a strong and dedicated involvement by end users to test and judge the functional prototypes.
Factor 3: The project team has to be composed of skillful members that form a stable union. An important issue is the empowerment of the project team. This means that the team (or one or more of its members) has to possess the power and possibility to make important decisions regarding the project without having to write formal proposals to higher management, which can be very time-consuming. In order to enable the project team to run a successful project, they also need the appropriate technology to conduct the project. This means a development environment, project management tools, etc.
Factor 4: Finally, DSDM also states that a supportive relationship between customer and vendor is required. This goes both for projects that are realised internally within companies and for those realised by external contractors. An aid in ensuring a supportive relationship could be ISPL.
Comparison to other development frameworks
DSDM can be considered as part of a broad range of iterative and incremental development frameworks, especially those supporting agile and object-oriented methods. These include (but are not limited to) scrum, extreme programming (XP), disciplined agile delivery (DAD), and Rational Unified Process (RUP).
Like DSDM, these share the following characteristics:
They all prioritise requirements and work through them iteratively, building a system or product in increments.
They are tool-independent frameworks. This allows users to fill in the specific steps of the process with their own techniques and software aids of choice.
The variables in the development are not time/resources, but the requirements. This approach supports the main goals of DSDM, namely staying within the deadline and the budget.
A strong focus on communication between and the involvement of all the stakeholders in the system. Although this is addressed in other methods, DSDM strongly believes in commitment to the project to ensure a successful outcome.
See also
Lean software development
Agile software development
References
Further reading
Coleman and Verbruggen: A quality software process for rapid application development, Software Quality Journal 7, p. 107-1222 (1998)
Beynon-Davies and Williams: The diffusion of information systems development methods, Journal of Strategic Information Systems 12 p. 29-46 (2003)
Sjaak Brinkkemper, Saeki and Harmsen: Assembly Techniques for Method Engineering, Advanced Information Systems Engineering, Proceedings of CaiSE'98, Springer Verlag (1998)
Abrahamsson, Salo, Ronkainen, Warsta Agile Software Development Methods: Review and Analysis, VTT Publications 478, p. 61-68 (2002)
Tuffs, Stapleton, West, Eason: Inter-operability of DSDM with the Rational Unified Process, DSDM Consortium, Issue 1, p. 1-29 (1999)
Rietmann: DSDM in a bird’s eye view, DSDM Consortium, p. 3-8 (2001)
Chris Barry, Kieran Conboy, Michael Lang, Gregory Wojtkowski and Wita Wojtkowski: Information Systems Development: Challenges in Practice, Theory, and Education, Volume 1
Keith Richards: Agile Project Management: running PRINCE2 projects with DSDM Atern, TSO (2007)
The DSDM Agile Project Framework (2014)
DSDM Agile Project Management Framework (v6, 2014) interactive mind map
External links
The Agile Business Consortium (formerly, DSDM Consortium)
AgilePM wiki
|
34190
|
https://en.wikipedia.org/wiki/XFree86
|
XFree86
|
XFree86 is an implementation of the X Window System. It was originally written for Unix-like operating systems on IBM PC compatibles and was available for many other operating systems and platforms. It is free and open source software under the XFree86 License version 1.1. It was developed by the XFree86 Project, Inc. The lead developer was David Dawes. The last released version was 4.8.0, released December 2008. The last XFree86 CVS commit was made on May 18, 2009; the project was confirmed dormant in December 2011.
For most of the 1990s and early 2000s, the project was the source of most innovation in X and was the de facto steward of X development. Until early 2004, it was almost universal on Linux and the BSDs.
In February 2004, with version 4.4.0, The XFree86 Project began distributing new code with a copyright license that the Free Software Foundation considered GPL incompatible. Most open source operating systems using XFree86 found this unacceptable and moved to a fork from before the license change. The first fork was the abortive Xouvert, but X.Org Server soon became dominant. Most XFree86 developers also moved to X.Org.
Usage
While XFree86 was widely used by most Unix-like computer operating systems before its license change with version 4.4.0, it has since been superseded by X.org and is rarely used nowadays. The last remaining operating system distribution to use it was NetBSD, which shipped some platforms with 4.5.0 by default until removing it as obsolete in 2015. Later NetBSD releases use X.org by default on various ports (including i386 and amd64), and X.org is available through NetBSD pkgsrc for architectures on which XFree86 remained the default because of better support.
The netbsd-7 branch and release were the last to potentially contain XFree86; it was completely removed before the netbsd-8 branch and release in 2018.
Architecture
The XFree86 server communicates with the host operating system's kernel to drive input and output devices, with the exception of graphics cards. These are generally managed directly by XFree86, so it includes its own drivers for all graphic cards a user might have. Some cards are supported by vendors themselves via binary-only drivers.
Since version 4.0, XFree86 has supported certain accelerated 3D graphics cards via the GLX and DRI extensions. Also in the version 4.0, XFree86 moved to a new driver model, from one X server binary per driver to a unique X server capable of loading several drivers at a time.
Because the server usually needs low level access to graphics hardware, on many configurations it needs to run as the superuser, or a user with UID 0. However, on some systems and configurations it is possible to run the server as a normal user.
It is also possible to use XFree86 in a framebuffer device, which in turn uses a kernel graphics card driver.
On a typical POSIX-system, the directory /etc/X11 includes the configuration files. The basic configuration file is /etc/X11/XF86Config (or XF86Config-4) that includes variables about the screen (monitor), keyboard and graphics card. The program xf86config is often used, although xf86cfg also comes with the XFree86 server and is certainly friendlier. Many Linux distributions used to include a configuration tool that was easier to use (such as Debian's debconf) or autodetected most (if not all) settings (Red Hat Linux and Fedora's Anaconda, SuSE's YaST and Mandrake Linux used to choose this path).
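As an illustration only, a minimal XF86Config fragment might look as follows; the section layout follows the XFree86 4.x format, while the identifiers, driver name and display mode are placeholder values rather than settings taken from any particular system:

Section "Device"
    Identifier "Card0"
    Driver     "vesa"            # placeholder driver module
EndSection

Section "Monitor"
    Identifier "Monitor0"
EndSection

Section "Screen"
    Identifier   "Screen0"
    Device       "Card0"         # ties this screen to the device above
    Monitor      "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1024x768"         # placeholder resolution
    EndSubSection
EndSection

A real configuration would typically also contain InputDevice and ServerLayout sections, either written by hand or generated by one of the tools mentioned above.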
History
Early history and naming
The project began in 1992 when David Wexelblat, Glenn Lai, David Dawes and Jim Tsillas joined forces addressing bugs in the source code of the X386 X display server (written by Thomas Roell), as contributed to X11R5. This version was initially called X386 1.2E. As newer versions of the (originally freeware) X386 were being sold under a proprietary software license by SGCS (of which Roell was a partner), confusion existed between the projects. After discussion, the project was renamed XFree86, as a pun (compare X-three-eighty-six to X-free-eighty-six). Roell has continued to sell proprietary X servers, most recently under the name Accelerated-X.
Rise with Linux
As Linux grew in popularity, XFree86 rose with it, as the main X project with drivers for PC video cards.
By the late 1990s, official X development was moribund. Most technical advancement was happening in the XFree86 project. In 1999, XFree86 was sponsored onto X.Org (the official industry consortium) by various hardware companies interested in its use with Linux and its status as the most popular version of X.
2002: Growing dissent within the project
By 2002, while Linux's popularity, and hence the installed base of X, surged, X.Org was all but inactive; active development was largely carried out by XFree86. However, there was considerable dissent within XFree86.
XFree86 used to have a Core Team which was made up of experienced developers, selected by other Core Team members for their merits. Only the members of this Core Team were allowed to commit to CVS. This was perceived as far too cathedral-like in its development model: developers were unable to get commit rights quickly and vendors ended up maintaining extensive patches.
A key event was Keith Packard losing his commit rights. Hours before the feature freeze window for XFree86 4.3.0 started, he committed the XFIXES extension (which he developed himself), without prior discussion or without review within the Core Team. The Core Team decided to remove Keith's commit access, but without removing him from the Core Team itself, and the XFIXES extension was backed out six weeks later.
2003: The fork and the disbanding of the Core Team
In March 2003, the Core Team claimed that Packard had been trying to fork the XFree86 project by working inside the project while trying to attract core developers to a new X Server project of his own making. Packard denied this had been his aim, but some emails were provided as evidence otherwise. Keith Packard was subsequently expelled from the Core Team.
A short time later, Packard created xwin.org, which mainly served as a meeting point for cultivating the XFree86 fork. The rest of the year, many of the developers that were still active at XFree86 went over to the project that was being set up at the freedesktop.org and X.org domains.
By the end of the year, due to dwindling active membership and limited remaining development capacity, the XFree86 Core Team voted to disband itself.
2004: Licensing controversy
Versions of XFree86 up to and including some release candidates for 4.4.0 were under the MIT License, a permissive, non-copyleft free software license. In February 2004, XFree86 4.4 was released with a change to the XFree86 license, by adding a credit clause, similar to that in the original BSD license, but broader in scope. The newer terms are referred to as the XFree86 License 1.1.
Many projects relying on XFree86 found the new license unacceptable, and the Free Software Foundation considers it incompatible with version 2 of the GNU General Public License, though compatible with version 3. The XFree86 Project states that the license is "as GPL compatible as any and all previous versions were", but does not mention which version or versions of the GPL this is valid for.
Some projects made releases (notably OpenBSD 3.5 and 3.6, and Debian 3.1 "Sarge") based on XFree86 version 4.4 RC2, the last version under the old license. Most operating systems incorporating XFree86 (including later versions of OpenBSD and Debian) migrated to the X.Org Server.
The last code commit was in 2009; the project was confirmed dormant in 2011 and the website was last updated in 2014, commemorating the project's then 22nd anniversary.
Forks of XFree86
Xwin
Shortly after he was expelled from the XFree86 Core Team, Keith Packard started setting up xwin.org. While this was claimed to be the fork of XFree86, Keith Packard later refined this to "a forum for community participation in X". Xwin saw a lot of activity in the first two months after the announcements, but most of the activity was happening behind the scenes, and Keith moved his own development to freedesktop.org.
Xouvert
Xouvert was later also hailed as the first XFree86 fork in August 2003. Even though releases were announced for October 2003 and April 2004, no releases were made. The last status change was made in March 2004 and it was communicated that there were delays in setting up a revision control system.
X.Org
The X.Org Server became the official reference implementation of X11. The first version, X11R6.7.0, was forked from XFree86 version 4.4 RC2 to avoid the XFree86 license changes, with X11R6.6 changes merged in. Version X11R6.8 added many new extensions, drivers and fixes. It is hosted by and works closely with corporate-sponsored freedesktop.org.
Most of the open-source Unix-like operating systems have adopted the X.Org Server in place of XFree86, and most of the XFree86 developers have moved to X.Org.
Release history
See also
DirectFB
XFree86 logfile
XFree86 Modeline
XF86Config
References
Notes
Announcing the release of XFree86 1.1
Announcing the release of XFree86 1.2
Announcing the release of XFree86 1.3
xfree86/CHANGELOG.R5?rev=1.1.1.1
X Marks the Spot: Looking back at X11 Developments of Past Year (Oscar Boykin, OSNews February 25, 2004) — the licensing controversy and forks
The History of XFree86: Over a Decade of Development (Michael J. Hammel, Linux Magazine, December 2001)
Some perspective from the cheap seats ... (David Wexelblat, March 20, 2003) — on why Keith Packard was sacked from the core team
A Call For Open Governance Of X Development (Keith Packard, March 21, 2003)
XFree86 dust-up questions X11 model (Andrew Orlowski, The Register, March 21, 2003)
External links
Project home page
Free windowing systems
X servers
|
38190183
|
https://en.wikipedia.org/wiki/2013%20USC%20Trojans%20football%20team
|
2013 USC Trojans football team
|
The 2013 USC Trojans football team represented the University of Southern California in the 2013 NCAA Division I FBS college football season. They played their home games at Los Angeles Memorial Coliseum, and were members of the South Division of the Pac-12 Conference. They finished the season 10–4, 6–3 in Pac-12 play to finish in a tie for second place in the South Division. They were invited to the Las Vegas Bowl where they defeated Fresno State.
Head coach Lane Kiffin, who was in his fourth year, was fired on September 29 after a 3–2 start to the season. He was replaced by interim head coach Ed Orgeron. At the end of the regular season, Washington head coach Steve Sarkisian was hired as the new head coach beginning in 2014. This prompted Orgeron to resign before the bowl game. Clay Helton led the Trojans in the Las Vegas Bowl.
Personnel
Coaching staff
Lane Kiffin started the season as the Trojans' head coach, but was fired on September 29 after a 3–2 start. Ed Orgeron became the interim head coach, and went 6–2. He resigned on December 3 after it was announced that Steve Sarkisian was hired to be the permanent head coach.
Depth chart
Recruiting class
Schedule
Game summaries
Hawaii
Sources:
Washington State
1st quarter scoring: None
2nd quarter scoring: USC – Cody Kessler 4-yard run (Andre Heidari kick); WSU – Damante Horton 70-yard interception return (Andrew Furney kick)
3rd quarter scoring: None
4th quarter scoring: WSU – Furney 41-yard field goal
Boston College
Utah State
Arizona State
Head coach Lane Kiffin was fired after this game upon returning to Los Angeles with the team on September 29, 2013.
Arizona
Interim head coach Ed Orgeron took over the program for USC.
Notre Dame
1st quarter scoring: USC – Silas Redd 1-yard run (Andre Heidari kick); ND – Troy Niklas 7-yard pass from Tommy Rees (Kyle Brindza kick)
2nd quarter scoring: USC – Heidari 22-yard field goal; ND – TJ Jones 11-yard pass from Rees (Brindza kick)
Utah
1st quarter scoring: UTAH – Andy Phillips 42-yard field goal; USC – Nelson Agholor 30-yard pass from Cody Kessler (Andre Heidari kick)
2nd quarter scoring: USC – Heidari 35-yard field goal; USC – Heidari 38-yard field goal; USC – Heidari 28-yard field goal
3rd quarter scoring: USC – Heidari 40-yard field goal
4th quarter scoring: None
Oregon State
California
1st quarter scoring: USC – Nelson Agholor 75-yard punt return (Andre Heidari kick); USC – Silas Redd 12-yard pass from Cody Kessler (Heidari kick); USC – Javorius Allen 43-yard run (Heidari kick)
2nd quarter scoring: CAL – Kenny Lawler 4-yard pass from Jared Goff (Vincen D'Amato kick); CAL – Darius Powe 24-yard pass from Goff (D'Amato kick); USC – Allen 57-yard pass from Kessler (Heidari kick); USC – Josh Shaw 14-yard punt return (Heidari kick); USC – Agholor 93-yard punt return (kick missed)
3rd quarter scoring: USC – Allen 79-yard run (Heidari kick); USC – Ty Isaac 4-yard run (Heidari kick); CAL – Khalfani Muhammad 7-yard run (D'Amato kick)
4th quarter scoring: USC – Isaac 37-yard run (Heidari kick); CAL – Lawler 4-yard pass from Goff (D'Amato kick)
Stanford
1st quarter scoring: USC – Soma Vainuku 1-yard pass from Cody Kessler (Andre Heidari kick failed); STAN – T. Gaffney 35-yard run (C. Ukropina kick); USC – Javorius Allen 1-yard run (Marqise Lee pass from Kessler)
2nd quarter scoring: USC – Heidari 23-yard field goal; STAN – Ukropina 27-yard field goal
3rd quarter scoring: STAN – Gaffney 18-yard run (Ukropina kick)
4th quarter scoring: USC – Heidari 47-yard field goal
Colorado
UCLA
The previous season, the Bruins had defeated the Trojans 38–28 at the Rose Bowl.
1st quarter scoring: UCLA – Myles Jack 3-yard run (Ka'imi Fairbairn kick)
2nd quarter scoring: UCLA – Eddie Vanderdoes 1-yard run (Fairbairn kick); USC – Javorius Allen 11-yard run (Andre Heidari kick)
3rd quarter scoring: UCLA – Brett Hundley 12-yard run (Fairbairn kick); USC – Xavier Grimble 22-yard pass from Cody Kessler (Heidari kick); UCLA – Hundley 5-yard run (Fairbairn kick)
4th quarter scoring: UCLA – Paul Perkins 8-yard run (Fairbairn kick)
Fresno State (Las Vegas Bowl)
Tracy Jones of the American Athletic Conference was the referee.
1st quarter scoring: USC – Marqise Lee 10-yard pass from Cody Kessler (Andre Heidari kick); FS – Isaiah Burse 8-yard pass from Derek Carr (Colin McGuire kick blocked); USC – Nelson Agholor 40-yard pass from Kessler (Heidari kick)
2nd quarter scoring: USC – Agholor 17-yard pass from Kessler (Heidari kick); USC – Javorius Allen 24-yard run (Heidari kick); USC – Lee 40-yard pass from Kessler (Heidari kick)
3rd quarter scoring: FS – Davante Adams 23-yard pass from Carr (McGuire kick); USC – Heidari 39-yard field goal
4th quarter scoring: FS – Derron Smith 41-yard interception return (McGuire kick) ; USC – Allen 1-yard run (Heidari kick)
Rankings
Statistics
Scores by quarter (Pac-12 opponents)
Notes
December 21, 2013 – After winning the Las Vegas Bowl game, USC announced that Clay Helton will return next season.
References
USC
USC Trojans football seasons
Las Vegas Bowl champion seasons
USC Trojans football
|
571886
|
https://en.wikipedia.org/wiki/Ferranti%20Mark%201
|
Ferranti Mark 1
|
The Ferranti Mark 1, also known as the Manchester Electronic Computer in its sales literature, and thus sometimes called the Manchester Ferranti, was produced by British electrical engineering firm Ferranti Ltd. It was the world's first commercially available general-purpose digital computer. It was "the tidied up and commercialised version of the Manchester Mark I". The first machine was delivered to the Victoria University of Manchester in February 1951 (publicly demonstrated in July) ahead of the UNIVAC I, which was sold to the United States Census Bureau on 31 March 1951, although not delivered until late December the following year.
History and specifications
Based on the Manchester Mark 1, which was designed at the University of Manchester by Freddie Williams and Tom Kilburn, the machine was built by Ferranti of the United Kingdom. The main improvements over it were in the size of the primary and secondary storage, a faster multiplier, and additional instructions.
The Mark 1 used a 20-bit word stored as a single line of dots of electric charges settled on the surface of a Williams tube display, each cathodic tube storing 64 lines of dots. Instructions were stored in a single word, while numbers were stored in two words. The main memory consisted of eight tubes, each storing one such page of 64 words. Other tubes stored the single 80-bit accumulator (A), the 40-bit "multiplicand/quotient register" (MQ) and eight "B-lines", or index registers, which was one of the unique features of the Mark 1 design. The accumulator could also be addressed as two 40-bit words. An extra 20-bit word per tube stored an offset value into the secondary storage. Secondary storage was provided in the form of a 512-page magnetic drum, storing two pages per track, with about 30 milliseconds revolution time. The drum provided eight times the storage of the original designed at Manchester.
The instructions, like the Manchester machine, used a single address format in which operands were modified and left in the accumulator. There were about fifty instructions in total. The basic cycle time was 1.2 milliseconds, and a multiplication could be completed in the new parallel unit in about 2.16 milliseconds (about 5 times faster than the original). The multiplier used almost a quarter of the machine's 4,050 vacuum tubes. Several instructions were included to copy a word of memory from one of the Williams tubes to a paper tape machine, or read them back in. Several new instructions were added to the original Manchester design, including a random number instruction and several new instructions using the B-lines.
The original Mark 1 had to be programmed by entering alphanumeric characters representing a five-bit value that could be represented on the paper tape input. The engineers decided to use the simplest mapping between the paper holes and the binary digits they represented, but the mapping between the holes and the physical keyboard was never meant to be a binary mapping. As a result, the characters representing the values from 0–31 (five-bit numbers) looked entirely random, specifically /E@A:SIU½DRJNFCKTZLWHYPQOBG"MXV£.
The first machine was delivered to the University of Manchester. Ferranti had high hopes for further sales, and were encouraged by an order placed by the Atomic Energy Research Establishment for delivery in autumn 1952. However, a change of government while the second machine was being built led to all government contracts over £100,000 being cancelled, leaving Ferranti with a partially completed Mark 1. The company ultimately sold it to the University of Toronto, who had been building their own machine, but saw the chance to buy the complete Mark 1 for even less. They purchased it for around $30,000, a "fire sale" price, and gave it the nickname FERUT. FERUT was extensively used in business, engineering, and academia, among other duties, carrying out calculations as part of the construction of the St. Lawrence Seaway.
Mark 1 Star
After the first two machines, a revised version of the design became available, known as the Ferranti Mark 1 Star or the Ferranti Mark 1*. The revisions mainly cleaned up the instruction set for better usability. Instead of the original mapping from holes to binary digits that resulted in the random-looking mapping, the new machines mapped digits to holes to produce a much simpler mapping, ø£½0@:$ABCDEFGHIJKLMNPQRSTUVWXYZ. Additionally, several commands that used the index registers had side effects that led to quirky programming, but these were modified to have no side effects. Similarly, the original machines' JUMP instructions landed at a location "one before" the actual address, for reasons similar to the odd index behaviour, but these proved useful only in theory and quite annoying in practice, and were similarly modified. Input/output was also modified, with five-bit numbers being output least significant digit to the right, as is typical for most numeric writing. These, among other changes, greatly improved the ease of programming the newer machines.
At least seven of the Mark 1* machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam. Another was installed at Avro, the aircraft manufacturers, at their Chadderton factory in Manchester. This was used for work on the Vulcan among other projects.
Conway Berners-Lee and Mary Lee Woods, the parents of Tim Berners-Lee, inventor of the World Wide Web, both worked on the Ferranti Mark 1 and Mark 1*.
Computer music
Included in the Ferranti Mark 1's instruction set was a hoot command, which enabled the machine to give auditory feedback to its operators. The sound generated could be altered in pitch, a feature which was exploited when the Mark 1 made the earliest known recording of computer-generated music, playing a medley which included "God Save the King", "Baa Baa Black Sheep", and "In the Mood". The recording was made by the BBC towards the end of 1951, with the programming being done by Christopher Strachey, a mathematics teacher at Harrow and a friend of Alan Turing. It was not, however, the first computer to have played music; CSIRAC, Australia's first digital computer, achieved that with a rendition of "Colonel Bogey".
Computer games
In November 1951, Dr. Dietrich Prinz wrote one of the earliest computer games, a chess-playing program for the Manchester Ferranti Mark 1 computer. The limitations of the Mark 1 did not allow a whole game of chess to be programmed. Prinz could only program mate-in-two chess problems. The program examined every possible move for White and Black (thousands of possible moves) until a solution was found, which took 15–20 minutes on average. The program's restrictions were: no castling, no double pawn move, no en passant capture, no pawn promotion, and no distinction between checkmate and stalemate.
See also
History of computing hardware
List of vacuum-tube computers
Manchester computers
References
Notes
Citations
Bibliography
Further reading
External links
Ferranti Mark 1 at Computer50
A simulator of the Ferranti Mark 1, executing Christopher Strachey's Love letter algorithm from 1952
The Ferranti Mark 1* that went to Shell labs in Amsterdam, Netherlands (Dutch only), Google translation
Contains photo of the console
Early British computers
Ferranti
Ferranti computers
History of Manchester
History of science and technology in England
Department of Computer Science, University of Manchester
Vacuum tube computers
|
28788551
|
https://en.wikipedia.org/wiki/John%20Rudometkin
|
John Rudometkin
|
John Rudometkin (June 6, 1940 – August 4, 2015) was an American professional basketball player, formerly of the New York Knicks and San Francisco Warriors in the National Basketball Association (NBA). He was selected in the second round as the 11th pick in the 1962 NBA draft by the Knicks and spent three seasons playing in the league. Rudometkin was nicknamed "the Reckless Russian" by Chick Hearn, the Los Angeles Lakers broadcaster who used to broadcast USC men's basketball games before transitioning to the NBA.
College
Before attending the University of Southern California, Rudometkin spent one year playing basketball at Allan Hancock College, a junior college located in his hometown of Santa Maria, California. He averaged 18.2 points per game (ppg) in 30 games during the 1958–59 season.
Rudometkin then enrolled at USC in the fall of 1959 to play for the Trojans. As a center, he went on to have a highly successful career in college. In his three varsity seasons at the NCAA Division I institution, Rudometkin held career averages of 18.8 points and 10.5 rebounds in 79 games played. He scored 1,434 points, which stood as the school record for 23 years, and his 18.8 average is still the best career average at USC. In 1961, he led the Trojans to an outright conference title, which through 2009–10 remains their most recent outright conference championship. In all three seasons Rudometkin led the team in scoring and was named the team MVP, and as a senior in 1961–62 he was named a consensus second-team All-American.
Professional
After his college career ended, Rudometkin was selected in the second round as the 11th overall pick by the New York Knicks in the 1962 NBA draft. He spent the 1962–63 and 1963–64 seasons and part of the 1964–65 season playing for the Knicks until he was signed as a free agent on February 2, 1965, by the San Francisco Warriors, with whom he subsequently finished the season (and his career). Although Rudometkin played the center position in college, he was moved to play forward in the NBA. In three professional seasons, Rudometkin averaged 6.3 points, 3.1 rebounds and 0.5 assists per game.
Personal
After only three seasons, Rudometkin was forced to prematurely retire from basketball. His stamina weakened noticeably and doctors could not initially determine the cause. He was diagnosed with non-Hodgkin's lymphoma, a diverse group of blood cancers that include any kind of lymphoma except Hodgkin's lymphoma. He spent years in treatment, which caused total hair loss, temporary paralysis and the need to learn to walk all over again. Rudometkin eventually went into remission and cited both medicine and his faith as reasons why he was able to survive the tumor which had encircled his lungs and heart.
After his ordeal, Rudometkin married, had three sons, wrote a book about his experiences and traveled the country as a motivational speaker. He also spent time as a real estate investor and minister. Towards the end of his life, he resided in Newcastle, California with his wife of roughly 50 years, and required an oxygen tank to help him breathe. Rudometkin died on August 4, 2015 from chronic lung disease.
References
1940 births
2015 deaths
All-American college men's basketball players
Allan Hancock Bulldogs men's basketball players
American men's basketball players
American people of Russian descent
Basketball players from California
New York Knicks draft picks
New York Knicks players
Parade High School All-Americans (boys' basketball)
People from Newcastle, California
San Francisco Warriors players
Small forwards
Sportspeople from Santa Maria, California
USC Trojans men's basketball players
|
8475183
|
https://en.wikipedia.org/wiki/UDP-Lite
|
UDP-Lite
|
UDP-Lite (Lightweight User Datagram Protocol) is a connectionless protocol that allows a potentially damaged data payload to be delivered to an application rather than being discarded by the receiving station. This is useful as it allows decisions about the integrity of the data to be made in the application layer (by the application or the codec), where the significance of the bits is understood. UDP-Lite is described in RFC 3828.
Protocol
UDP-Lite is based on User Datagram Protocol (UDP), but unlike UDP, where either all or none of a packet is protected by a checksum, UDP-Lite allows for partial checksums that only cover part of a datagram (an arbitrary count of octets at the beginning of the packet), and will therefore deliver packets that have been partially corrupted. It is designed for multimedia protocols, such as Voice over IP (VoIP) or streamed video, in which receiving a packet with a damaged payload is better than receiving no packet at all. For conventional UDP and Transmission Control Protocol (TCP), a single bit in error will cause a "bad" checksum, meaning that the whole packet must be discarded: in this way, bit errors are "promoted" to entire packet errors even where the damage to the data is trivial. For computing the checksum UDP-Lite uses the same checksum algorithm used for UDP (and TCP).
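As an illustration of that shared algorithm, the sketch below computes the 16-bit ones' complement Internet checksum over a byte buffer in C; it omits the pseudo-header and the UDP-Lite coverage-length handling, and the function name is chosen here for clarity rather than taken from any API:

#include <stddef.h>

/* 16-bit ones' complement Internet checksum over a buffer,
   as used by UDP, TCP and UDP-Lite (pseudo-header omitted). */
static unsigned short inet_checksum(const unsigned char *data, size_t len)
{
    unsigned long sum = 0;

    while (len > 1) {                 /* sum 16-bit words in network byte order */
        sum += (unsigned long)((data[0] << 8) | data[1]);
        data += 2;
        len  -= 2;
    }
    if (len == 1)                     /* pad a trailing odd octet with zero */
        sum += (unsigned long)(data[0] << 8);

    while (sum >> 16)                 /* fold carries back into the low 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);

    return (unsigned short)~sum;      /* ones' complement of the final sum */
}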
Modern multimedia codecs, like G.718 and Adaptive Multi-Rate (AMR) for audio and H.264 and MPEG-4 for video, have resilience features already built into the syntax and structure of the stream. This allows the codec to (a) detect errors in the stream and (b) potentially correct, or at least conceal, the error during playback. These codecs are ideal partners for UDP-Lite, since they are designed to work with a damaged data stream, and it is better for these codecs to receive perhaps 200 bytes where a few bits are damaged rather than have to conceal the loss of an entire packet that was discarded due to a bad checksum. The application layer understands the significance of the data, where the transport only sees UDP packets. This means that error protection can be added if necessary at a higher layer, for example with a forward error correction scheme. The application is the best place to decide which parts of the stream are most sensitive to error and protect them accordingly, rather than have a single "brute force" checksum that covers everything equally. An example of this can be seen in research by Hammer et al. where UDP-Lite is coupled with the AMR codec to give improved speech quality in lossy network conditions.
Since most modern link layers protect the carried data with a strong cyclic redundancy check (CRC) and will discard damaged frames, making effective use of UDP-Lite requires the link layer to be aware of the network layer data being carried. Since no current IP stacks implement such cross-layer interactions, making effective use of UDP-Lite currently requires specially modified device drivers.
The IP protocol identifier is 136. UDP-Lite uses the same set of port numbers assigned by the Internet Assigned Numbers Authority (IANA) for use by UDP.
Support for UDP-Lite was added in the Linux kernel version 2.6.20.
Support for UDP-Lite was added in the FreeBSD kernel from r264212. The changeset was also MFC'ed back to stable/10 and became available in FreeBSD 10.1-RELEASE.
The BSD socket API is extended to support UDP-Lite by the third parameter of the socket() system call: set it to IPPROTO_UDPLITE to request a UDP-Lite socket:
int fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDPLITE);
One can also easily set what part of the packet will be covered by the checksum (starting from the beginning of the packet, including the header):
int val = 20; /* 8 octets of header + 12 octets of the application protocol. */
(void)setsockopt(fd, SOL_UDPLITE, UDPLITE_SEND_CSCOV, &val, sizeof val);
If a packet smaller than 12 octets is sent in such a setup, the checksum will cover the whole packet.
On the receiving side a socket will by default drop all packets which are not covered completely (emulating UDP). To permit smaller coverage one can use:
int val = 20; /* 8 octets of header + 12 octets of the application protocol. */
(void)setsockopt(fd, SOL_UDPLITE, UDPLITE_RECV_CSCOV, &val, sizeof val);
This will allow packets in which at minimum 12 octets of user data are checksummed. Any packet with smaller coverage will be silently dropped as bad. If a packet has a coverage length of at least 20 octets (including the header) and its checksum is correct, it will be delivered to the application (all or part of the payload may still be corrupted, because it might not be covered by the checksum, or because the checksum happened to be correct by chance, though the latter is very unlikely). If the checksum is incorrect, the packet will be dropped, because it is impossible to know whether the error was in the payload data or in the UDP-Lite header, so the packet could actually have been destined for a different program.
The smallest possible coverage is 8 octets: the header itself always needs to be included in the checksum. Packets with a smaller coverage length will always be dropped independent of any settings (ignoring sniffers which are interested in all traffic) as not conforming to the standard.
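Putting the calls above together, a minimal sender sketch for a Linux system might look as follows; the destination address, port and payload are illustrative, and the fallback constant definitions use the values from the Linux UDP-Lite documentation in case the system headers do not provide them:

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

#ifndef IPPROTO_UDPLITE
#define IPPROTO_UDPLITE    136    /* IP protocol identifier for UDP-Lite */
#endif
#ifndef SOL_UDPLITE
#define SOL_UDPLITE        136    /* socket option level, per Linux documentation */
#endif
#ifndef UDPLITE_SEND_CSCOV
#define UDPLITE_SEND_CSCOV 10     /* sender checksum coverage option */
#endif

int main(void)
{
    int fd = socket(PF_INET, SOCK_DGRAM, IPPROTO_UDPLITE);
    int cscov = 20;               /* 8-octet header + first 12 octets of payload */
    struct sockaddr_in dst;
    const char payload[] = "example payload";

    if (fd < 0)
        return 1;

    /* Checksum only the header and the first 12 octets of application data. */
    (void)setsockopt(fd, SOL_UDPLITE, UDPLITE_SEND_CSCOV, &cscov, sizeof cscov);

    memset(&dst, 0, sizeof dst);
    dst.sin_family      = AF_INET;
    dst.sin_port        = htons(5555);                 /* illustrative port */
    dst.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    return sendto(fd, payload, sizeof payload, 0,
                  (struct sockaddr *)&dst, sizeof dst) < 0;
}

A corresponding receiver would create the socket the same way and set UDPLITE_RECV_CSCOV, as shown earlier, to state the minimum coverage it is willing to accept.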
Support
UDP-Lite is supported by the following operating systems:
FreeBSD, since version 10.1-RELEASE
Linux, since kernel version 2.6.20
Also available on Windows through a third-party library, WULL
References
External links
— The Lightweight User Datagram Protocol (UDP-Lite)
— MIB for the UDP-Lite protocol
— RObust Header Compression (ROHC): Profiles for User Datagram Protocol (UDP) Lite
— Unicast UDP Usage Guidelines for Application Designers
Transport layer protocols
|
4467334
|
https://en.wikipedia.org/wiki/Piranha%20Interactive%20Publishing
|
Piranha Interactive Publishing
|
Piranha Interactive Publishing, Inc. was an Arizona, United States software publishing firm founded in 1995 with seven principals. The business plan was to secure and publish third-party developer software, thus avoiding the high risk and cost of in-house development, and therefore passing on higher royalties to the licensed developers.
The corporation secured several retail titles in both educational and entertainment genres in the first few years of operation. Early successes led to a multimillion-dollar initial public offering (IPO) on NASDAQ in late 1997.
Closing
The company continued operations until cash-flow problems forced it out of business in July 1999. Piranha Interactive Publishing did not file for Bankruptcy; rather, they closed their doors and liquidated their assets. The following press release was sent out and posted on the company's web site:
Piranha Interactive Publishing, Inc. Terminates Operations
For Immediate Release
(July 9, 1999) Tempe, Arizona -
Piranha Interactive Publishing, Inc. announced today that it has ceased operations effective as of the close of business on Friday, July 2, 1999. The Company has been unable to generate sufficient cashflow from its operations to meet its expenses and has been unable to secure additional working capital. The Company's securities were delisted from the Nasdaq SmallCap market as of the close of business on June 23. The lack of an active market for Company's securities drastically limited the Company's ability to secure additional capital. The Company is in the process of liquidating all remaining assets and the proceeds will be distributed to creditors. The Company's liabilities are far in excess of its assets. The final distribution to creditors is expected to be de minimis.
Software
Piranha Interactive's most successful published software title was RedShift 3, an interactive astronomy program that was the winner of the 1999 Codie "Excellence In Software" Award for "Best New Home Education For Teenagers and Adults."
Some other Piranha titles included:
Air Blocks (IBM PC)
Ancient Origins (IBM PC) (Macintosh)
Dead Reckoning (IBM PC)
Extreme Tactics (IBM PC)
Majestic (Hybrid Windows/Macintosh)
Morpheus (Hybrid Windows/Macintosh)
Planetary Missions (IBM PC) (Macintosh)
Preschool Mother Goose (Hybrid Windows/Macintosh)
Revenge of the Toys (IBM PC)
Skybase (IBM PC) (Macintosh)
Syn-Factor (Hybrid Windows/Macintosh)
The company also published a number of compilations (or "Piranha Packs") of educational, productivity and entertainment titles.
Affiliations
There is some confusion on the Internet as to which companies were affiliated with Piranha Interactive.
For example, one site erroneously indicates that Piranha Interactive was an American label of MacMillan Software, Ltd., a British firm. While Piranha Interactive licensed several of its titles to foreign publishers for distribution in other countries, the two companies were distinct and never collaborated on software publishing efforts. However, MacMillan Software did have an in-house Piranha label for some of their titles, giving rise to the confusion.
At least two firms appear to have been using some variation of the Piranha name and/or a variation of the logo: Piranha Bytes Software GmbH (founded in 1997) and Piranha Games.
Another source of confusion is the suggestion that the company was reconstituted in 2000 as Tiburon Interactive Publishing, Inc. However, Piranha completely closed operations in 1999 and is no longer extant. Shortly thereafter, two of the former Piranha employees bought the licenses to some of the software titles and continue to publish them under a different company, Tiburon Interactive.
References
Piranha Interactive Publishing Official Site (archived) at Internet Archive
Publishing companies established in 1995
Video game companies of the United States
Companies based in Tempe, Arizona
|
4692170
|
https://en.wikipedia.org/wiki/Narus%20%28company%29
|
Narus (company)
|
Narus Inc. was a software company and vendor of big data analytics for cybersecurity.
History
In 1997, Ori Cohen, Vice President of Business and Technology Development for VDONet, founded Narus with Stas Khirman in Israel. Presently, they are employed with Deutsche Telekom AG and are not members of Narus' Executive Team. In 2010, Narus became a subsidiary of Boeing, located in Sunnyvale, California. In 2015, Narus was sold to Symantec.
Management
In 2004, Narus named William Crowell, a former Deputy Director of the National Security Agency, to its board of directors.
Narus Software
Narus software primarily captures computer network traffic in real time and analyzes the results.
Prior to 9/11 Narus built carrier-grade tools to analyze IP network traffic for billing purposes, to prevent what Narus called "revenue leakage". Post-9/11 Narus added more "semantic monitoring abilities" for surveillance.
Mobile
Narus provided Telecom Egypt with deep packet inspection equipment, a content-filtering technology that allows network managers to inspect, track and target content from users of the Internet and mobile phones, as it passes through routers. The national telecommunications authorities of both Pakistan and Saudi Arabia are global Narus customers.
Controversies
AT&T wiretapping room
Narus supplied the software and hardware used at AT&T wiretapping rooms, according to whistleblowers Thomas Drake, and Mark Klein.
See also
Carnivore (software)
Communications Assistance For Law Enforcement Act
Computer surveillance
ECHELON
Hepting v. AT&T, the 2006 lawsuit in which the Electronic Frontier Foundation alleges AT&T allowed the NSA to tap the entirety of its clients' Internet and voice over IP communications using Narus equipment.
Lincoln (surveillance)
Room 641A
SIGINT
Total Information Awareness
Verint Systems
References
External links
Wired News article
Wired News article (AT&T whistleblower Mark Klein discusses Narus STA 6400)
Frontline Flash Video "Spying on the Home Front" TV documentary originally aired on PBS 15 May 2007 with a section entitled "The NSA's Eavesdropping at AT&T" with the story of Mark Klein exposing NSA wiretapping with a secure room and Narus STA 6400 at an AT&T facility in San Francisco, CA
Software companies established in 1997
Software companies based in the San Francisco Bay Area
Companies based in Sunnyvale, California
Defense companies of the United States
Computer security software companies
2010 mergers and acquisitions
2015 mergers and acquisitions
Boeing mergers and acquisitions
NortonLifeLock acquisitions
Software companies of the United States
|
18162919
|
https://en.wikipedia.org/wiki/1st%20Airborne%20Command%20Control%20Squadron
|
1st Airborne Command Control Squadron
|
The 1st Airborne Command Control Squadron is part of the 595th Command and Control Group at Offutt Air Force Base, Nebraska. It operates the Boeing E-4 aircraft conducting airborne command and control missions.
The squadron is one of the oldest in the United States Air Force, its origins dating to 25 September 1917, when it was organized at Fort Omaha, Nebraska. It served overseas in France as part of the American Expeditionary Forces during World War I. The squadron saw combat during World War II, and became part of the Strategic Air Command during the Cold War.
History
World War and Balloon School
The first predecessor of the squadron was organized at Fort Omaha, Nebraska in September 1917 as Company A, 2d Balloon Squadron. Two months later it departed for overseas service on the Western Front, arriving in France in January 1918. It entered combat as an observation unit with the French Eighth Army on 19 April 1918, operating observation balloons over the front lines. Once forces of the American Expeditionary Forces had built up, it continued to operate as the 1st Balloon Company with the American I Corps until 17 October 1918. Following the end of the war, it served with III Corps as part of the occupation forces until April 1919.
Interwar years
In the spring of 1919, the squadron returned to the United States and was stationed at Ross Field, California as part of the Air Service Balloon School. In June 1922, the Balloon School moved to Scott Field, Illinois and Ross Field was closed as a military installation. The squadron was inactivated with the closure of Ross.
The second predecessor of the squadron, also designated the 1st Balloon Company, was activated at Scott in May 1929. After a brief period of training with the 21st Airship Group at Scott, it moved to Post Field, located on Fort Sill, Oklahoma, where it was assigned to the Field Artillery School. It trained and conducted exercises with the school. At the beginning of World War II, it operated barrage balloons, but that mission was assigned to the coast artillery and the squadron was disbanded two months after the Japanese attack on Pearl Harbor.
World War II
The third predecessor of the squadron was activated in April 1942 as the 1st Air Corps Ferrying Squadron at Long Beach Army Air Base, the location of a Douglas Aircraft Company manufacturing plant. It ferried aircraft from the Douglas factory and other factories in the Western Procurement District to overseas departure points. However, the Army Air Forces was finding that standard military units, based on relatively inflexible tables of organization, were not well adapted to the training and logistics support mission. Accordingly, it adopted a more functional system in which each base was organized into a separate numbered unit.
In March 1944, Air Transport Command units assigned to the 6th Ferrying Group were combined into the 556th AAF Base Unit.
Airborne command and control
On 1 June 1962, Headquarters Command organized the 1000th Airborne Command Control Squadron at Andrews Air Force Base to operate the National Emergency Airborne Command Post and assigned it to the 1001st Air Base Wing. By 1965, the squadron was operating Boeing EC-135 aircraft to support this mission. On 1 July 1969, the 1st Airborne Command Control Squadron was activated and assumed the mission, personnel and equipment of the 1000th Squadron.
In 1974, the squadron began to replace its EC-135s with more capable Boeing E-4s, completing the upgrade the following year. In November 1975, the squadron was reassigned from Andrews' 1st Composite Wing to the 55th Strategic Reconnaissance Wing, and on 1 July 1977 it moved to join the 55th Wing at Offutt Air Force Base, Nebraska. On 1 October 2016, the unit was reassigned to the newly activated 595th Command and Control Group under the control of Air Force Global Strike Command.
Lineage
1st Airship Company
Organized as Company A, 2d Balloon Squadron on 25 September 1917
Redesignated 1st Balloon Company on 19 June 1918
Inactivated on 25 July 1922
Redesignated 1st Airship Company on 24 March 1923
Consolidated with the 1st Balloon Company as the 1st Balloon Company on 31 July 1929
1st Balloon Squadron
Constituted as the 1st Balloon Company on 18 October 1927
Activated on 17 May 1929
Consolidated with the 1st Airship Company on 31 July 1929
Redesignated 1st Balloon Squadron on 1 October 1933
Disbanded on 6 February 1942
Reconstituted and consolidated with the 1st Ferrying Squadron and the 1st Airborne Command Control Squadron as the 1st Airborne Command Control Squadron on 19 September 1985
1st Ferrying Squadron
Constituted as the 1st Air Corps Ferrying Squadron on 18 February 1942
Activated on 15 April 1942
Redesignated 1st Ferrying Squadron on 12 May 1943
Disbanded on 1 April 1944
Reconstituted and consolidated with the 1st Balloon Squadron and the 1st Airborne Command Control Squadron as the 1st Airborne Command Control Squadron on 19 September 1985
1st Airborne Command Control Squadron
Constituted as the 1st Airborne Command Control Squadron on 9 May 1969
Activated on 1 July 1969
Consolidated with the 1st Balloon Squadron and the 1st Ferrying Squadron on 19 September 1985
Assignments
Unknown, 25 September 1917
Balloon Wing, I Army Corps, July 1918
Balloon Group, I Army Corps, 8 October 1918
Balloon Group, III Army Corps, c. 20 November 1918 – 16 April 1919
Balloon School, Ross Field, California (later, Air Service Balloon Observers School), July 1919
Ninth Corps Area, 30 June–25 July 1922
Sixth Corps Area, 17 May 1929
Field Artillery School, June 1929
III Air Support Command (attached to Field Artillery School), 1 September 1941 – 6 February 1942
California Sector, Air Corps Ferrying Command (later 6th Ferrying Group), 15 April 1942 – 1 April 1944
1st Composite Wing, 1 July 1969
55th Strategic Reconnaissance Wing, 1 November 1975
55th Operations Group, 1 September 1991
595th Command and Control Group, 6 October 2016
Stations
Fort Omaha, Nebraska, 25 September 1917
Garden City, New York, 30 November–7 December 1917
Camp de Souge, Gironde, France, 3 January 1918
Brouville, France, 15 April 1918
Les Ecoliers (near Montreuil-aux-Lions), France, 19 July 1918
Epaux-Bezu, France, 22 July 1918
Épieds, France, 25 July 1918
Artois Ferme (near Courpoil), France, 28 July 1918
Mareuil-en-Dole, France, 5 August 1918
Courcelles-sur-Vesle, France, 13 August 1918
Tremblecourt, France, 23 August 1918
La Queue de Theinard (near Domevre-en-Haye), France, 29 August 1918
Bois de Brule (near Neuvilly-en-Argonne), France, 27 September 1918
Varennes-en-Argonne, France, 2 October 1918
Chatel-Chehery, France, 11 October 1918
Auzeville-en-Argonne, France, 17 October 1918
Mercy-le-Bas, France, 21 November 1918
Euren, Germany, 8 December 1918
Niederburg (near Koblenz), Germany, 19 December 1918
Colombey-les-Belles, France, 17 April 1919
St Nazaire, France, c. 5 May 1919–c. late May 1919
Camp Lee, Virginia, c. 6 June 1919
Ross Field, California, July 1919 – 25 July 1922
Scott Field, Illinois, 17 May 1929
Post Field, Oklahoma, 24 June 1929 – 6 February 1942
Long Beach Army Air Base, California, 15 April 1942 – 1 April 1944
Andrews Air Force Base, Maryland, 1 July 1969
Offutt Air Force Base, Nebraska, 1 July 1977 – present
Aircraft and Balloons
Type R Observation Balloon, 1918-1919, 1919-1922
A-6 Spherical Balloon, 1929-1942
A-7 Spherical Balloon, 1929-1942
C-3 Observation Balloon, 1929-c. 1939
C-6 Observation Balloon, 1937, 1938-c. 1942
D-2 Barrage Balloon, 1939
D-3 Barrage Balloon, 1940-1942
D-4 Barrage Balloon, 1940-1942
D-5, Barrage Balloon, 1940-1942
D-6 Barrage Balloon, 1940-1942
Ferried various aircraft, 1942-1944
Boeing EC-135J, 1969-1975
Boeing E-4, 1974 – Present
References
Notes
Explanatory notes
Citations
Bibliography
Military units and formations in Nebraska
001
United States nuclear command and control
Command and control squadrons of the United States Air Force
|
39243863
|
https://en.wikipedia.org/wiki/List%20of%20Women%20in%20Technology%20International%20Hall%20of%20Fame%20inductees
|
List of Women in Technology International Hall of Fame inductees
|
The Women in Technology International Hall of Fame was established in 1996 by Women in Technology International (WITI) to honor women who contribute to the fields of science and technology.
Women in Technology International Hall of Fame inductees
1996
Ruth Leach Amonette (1916–2004), IBM's first woman vice president (1943–1953)
Dr. Eleanor K. Baum (1940–), American electrical engineer and educator. First female dean of (Cooper Union) School of Engineering. First female president of the American Society for Engineering Education
Dr. Jaleh Daie (1948–), Managing Partner, Aurora Equity, a Palo Alto-based investment company financing technology start ups. Treasurer of US Space Foundation (first woman appointed to its Board of Directors). Member of Band of Angels
Dr. Barbara Grant, venture capitalist, former Vice President and General Manager in the Data Storage Division at IBM
Stephanie L. Kwolek (1923–2014), inventor of poly-paraphenylene terephthalamide (Kevlar)
Dr. Misha Mahowald (1963–1996), computational neuroscientist
Linda Sanford (1953–), IBM Enterprise Transformation (see also Linda Sanford's Oral History Interview)
Dr. Cheryl L. Shavers (1953–), Under Secretary for Technology, US Commerce Department (1999–2001)
Dr. Sheila Widnall (1938–), American aerospace researcher and Institute Professor at Massachusetts Institute of Technology. United States Secretary of the Air Force (1993–1997) (first female Secretary of the Air Force). First woman to lead an entire branch of the U.S. military in the Department of Defense
Dr. Chien-Shiung Wu (1912–1997), Chinese-American physicist who worked on Manhattan Project
1997
Frances Allen (1932–2020), American computer scientist and pioneer in the field of optimizing compilers (see also Frances Allen's Oral History Interview)
Carol Bartz (1948– ), former president and CEO of Yahoo!, former chairman, president and CEO at Autodesk
The ENIAC Programmers: The original six women programmers of ENIAC (Electronic Numerical Integrator And Computer), first general-purpose electronic digital computer
Kathleen Antonelli (1921–2006)
Jean Jennings Bartik (1924–2011)
Frances Snyder Holberton (1917–2001)
Marlyn Wescoff Meltzer (1922–2008)
Frances Bilas Spence (1922–2012)
Ruth Lichterman Teitelbaum (1924–1986)
Pamela Meyer Lopker, Founder, President and Chairman of the Board, QAD Inc., an Enterprise Resource Planning / manufacturing software company
Marcia Neugebauer (1932–), American geophysicist whose research yielded the first direct measurements of the solar wind and shed light on its physics and its interaction with comets
Donna Shirley (1941–), former manager of Mars Exploration at the NASA Jet Propulsion Laboratory (see also Donna Shirley Oral History Interview at NASA Oral History Project: "Herstory", Donna Shirley Interviews, Mars Exploration Program)
Shaunna Sowell, former Vice President & Manager of Worldwide semiconductor Facilities, Texas Instruments
Patty Stonesifer (1956–), former Co-Chair and Chief Executive Officer of Bill and Melinda Gates Foundation, current President and CEO of Martha's Table
Patricia Wallington, former Corporate Vice President and CIO, Xerox Corporation
Rosalyn S. Yalow (1921–2011), American medical physicist, and co-winner of 1977 Nobel Prize in Physiology or Medicine (together with Roger Guillemin and Andrew Schally) for development of the radioimmunoassay (RIA) technique. She was the second American woman to be awarded the Nobel Prize in Physiology or Medicine, after Gerty Cori
1998
Dr. Anita Borg (1949–2003), American computer scientist who founded the Institute for Women and Technology (now the Anita Borg Institute for Women and Technology) and the Grace Hopper Celebration of Women in Computing
Mildred Spiewak Dresselhaus (1930–2017), Institute Professor and Professor of Physics and Electrical Engineering (Emeritus) in the area of condensed matter physics at Massachusetts Institute of Technology. (see also Vegas Science Trust video interviews with scientists: Mildred Dresselhaus)
Dr. Gertrude B. Elion (1918–1999), American biochemist and pharmacologist; 1988 recipient of Nobel Prize in Physiology or Medicine. Research led to the development of AIDS drug AZT
Julie Spicer England, former Vice President, Texas Instruments Incorporated, and General Manager, RFID Systems
Eleanor Francis Helin (1932–2009), American astronomer who was principal investigator of Near-Earth Asteroid Tracking (NEAT) program of NASA's Jet Propulsion Laboratory
1999
Yvonne Claeys Brill (1924–2013), Canadian scientist known for development of rocket and jet propulsion technologies at NASA and the International Maritime Satellite Organization. (see also National Science & Technology Medals Foundation video)
Sherita T. Ceasar, Vice President Product Engineering Planning and Strategy, Comcast Communications
Dr. Thelma Estrin (1924–2014), computer scientist and engineer who pioneered work in expert systems and biomedical engineering. She was one of the first to apply computer technology to healthcare and medical research
Dr. Claudine Simson, former Executive Vice President, Chief Technology Officer, LSI Corporation; current Director & Business Development Executive, Research and IP, Worldwide Growth Markets, IBM Corporation
Yukako Uchinaga, Vice President, IBM's Yamato Software Development Laboratory (see also Yukako Uchinaga's Oral History Interview)
2000
Dr. Bonnie Dunbar (1949–), former NASA astronaut; former President and CEO of The Museum of Flight. Leads the University of Houston's STEM Center (science, technology, engineering and math) and joined the faculty of the Cullen College of Engineering. (see also Q&A with Dr. Bonnie Dunbar, University of Houston Cullen College of Engineering)
Dr. Irene Greif, Founder of field of Computer-Supported Cooperative Work (CSCW). IBM Fellow; Director, Collaborative User Experience research and IBM Center for Social Business.
Dr. Darleane C. Hoffman (1926–), American nuclear chemist among researchers who confirmed existence of Seaborgium, element 106
Dr. Jennie S. Hwang, first woman to receive Ph.D. from Case Western Reserve University's Materials Science and Engineering; expert in surface-mount technology
Dr. Shirley Ann Jackson (1946–), President of Rensselaer Polytechnic Institute. American physicist. First African-American to serve as Chairman of U.S. Nuclear Regulatory Commission, elected to U.S. National Academy of Engineering, and to receive Vannevar Bush Award. She is first African-American woman to lead a top-50 national research university
2001
Duy-Loan Le (1962–), Vietnamese American engineer and first woman and Asian to be elected to rank of Texas Instruments Senior Fellow
Janet Perna, former General Manager of Information Management Solutions at [IBM] specializing in Distributed database systems / IBM DB2 (see also Janet Perna's oral history)
Darlene Solomon, Senior Vice President, Chief Technology Officer, Agilent Technologies specializing in Bio-analytical and electronic measurement
2002
Judy Estrin, American business executive, JLabs, LLC. Former Chief Technology Officer for Cisco Systems
Dr. Caroline Kovac, former General Manager, IBM Healthcare and Life Sciences (see also Caroline Kovac's oral history)
Dr. Elaine Surick Oran, Senior Scientist, Reactive Flow Physics, U.S. Naval Research Lab, Laboratory for Computational Physics and Fluid Dynamics
2003
Chieko Asakawa (1958–), IBM Fellow. Group Leader, IBM Tokyo Research Laboratory, Accessibility Research; developed IBM Home Page Reader, a self-voicing web browser designed for people who are blind (see also Japanese Wikipedia entry)
Wanda Gass, Texas Instruments Fellow; Executive Director and Founder, High-Tech High Heels ("HTHH"), a donor-advised fund at Dallas Women's Foundation that funds programs to prepare girls to pursue degrees in Science, Technology, Engineering and Math (STEM) (see also Wanda Gass oral history)
Dr. Kristina M. Johnson (1957-), American former government official, academic, engineer, and business executive
Shirley C. McCarty, aerospace consultant
2004
Dr. Mary-Dell Chilton, Ph.D. (1939–), founder of modern plant biotechnology and genetic modification; known as the "queen of Agrobacterium"
Eileen Gail de Planque, Ph.D. (1944–2010), expert on environmental radiation measurements; first woman and first health physicist to become a Nuclear Regulatory Commission Commissioner; technical areas of expertise include solid state dosimetry, radiation transport and shielding, environmental radiation, nuclear facilities monitoring and problems of reactor and personnel dosimetry
Dr. Pat Selinger, IBM Fellow; American computer scientist best known for her work on relational database management systems (see also Patricia Selinger oral history)
Judy Shaw, Director, CMOS Module Development at Texas Instruments
Dr. Susan Solomon (1956–), atmospheric chemist; first to propose the chlorofluorocarbon free radical reaction mechanism as the cause of the Antarctic ozone hole
2005
Barbara Bauer, technology innovation, software development, global management
Sonja Bernhardt OAM (1959–), Australian information technology executive; founder and Inaugural President of WiT (Women in Technology) in Queensland
Sandra Burke Ph.D., cardiovascular physiologist, former pre-clinical cardiovascular researcher at Abbott Vascular's Research and Advanced Development; developed drug-coated intravascular stents for treatment of restenosis
Melendy Lovett, senior vice president of Texas Instruments; President of Texas Instruments's worldwide Education Technology business; STEM education and workforce advocate, High-Tech High Heels (HTHH)
Amparo Moraleda Martínez (1964–), former COO Iberdrola International Division; former president for Southern Europe, IBM (see also Spanish language Wikipedia entry)
Neerja Raman, global manufacturing and poverty. Senior Research Fellow, Stanford University; Advisor, Committee for Cyber-Infrastructure, National Science Foundation; formerly HP Labs
2006
Maria Azua, former IBM Vice President of Advanced Cloud Solutions, former IBM VP of Technology & Innovation; patent in Transcoder technology, Java implementation and enhancements, data manipulation
Françoise Barré-Sinoussi (1947–) French virologist; Director of Regulation of Retroviral Infections Division (Unité de Régulation des Infections Rétrovirales) at Institut Pasteur. Nobel Prize in Physiology or Medicine (2008) for discovery of virus responsible for HIV
Kim Jones, former President & Managing Director for Sun Microsystems UK & Ireland; former VP of Global Education, Government and Health Sciences, Sun Microsystems; Chairman of the Board and Chief Executive Officer of Curriki
Nor Rae Spohn, former SVP Hewlett-Packard LaserJet Printing Business
Dr. Been-Jon Woo, Director, Technology Integration & Development, Intel
2007
Dr. Wanda M. Austin (1954–), first African American President and CEO, The Aerospace Corporation
Helen Greiner (1967–), Co-Founder of iRobot; CEO of CyPhyWorks, maker of the hover drone. Director of the Board, Open Source Robotics Foundation (see also National Center for Women & Information Technology (NCWIT), Interview with Helen Greiner)
Lucy Sanders, CEO and Co-Founder of National Center for Women & Information Technology (NCWIT); Executive-in-Residence for ATLAS Institute at University of Colorado at Boulder
Padmasree Warrior, Chief Technology & Strategy Officer of Cisco Systems; former CTO of Motorola, Inc.
2008
Deborah Estrin (1959–), Ph.D., works in networked sensors. First academic faculty member at Cornell Tech; Founding Director, Center for Embedded Networked Sensing, UCLA. Winner of a 2018 MacArthur Fellowship.
Dr. Susan P. Fisher-Hoch (1940-), expert on infectious diseases; Professor of Epidemiology, The University of Texas Health Science Center School of Public Health
Mary Lou Jepsen, Head of Display Division at Google X Lab; founder of Pixel Qi, a manufacturer of low-cost, low-power LCD screens for laptops; Co-Founder and first Chief Technology Officer One Laptop per Child (OLPC) (see also TED talk)
Gordana Vunjak-Novakovic, Serbian American professor of biomedical engineering, Columbia University; Director, Columbia's Laboratory for Stem Cells and Tissue Engineering. Areas of research: tissue engineering, bioreactors, biophysical regulation, tissue development, stem cell research.
Jian (Jane) Xu, Ph.D., CTO, IBM China Systems and Technology Labs; Distinguished Engineer of IBM Watson Research, focusing on the research of IT and Wireless Convergence
2009
Patricia S. Cowings (1948–), first African-American female scientist to be trained as an astronaut payload specialist; Research Psychologist, Human Systems Integrations Division, NASA Ames Research Center
Maxine Fassberg, Vice President, Technology and Manufacturing Group, Fab 28 Plant Manager; General Manager, Intel Israel
Dr. Sharon Nunes, VP, IBM's Smarter Cities Strategy & Solutions, which focuses on improving quality of life at urban centers worldwide by partnering with city governments to improve transportation, waste management and energy use
Dr. Carolyn Turbyfill, VP Engineering, Stacksafe
2010
Sandy Carter, IBM's worldwide VP, Social Business Evangelism and Sales, IBM’s Social Business initiative
Dr. Ruth A. David, President and CEO, ANSER (Analytic Services Inc); Member, Homeland Security Advisory Council; former Deputy Director for Science and Technology, CIA.
Adele Goldberg (1945–), computer scientist; participated in developing programming language Smalltalk-80 and various concepts related to object-oriented programming while a researcher at Xerox Palo Alto Research Center (PARC), in the 1970s, then founding Chairman, ParcPlace Systems, Inc.
Susie Wee, CTO, Cisco Systems; former CTO, Client Cloud Services, HP Labs. Focus on streaming media; co-edited JPSEC standard for JPEG-2000 image security (see also TED TEDxBayArea Women talk)
Dr. Ruth Westheimer (born Karola Siegel (born 1928); known as "Dr. Ruth"), German-American sex therapist, talk show host, author, professor, Holocaust survivor, and former Haganah sniper.
2011
Alicia Abella, Ph.D., Executive Director, Innovative Services Research, AT&T Labs; Member, President's Advisory Commission on Educational Excellence for Hispanics
Evelyn Berezin (1925–2018), American computer engineer best known for designing one of the first word processors. She also helped design some of the first computer reservations systems, computer data systems for banks; Management Consultant, Brookhaven Science Associates (BSA) (BSA manages Brookhaven National Laboratory for Department of Energy's Office of Science)
Diane Pozefsky, Ph.D., Research Professor, Department of Computer Science, University of North Carolina; specialized in networking technologies at IBM (see also IBM Diane Pozefsky oral history)
Sophie V. Vandebroek, Ph.D., CTO and President, Xerox Innovation Group, Xerox Corporation
Lynda Weinman (1955–), Co-Founder and Executive Chair, Lynda.com, an online software training web site
2012
Genevieve Bell, Ph.D., Australian anthropologist and researcher. Intel Fellow; Director, User Interaction and Experience, Intel Labs, Intel Corporation
Joanne Martin, Ph.D. (1947–). Served on management team that developed and delivered IBM's first supercomputer, with specific responsibility for the performance measurement and analysis of the system. Distinguished Engineer and VP of Technology, IBM Corporation
Jane Lubchenco (1947-), Ph.D. Ukrainian-American environmental scientist and marine ecologist; first woman Administrator of National Oceanic and Atmospheric Administration (NOAA); (see also Charlie Rose interview); Haas Distinguished Visitor, Stanford University
Gwynne Shotwell (1963-), President, SpaceX (see also Shotwell: The Future of Space talk at Northwestern)
2013
Marian Croak, Senior Vice President of Applications & Services Infrastructure at AT&T Labs
Peggy Johnson, Executive Vice President of Qualcomm Technologies and President of Global Market Development
Lisa McVey, CIO of Enterprise Information Systems, Enterprise Medical Imaging, Automation, McKesson Corporation
Heidi Roizen (1958-), Venture Partner of Draper Fisher Jurvetson
Laura Sanders, General Manager of Delivery Engineering & Technology and CTO for Global Technology Services, IBM Corporation
2014
Orna Berry (1949-), Israeli Corporate Vice President, Growth and Innovation, EMC Centers of Excellence EMEA, EMC Corporation
Jennifer Pahlka (1969-), Founder and Executive Director, Code for America
Kim Polese (1961-), Chair, ClearStreet
Kris Rinne, Senior Vice President, Network & Product Planning, AT&T Services, Inc.
Lauren States, Vice President, Strategy and Transformation, IBM Software Group
2015
Cheemin Bo-Linn, President & CEO, Peritus Partners
Nichelle Nichols (1932-), American actor
Pam Parisian, Chief Information Officer, AT&T
Sheryl Root, President and CEO, RootAnalysis
Marie Wieck, General Manager, Middleware, IBM
2016
Kimberly Bryant, Founder and Executive Director, Black Girls Code
Roberta Banaszak Gleiter, CEO, Global Institute For Technology & Engineering
Harriet Green OBE, General Manager, IBM
Jennifer Yates, Ph.D., Assistant Vice President, AT&T
Ellie Yieh, Corporate Vice President, Applied Materials
2017
Beena Ammanath, Global Vice President, Hewlett Packard Enterprise
Krunali Patel, Vice President, Texas Instruments
Lisa Seacat DeLuca, Strategist, IBM
Selma Svendsen, Senior Director, iRobot
Elizabeth Xu, Chief technology officer, BMC Software
2018
Rhonda Childress, IBM Fellow VP - GTS Data Security and Privacy Officer, IBM
Elizabeth "Jake" Feinler, Internet Pioneer
Roz Ho, Senior VP and GM, Consumer & Metadata, TiVo
Santosh K. Kurinec, Rochester Institute of Technology, Professor of Electrical & Microelectronic Engineering
Yanbing Li, Ph.D., Sr VP and General Manager, Storage and Availability Business Unit, VMware
Rashmi Rao, Global Head, Advanced Engineering, CoC User Experience, Harman
2019
Heather Hinton, Vice President & IBM Distinguished Engineer, IBM.
Julia Liuson, Corporate Vice President, Developer Tools, Microsoft.
Dr. Sara Rushinek, Professor of Business Technology and Health Informatics, University of Miami.
Dr. Natalia Trayanova, Professor of Biomedical Engineering and Medicine, Johns Hopkins University
Blanca Treviño, President & CEO, Softtek
2021
Arundhati Bhattacharya, Chairperson and Chief Executive Officer for the State Bank of India
Lisa P. Jackson, EPA Administrator
Olu Maduka, Founding Board Member and current Chairman of the Board of Women in Energy, Oil, and Gas Nigeria (WEOG)
Karen Quintos, Senior Vice President and Chief Marketing Officer, Dell Inc.
Angie Ruan, Vice President of Engineering at Chime
Lisa T. Su, Ph.D., President & CEO, Advanced Micro Devices
Kara Swisher, Editor-at-large of New York Media
Tae Yoo, Senior Vice President, Corporate Affairs & Corporate Social Responsibility, Cisco
References
External links
WITI website
WITI Hall of Fame website
Lists of engineers
Lists of hall of fame inductees
Technology International
Science and technology hall of fame inductees
Women's halls of fame
|
7327254
|
https://en.wikipedia.org/wiki/Ralph%20Glaze
|
Ralph Glaze
|
Daniel Ralph Glaze (March 13, 1881 – October 31, 1968) was an American sportsman and coach who played as a right-handed pitcher in Major League Baseball, and later became a football and baseball coach and administrator at several colleges.
Early life and playing career
Glaze was born in Denver, Colorado, and was recruited by Dartmouth College after displaying his skill in two sports. He played football at the University of Colorado in the 1901 season under coach Fred Folsom, a Dartmouth alumnus who became that school's coach in 1903. Glaze enrolled at Dartmouth in 1902, being followed there by his younger brother, John. Under Folsom, he played a notable role in the school's first-ever football victory over Harvard in 1903, a game in which Harvard dedicated its new stadium. In 1905, Glaze was named an All-American as an end by Walter Camp, even though at 5'8" and 153 pounds he was the smallest player on Dartmouth's team that year. Glaze also played baseball at Dartmouth, and pitched a no-hitter against Columbia.
During summers, Glaze played semi-pro ball in Colorado, using the assumed name "Ralph Pearce" to protect his college eligibility. Among the Colorado teams Glaze played for was the "Big Six" team in Trinidad, where he pitched in 1905. In 1905 he met an opposing catcher named John Tortes, a Native American, and encouraged him to apply to Dartmouth due to the school's charter making specific provisions for the education of Native Americans. As Tortes had dropped out of school, several Dartmouth alumni conspired to create a false background for him, and he enrolled until the ruse was discovered some time after his first semester. Nonetheless, the catcher attracted notice from various baseball figures, and he went on to a 9-year major league career from 1909 to 1917 under the name Chief Meyers; he maintained a strong affinity to Dartmouth, and credited Glaze with his start in the sport.
After graduating in 1906, Glaze signed with the Boston Americans, as the press referred to them in 1906. The team would later be known as the Boston Red Sox. Over three years, Glaze posted a record of 15 wins against 21 losses, with 137 strikeouts and a 2.89 earned run average in 61 games and 340 innings pitched. A career highlight took place on August 31 of his rookie year, when he outdueled Philadelphia Athletics star pitcher Rube Waddell. Glaze began coaching in the offseasons, starting as a 1906 football assistant at Dartmouth; he also helped coach their baseball team in 1908. He left the Red Sox following the 1908 season, and spent the next several years with a number of minor league teams.
Coaching career
In 1910, Glaze became the football coach at Baylor University. His teams had a record of 12–10–3 from 1910 through 1912, including a 6–1–1 mark in his first year. Glaze became the head coach of the University of Southern California's football team for the 1914 and 1915 seasons, compiling a 7–7 record. He was the first coach after USC's teams began to be known as the Trojans. Before his arrival, USC had not played football for the previous three seasons; like many universities at the time, the school had switched to rugby and did not field football teams during the 1911 through 1913 seasons. After competing primarily against southern California teams throughout its history, USC was now beginning to include major colleges from other areas on its schedule. The 1914 season finale at Oregon State was the first against a major college opponent since a 1905 loss at Stanford, and was also USC's first game ever outside of California. The highlight of Glaze's brief tenure occurred the following year with the inauguration of the long-standing series with California. At the time, Cal was considered the traditionally dominant team of West Coast football, and Glaze managed to lead USC to a 28–10 road victory before falling to Cal, 23–21, at home later the same season; however, it was Cal's first year resuming football after having switched to rugby for the previous nine seasons.
Glaze was succeeded in 1916 by Dean Cromwell, who was USC's football coach before the switch to rugby. Glaze also coached the Trojans baseball team, represented by the university's law school, in the 1915 season to a 5–10 record, and guided the USC track team the same spring. He also coached the USC basketball team in 1915–16, with a record of 8–21 against exclusively southern California competition.
Glaze became football coach at Drake University in 1916, with a record of 3–10–2, and then became football coach at Colorado State Teachers College (now the University of Northern Colorado) in 1917–18 as the school resumed football after 11 years, with a record of 2–6. He coached football at the Colorado School of Mines in 1919–20, with a record of 0–10–2. From 1921 to 1924 he coached at Lake Forest College, leading the football team to a 10–12–3 record from 1921 to 1923, and the basketball team to an 11–32 mark from 1921 to 1924.
During his career, Glaze also coached at the University of Rochester, Texas Christian University and St. Viator College.
Marriage, later life, and death
Glaze married Evaline Leavitt in 1907; she died in 1927, the year he retired from coaching to go into business in Denver. In 1930, he became superintendent of the Boston and Maine Railroad's terminal in Charlestown, Massachusetts, and he married Winifred Bonar Demuth the same year. In 1946 the couple retired to California, moving to Cambria, California in 1951. In his later years, Glaze struck up a friendship with former American League outfielder Sam Crawford, who had a cottage several miles away; coincidentally, Crawford had been one of Glaze's successors as USC's baseball coach. Glaze stayed fit, walking three to five miles daily with his dogs when he was in his 80s. He died at age 87 in Atascadero, California.
Head coaching record
College football
References
External links
1881 births
1968 deaths
Baylor Bears football coaches
Colorado Mines Orediggers football coaches
Drake Bulldogs football coaches
Lake Forest Foresters football coaches
Northern Colorado Bears football coaches
Rochester Yellowjackets football coaches
St. Viator Irish football coaches
USC Trojans football coaches
Major League Baseball pitchers
Boston Red Sox players
Baseball players from Denver
Beaumont Oilers players
Indianapolis Indians players
Providence Grays (minor league) players
Topeka Jayhawks players
Minor league baseball managers
College men's basketball head coaches in the United States
Baylor Bears men's basketball coaches
Colorado Mines Orediggers men's basketball coaches
Drake Bulldogs men's basketball coaches
Lake Forest Foresters men's basketball coaches
Northern Colorado Bears men's basketball coaches
USC Trojans men's basketball coaches
American men's basketball coaches
Basketball coaches from Colorado
Baylor Bears baseball coaches
Colorado Mines Orediggers baseball coaches
USC Trojans baseball coaches
USC Trojans track and field coaches
American football ends
Dartmouth Big Green football players
Colorado Buffaloes football players
All-American college football players
Players of American football from Denver
Dartmouth Big Green baseball players
Sportspeople from Denver
Louisville Coal Miners players
|
18993948
|
https://en.wikipedia.org/wiki/BWPing
|
BWPing
|
BWPing is a tool to measure bandwidth and response times between two hosts using the Internet Control Message Protocol (ICMP) echo request/echo reply mechanism. It does not require any special software on the remote host; the only requirement is the ability to respond to ICMP echo request messages. BWPing supports both IPv4 and IPv6 networks.
Command syntax
bwping [ -4 | -6 ] [ -B bind_addr ] [ -I ident ] [ -T tos(v4) | traf_class(v6) ] [ -r reporting_period ] [ -u buf_size ] -b kbps -s pkt_size -v volume target
bwping6 [ -4 | -6 ] [ -B bind_addr ] [ -I ident ] [ -T tos(v4) | traf_class(v6) ] [ -r reporting_period ] [ -u buf_size ] -b kbps -s pkt_size -v volume target
Available options are:
-4 - Forces IPv4 mode. Default mode of operation is IPv4 for bwping and IPv6 for bwping6 otherwise.
-6 - Forces IPv6 mode. Default mode of operation is IPv4 for bwping and IPv6 for bwping6 otherwise.
-B - Sets the source address of outgoing IP packets. By default the address of the outgoing interface will be used.
-I - Sets the Identifier value of outgoing ICMP Echo Request packets. If zero, the value of the lower 16 bits of the process ID will be used (default).
-T - Sets the TOS value of outgoing IPv4 packets or IPv6 Traffic Class value of outgoing IPv6 packets. Default value is zero.
-r - Sets the interval time in seconds between periodic bandwidth, RTT, and loss reports. If zero, there will be no periodic reports (default).
-u - Sets the size of the socket send/receive buffer in bytes. If zero (default), the system default will be used. Tune this parameter if the speed measurement results are unexpectedly low or packet loss occurs.
-b - Sets the transfer speed in kilobits per second.
-s - Sets the size of ICMP packet (excluding IPv4/IPv6 header) in bytes.
-v - Sets the volume to transfer in bytes.
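For example, a hypothetical invocation (the target address and parameter values below are illustrative, not taken from the BWPing documentation) that tests a host at 1000 kbit/s with 1500-byte ICMP packets, transfers about 10 MB in total and prints a report every 5 seconds could look like this:
bwping -b 1000 -s 1500 -v 10485760 -r 5 192.0.2.1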
License
This utility is available under BSD License.
Notes
Although BWPing does not require any special software on the remote host (only the ability to respond to ICMP echo request messages), there are some special requirements for the network infrastructure and for local and remote host performance:
There should be no ICMP echo request/reply filtering on the network; this includes quality of service (QoS) mechanisms (which often affect ICMP) at any point in the testing path.
The local host should have enough CPU resources to send ICMP echo request messages at the given rate, and the remote host should respond to these messages quickly and should have no ICMP bandwidth limiting turned on.
Each bwping and bwping6 process should use its own ICMP Echo Request Identifier value to reliably distinguish between ICMP Echo Reply packets destined for each of these processes.
If some of these requirements are not satisfied then the measurement results will be inadequate or fail completely. In general, for testing bandwidth where QoS is implemented, always test with traffic that matches the QoS class to be tested.
See also
iperf: A tool for TCP/UDP bandwidth measurement.
ttcp: Another tool for network bandwidth measurement.
References
External links
BWPing website
Computer network analysis
Network performance
Software using the BSD license
|
2570455
|
https://en.wikipedia.org/wiki/F5%2C%20Inc.
|
F5, Inc.
|
F5, Inc. is an American technology company specializing in application security, multi-cloud management, online fraud prevention, application delivery networking (ADN), application availability & performance, network security, and access & authorization.
F5 is headquartered in Seattle, Washington in F5 Tower, with an additional 75 offices in 43 countries focusing on sales, support, development, manufacturing, and administrative jobs. Notable office locations include Spokane, Washington; New York, New York; Boulder, Colorado; London, England; San Jose, California; and San Francisco, California.
F5 originally offered application delivery controller (ADC) technology, but has expanded into application layer, automation, multi-cloud, and security services. As ransomware, data leaks, DDoS, and other attacks on businesses of all sizes have grown, companies such as F5 have continued to reinvent themselves. While the majority of F5's revenue continues to be attributed to hardware products such as the BIG-IP iSeries systems, the company has begun to offer additional modules on its proprietary operating system, TMOS (Traffic Management Operating System). These modules are listed below and include, but are not limited to, Local Traffic Manager (LTM), Advanced Web Application Firewall (AWAF), DNS (previously named GTM), and Access Policy Manager (APM). These give organizations running the BIG-IP the ability to deploy load balancing, Layer 7 application firewalls, single sign-on (for Azure AD, Active Directory, LDAP, and Okta), as well as enterprise-level VPNs. While the BIG-IP was traditionally a hardware product, F5 now offers it as a virtual machine, branded the BIG-IP Virtual Edition. The BIG-IP Virtual Edition is cloud agnostic and can be deployed on-premises or in a public and/or hybrid cloud environment.
F5's customers include Microsoft, Oracle, Alaska Airlines, Tesla, and Meta.
Corporate history
F5, Inc., originally named "F5 Labs" and formerly branded "F5 Networks, Inc." was established in 1996. Currently, the company's public facing branding generally presents the company as just "F5."
In 1997, F5 launched its first product, a load balancer called BIG-IP. BIG-IP served the purpose of reallocating server traffic away from overloaded servers. In June 1999, the company had its initial public offering and was listed on the NASDAQ stock exchange with symbol FFIV.
In 2017, François Locoh-Donou replaced John McAdam as president and CEO. Later in 2017, F5 launched a dedicated site and organization focused on gathering global threat intelligence data, analyzing application threats, and publishing related findings, dubbed “F5 Labs” in a nod to the company's history. The team continues to research application threats and publish findings every week. On May 3, 2017, F5 announced that it would move from its longtime headquarters on the waterfront near Seattle Center to a downtown Seattle skyscraper that will be called F5 Tower. The move occurred in early 2019.
F5 employees include Igor Sysoev, the author of NGINX; Dahl-Nygaard laureate Gilad Bracha; Google click fraud czar Shuman Ghosemajumder; and Defense.Net founder Barrett Lyon.
48 of the Fortune 50 companies use F5 for load balancing, Layer 7 application security, fraud prevention, and API management.
Product Offerings
F5 BIG-IP
F5's BIG-IP product family comprises hardware, modularized software, and virtual appliances that run the F5 TMOS operating system. Depending on the appliance selected, one or more BIG-IP product modules can be added.
Offerings include:
BIG-IP Local Traffic Manager (LTM): Local load balancing with caching, compression and TCP acceleration, based on a full-proxy architecture.
BIG-IP DNS: An intelligent global site load balancing (GSLB) and authoritative DNS server. Distributes DNS and application requests based on user, network, and cloud performance conditions.
BIG-IP Advanced Firewall Manager (AFM): On-premises DDoS protection and data center firewall.
BIG-IP Access Policy Manager (APM): Provides access control and authentication for HTTP and HTTPS applications.
Advanced WAF: An advanced web application firewall with cutting-edge technology.
Container Ingress Service (CIS): Provides automation, orchestration, and networking services for container deployments.
IP Intelligence (IPI): Blocking known bad IP addresses, prevention of phishing attacks and botnets.
BIG-IQ: a framework for managing BIG-IP devices and application services, irrespective of their form factors (hardware, software or cloud) or deployment model (on-premises, private/public cloud or hybrid). BIG-IQ supports integration with other ecosystem participants such as public cloud providers, and orchestration engines through cloud connectors and through a set of open RESTful APIs. BIG-IQ uses a multi-tenant approach to management. This allows organizations to move closer to IT as a Service without concern that it might affect the stability or security of the services fabric.
BIG-IP History
On September 7, 2004, F5 Networks released version 9.0 of the BIG-IP software in addition to appliances to run the software. Version 9.0 also marked the introduction of the company's TMOS architecture, with enhancements including:
Moved from BSD to Linux to handle system management functions (disks, logging, bootup, console access, etc.)
Creation of a Traffic Management Microkernel (TMM) to directly talk to the networking hardware and handle all network activities.
Creation of the standard full-proxy mode, which fully terminates network connections at the BIG-IP and establishes new connections between the BIG-IP and the member servers in a pool. This allows for optimum TCP stacks on both sides as well as the complete ability to modify traffic in either direction.
In late 2021, F5 introduced the next generation of its BIG-IP hardware platforms, the rSeries and the VELOS chassis platform. These next-generation systems will replace the previous-generation iSeries and VIPRION chassis systems.
F5 NGINX
As a part of the NGINX, Inc. acquisition in 2019, F5 offers a premium, enterprise-level version of NGINX with advanced features, multiple support SLAs, and regular software updates. Hourly and annual subscription options are available with multiple levels of support, professional services, and training.
F5 Distributed Cloud Services
During F5 Agility 2022, F5 announced a new product offering being built on the platforms of BIG-IP, Shape Security, and Volterra. The first new product available to the market will be the SaaS-based Web Application and API Protection (WAAP) solution. F5 Distributed Cloud Services are SaaS-based security, networking, and application management services that enable customers to deploy, secure, and operate their applications in a cloud-native environment wherever needed–data center, multi-cloud, or the network or enterprise edge.
Acquisitions
NGINX, Inc.
In March 2019, F5 acquired NGINX, Inc., the company responsible for widely used open-source web server software, for $670 million.
Shape Security, Inc.
In January 2020, F5 acquired Shape Security, Inc., an artificial intelligence-based bot detection company, for $1 billion. It also sells products to protect applications against fraud.
Volterra, Inc.
In January 2021, F5 acquired Volterra, Inc., an edge networking company, for $500 million. It sells SaaS security services.
Threat Stack, Inc.
In October 2021, F5 acquired Threat Stack, Inc., a Boston cloud computing security startup company for a reported $68 million.
References
External links
1999 initial public offerings
American companies established in 1996
Software companies established in 1996
Companies listed on the Nasdaq
Computer security companies
DDoS mitigation companies
Deep packet inspection
Networking companies of the United States
Networking hardware companies
Networking software companies
Software companies based in Seattle
1996 establishments in Washington (state)
Software companies of the United States
|
46591042
|
https://en.wikipedia.org/wiki/Bup
|
Bup
|
Bup is a backup system written in Python. It uses several formats from Git but is capable of handling very large files such as operating system images. It has block-based deduplication and optional par2-based error correction.
History
Bup development began in 2010 and was accepted to Debian the same year.
Design
Bup uses the git packfile format, writing packfiles directly and thereby avoiding the need for garbage collection.
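The command-line workflow loosely mirrors git: a repository is initialized, files are indexed, and the index is then saved into a named branch. A minimal sketch follows; the paths and the branch name are illustrative, not taken from the bup documentation.
bup init                                      # create the default repository (~/.bup)
bup index /home/user/documents                # scan the files to back up
bup save -n documents /home/user/documents    # write the deduplicated backup to branch "documents"
bup ls documents                              # list the saved snapshots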
Availability
Bup is available from source and notably part of the following distributions
Debian
Ubuntu
Arch Linux
pkgsrc (NetBSD etc.)
See also
List of backup software
Comparison of backup software
References
External links
bup website on GitHub
2010 software
Free backup software
Backup software for Linux
Python (programming language) software
|
13454849
|
https://en.wikipedia.org/wiki/MUMmer
|
MUMmer
|
MUMmer is a bioinformatics software system for sequence alignment. It is based on the suffix tree data structure and is one of the fastest and most efficient systems available for this task, enabling it to be applied to very long sequences. It has been widely used for comparing different genomes to one another. In recent years it has become a popular algorithm for comparing genome assemblies to one another, which allows scientists to determine how a genome has changed after adding more DNA sequence or after running a different genome assembly program. The acronym "MUMmer" comes from "Maximal Unique Matches", or MUMs. The original algorithms in the MUMmer software package were designed by Art Delcher, Simon Kasif and Steven Salzberg. MUMmer was the first whole-genome comparison system developed in bioinformatics. It was originally applied to the comparison of two related strains of bacteria.
The MUMmer software is open source and can be found at the MUMmer home page, which also has links to technical papers describing the system. The system is maintained primarily by Steven Salzberg and Arthur Delcher at the Center for Computational Biology at Johns Hopkins University.
MUMmer is a highly cited bioinformatics system in the scientific literature. According to Google Scholar, as of early 2013 the original MUMmer paper (Delcher et al., 1999) has been cited 691 times; the MUMmer 2 paper (Delcher et al., 2002) has been cited 455 times; and the MUMmer 3.0 article (Kurtz et al., 2004) has been cited 903 times.
Overview
What is MUMmer?
Have you ever wondered how you are related to a monkey? If so, there are algorithms that can help answer that question. If you have taken an algorithms class, edit distance (the Needleman–Wunsch algorithm) might come to mind, and it can be used for this purpose. However, edit distance is designed to compare small sequences; it would take far too long to compute whole-genome alignments.
A genome is a very long sequence of genetic information about an organism (in its chromosomes). Now imagine comparing a 4 Mb sequence such as that of M. tuberculosis to another 4 Mb sequence: many algorithms either run out of memory or take too long to complete (Delcher et al., 1999). Here 4 Mb refers to roughly four million bases of sequence, not to memory. Researchers have therefore developed specialized algorithms for comparing genome sequences, and MUMmer is a fast system used mostly for the rapid alignment of entire genomes.
MUMmer, as mentioned before, is a system for the efficient alignment of very large sequences, and it has been used to make discoveries about genome structure. It is carefully engineered and can compare whole genomes within seconds. The software is relatively new and has gone through four major versions, each improving on the previous one.
Versions of MUMmers
MUMmer1
MUMmer1, or simply MUMmer, consists of three parts: the first builds suffix trees to find the MUMs, the second orders the MUMs using the longest increasing subsequence (or longest common subsequence), and the third runs additional alignments to close the gaps.
To start the process, the algorithm takes two genomes as input and searches for their maximal unique matches (MUMs). Finding MUMs can be done in several ways. For instance, a naive algorithm compares every position of one genome with every position of the other, which takes O(m*m2) time, where m and m2 are the lengths of the two genomes. Because MUMmer has to be fast, it instead uses the suffix tree data structure, which takes O(m+m2) time. From this tree the MUMs are extracted; they are the substrings, each represented by an internal node, that occur exactly once in each sequence.
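To make the definition concrete, the following sketch finds MUMs in the naive O(m*m2) way just described; it is only an illustration of the concept (MUMmer itself uses a suffix tree), and the function name and the minimum-length parameter are invented for this example.
def maximal_unique_matches(a, b, min_len=3):
    # Naive illustration of MUMs; MUMmer uses a suffix tree instead.
    mums = []
    for i in range(len(a)):
        for j in range(len(b)):
            # Extend the common substring starting at positions (i, j).
            k = 0
            while i + k < len(a) and j + k < len(b) and a[i + k] == b[j + k]:
                k += 1
            if k < min_len:
                continue
            # Maximal: the match cannot be extended to the left.
            if i > 0 and j > 0 and a[i - 1] == b[j - 1]:
                continue
            sub = a[i:i + k]
            # Unique: the matched substring occurs exactly once in each genome.
            if a.count(sub) == 1 and b.count(sub) == 1:
                mums.append((i, j, k))
    return mums
For example, maximal_unique_matches("acgatacc", "ttcgatacg") reports the single shared substring "cgatac" as the triple (1, 2, 6): its start position in each sequence and its length.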
Once the MUMs are identified, they are sorted by their position in one genome and filtered according to their position in the other. This can be done by numbering the MUMs in ascending order along the first genome and labeling each MUM in the second genome with the number of its matching partner, so that each matching pair shares the same label regardless of position. The algorithm then selects the largest subset of MUMs that appear in the same relative order in both genomes, which can be computed with a longest common subsequence or longest increasing subsequence (LIS) method; MUMs that would place a larger label before a smaller one are discarded for now. The best time complexity for this step is O(n log n).
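The following minimal sketch shows this ordering step on MUMs represented as (position in genome A, position in genome B) pairs; it illustrates the idea only and is not MUMmer's actual code.
import bisect

def order_mums(mum_pairs):
    # Keep the largest set of MUMs that occur in the same relative order in
    # both genomes: sort by the position in genome A, then take the longest
    # increasing subsequence of the positions in genome B (O(n log n)).
    pairs = sorted(mum_pairs)
    tails, tail_idx, prev = [], [], []
    for idx, (_, pos_b) in enumerate(pairs):
        k = bisect.bisect_left(tails, pos_b)
        if k == len(tails):
            tails.append(pos_b)
            tail_idx.append(idx)
        else:
            tails[k] = pos_b
            tail_idx[k] = idx
        prev.append(tail_idx[k - 1] if k > 0 else -1)
    chain, i = [], tail_idx[-1] if tail_idx else -1
    while i != -1:                      # walk the parent links backwards
        chain.append(pairs[i])
        i = prev[i]
    return chain[::-1]
For example, order_mums([(0, 5), (10, 20), (20, 8), (30, 12)]) returns [(0, 5), (20, 8), (30, 12)] and discards the out-of-order MUM at (10, 20).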
Finally, the algorithm deals with the interruptions between the aligned MUMs, known as gaps, and employs other alignment algorithms to fill them. Delcher and his team (Delcher et al., 1999) note that the gaps fall into the following four classes:
An SNP interruption – when the two sequences are compared, a single character differs, while the characters before and after it are identical.
An insertion – a subsequence that appears in only one of the sequences, leaving an empty gap in the other when the two are compared.
A highly polymorphic region – a region in which every single character differs between the two sequences.
A repeat – the repetition of a sequence. Because MUMs must be unique, such a gap can be an extra copy of one of the MUMs.
MUMmer 2
This algorithm was redesigned to require less memory; it runs faster and is more accurate than the first version, and it allows larger genomes to be aligned.
The main improvement was a reduction in the amount of data stored in the suffix tree, achieved by employing the compact suffix tree representation created by Kurtz. Only one of the sequences is inserted into the tree; the other is streamed against it, its characters compared to the tree rather than added to it. This reduces the space requirement, since the tree stores only one sequence.
Finally, the first MUMmer algorithm only aligned the two sequences and then moved on to the next step. To achieve better coverage, MUMmer 2 groups the MUMs into clusters, that is, groups of MUMs separated by only small gaps.
MUMmer 3
According to Stefan Kurtz and his colleagues, "the most significant technical improvement in MUMmer 3.0 is a complete rewrite of the suffix-tree code, based on the compact suffix-tree representation of" the tree described in the article "Reducing the space requirement of suffix trees".
Another improvement was the relaxation of the MUM definition. The user now has the option of finding all maximal matches whether or not they are unique, all matches that are unique only in a chosen sequence, or the original MUMs (which are unique in both sequences). This was added to avoid missing matches that are copies of MUMs.
MUMmer 4
According to Marcais and his team, MUMmer4 adds further improvements to the implementation as well as query parallelism. The first, and biggest, improvement is the increase of the size limit on the sequences being aligned. Finally, "MUMmer4 now includes options to save and load the suffix array for a given reference" (Marcais et al., 2018). Thanks to this, the index can be built once and simply reloaded on later runs.
Software - Open Source
MUMmer is open-source software. The best way to start experimenting with the package is to visit its website, linked below.
MUMmer open source
The page covers the open-source licence, installation requirements and steps, how to run the package, and examples of aligning sequences. It also lists contacts in case you get stuck, making it a valuable starting point for working with the algorithm.
Related Topics
There are other types of sequence alignments, some of the related topics are shown below:
Edit distance
BLAST
Bowtie
BWA
Blat
Mauve
LASTZ
References
External links
MUMmer home page
MUMmer2
Book
MUMmer
Software
MUMmer3
MUMmer1
MUMmer4
Bioinformatics software
|
970031
|
https://en.wikipedia.org/wiki/Byzantine%20fault
|
Byzantine fault
|
A Byzantine fault (also Byzantine generals problem, interactive consistency, source congruency, error avalanche, Byzantine agreement problem, and Byzantine failure) is a condition of a computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. The term takes its name from an allegory, the "Byzantine generals problem", developed to describe a situation in which, in order to avoid catastrophic failure of the system, the system's actors must agree on a concerted strategy, but some of these actors are unreliable.
In a Byzantine fault, a component such as a server can inconsistently appear both failed and functioning to failure-detection systems, presenting different symptoms to different observers. It is difficult for the other components to declare it failed and shut it out of the network, because they need to first reach a consensus regarding which component has failed in the first place.
Byzantine fault tolerance (BFT) is the dependability of a fault-tolerant computer system to such conditions. It has applications especially in cryptocurrency.
Analogy
In its simplest form, a number of generals are attacking a fortress and must decide as a group only whether to attack or retreat. Some generals may prefer to attack, while others prefer to retreat. The important thing is that all generals agree on a common decision, for a halfhearted attack by a few generals would become a rout, and would be worse than either a coordinated attack or a coordinated retreat.
The problem is complicated by the presence of treacherous generals who may not only cast a vote for a suboptimal strategy, they may do so selectively. For instance, if nine generals are voting, four of whom support attacking while four others are in favor of retreat, the ninth general may send a vote of retreat to those generals in favor of retreat, and a vote of attack to the rest. Those who received a retreat vote from the ninth general will retreat, while the rest will attack (which may not go well for the attackers). The problem is complicated further by the generals being physically separated and having to send their votes via messengers who may fail to deliver votes or may forge false votes.
Resolution
Byzantine fault tolerance can be achieved if the loyal (non-faulty) generals have a majority agreement on their strategy. There can be a default vote value given to missing messages. For example, missing messages can be given a "null" value. Further, if the agreement is that the null votes are in the majority, a pre-assigned default strategy can be used (e.g. retreat).
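A toy Python sketch of this tallying rule (an illustration only, not a complete Byzantine agreement protocol): each general counts the orders it received, treats missing messages (None) as null votes, and falls back to a pre-assigned default when null votes come out on top:

from collections import Counter

def decide(received, default="retreat"):
    # Replace missing messages with an explicit "null" vote, then tally.
    votes = ["null" if v is None else v for v in received]
    winner, _count = Counter(votes).most_common(1)[0]
    # If the null votes come out on top, fall back to the default strategy.
    return default if winner == "null" else winner

print(decide(["attack", "attack", None, "retreat"]))   # -> "attack"
print(decide([None, None, "attack"]))                  # -> "retreat" (null votes dominate)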
The typical mapping of this story onto computer systems is that the computers are the generals and their digital communication system links are the messengers. Although the problem is formulated in the analogy as a decision-making and security problem, in electronics, it cannot be solved simply by cryptographic digital signatures, because failures such as incorrect voltages can propagate through the encryption process. Thus, a component may appear functioning to one component and faulty to another, which prevents forming a consensus as to whether the component is faulty or not.
Characteristics
A Byzantine fault is any fault presenting different symptoms to different observers. A Byzantine failure is the loss of a system service due to a Byzantine fault in systems that require consensus.
The objective of Byzantine fault tolerance is to be able to defend against failures of system components with or without symptoms that prevent other components of the system from reaching an agreement among themselves, where such an agreement is needed for the correct operation of the system.
The remaining operationally correct components of a Byzantine fault tolerant system will be able to continue providing the system's service as originally intended, assuming there are a sufficient number of accurately-operating components to maintain the service.
Byzantine failures are considered the most general and most difficult class of failures among the failure modes. The so-called fail-stop failure mode occupies the simplest end of the spectrum. Whereas fail-stop failure mode simply means that the only way to fail is a node crash, detected by other nodes, Byzantine failures imply no restrictions, which means that the failed node can generate arbitrary data, including data that makes it appear like a functioning node. Thus, Byzantine failures can confuse failure detection systems, which makes fault tolerance difficult. Despite the analogy, a Byzantine failure is not necessarily a security problem involving hostile human interference: it can arise purely from electrical or software faults.
The terms fault and failure are used here according to the standard definitions originally created by a joint committee on "Fundamental Concepts and Terminology" formed by the IEEE Computer Society's Technical Committee on Dependable Computing and Fault-Tolerance and IFIP Working Group 10.4 on Dependable Computing and Fault Tolerance. See also dependability.
Caveat
Byzantine fault tolerance is only concerned with broadcast consistency, that is, the property that when one component broadcasts a single consistent value to other components (i.e., sends the same value to the other components), they all receive exactly the same value, or in the case that the broadcaster is not consistent, the other components agree on a common value. This kind of fault tolerance does not encompass the correctness of the value itself; for example, an adversarial component that deliberately sends an incorrect value, but sends that same value consistently to all components, will not be caught in the Byzantine fault tolerance scheme.
Formal definition
Setting:
Given a system of n components, t of which are dishonest, and assuming only point-to-point channels between all the components.
Whenever a component P tries to broadcast a value x, the other components are allowed to discuss with each other and verify the consistency of P's broadcast, and eventually settle on a common value y.
Property: The system is said to resist Byzantine faults if a component P can broadcast a value x, and then:
If P is honest, then all honest components agree on the value x.
In any case, all honest components agree on the same value y.
Variants: The problem has been studied in the case of both synchronous and asynchronous communications.
The communication graph above is assumed to be the complete graph (i.e. each component can discuss with every other), but the communication graph can be restricted.
It can also be relaxed in a more "realistic" problem where the faulty components do not collude together in an attempt to lure the others into error. It is in this setting that practical algorithms have been devised.
History
The problem of obtaining Byzantine consensus was conceived and formalized by Robert Shostak, who dubbed it the interactive consistency problem. This work was done in 1978 in the context of the NASA-sponsored SIFT project in the Computer Science Lab at SRI International. SIFT (for Software Implemented Fault Tolerance) was the brain child of John Wensley, and was based on the idea of using multiple general-purpose computers that would communicate through pairwise messaging in order to reach a consensus, even if some of the computers were faulty.
At the beginning of the project, it was not clear how many computers in total were needed to guarantee that a conspiracy of n faulty computers could not "thwart" the efforts of the correctly-operating ones to reach consensus. Shostak showed that a minimum of 3n+1 are needed, and devised a two-round 3n+1 messaging protocol that would work for n=1. His colleague Marshall Pease generalized the algorithm for any n > 0, proving that 3n+1 is both necessary and sufficient. These results, together with a later proof by Leslie Lamport of the sufficiency of 3n using digital signatures, were published in the seminal paper, Reaching Agreement in the Presence of Faults. The authors were awarded the 2005 Edsger W. Dijkstra Prize for this paper.
To make the interactive consistency problem easier to understand, Lamport devised a colorful allegory in which a group of army generals formulate a plan for attacking a city. In its original version, the story cast the generals as commanders of the Albanian army. The name was changed, eventually settling on "Byzantine", at the suggestion of Jack Goldberg to future-proof any potential offense giving. This formulation of the problem, together with some additional results, were presented by the same authors in their 1982 paper, "The Byzantine Generals Problem".
Examples
Several examples of Byzantine failures that have occurred are given in two equivalent journal papers. These and other examples are described on the NASA DASHlink web pages.
Byzantine errors were observed infrequently and at irregular points during endurance testing for the newly constructed Virginia class submarines, at least through 2005 (when the issues were publicly reported).
Early solutions
Several solutions were described by Lamport, Shostak, and Pease in 1982. They began by noting that the Generals' Problem can be reduced to solving a "Commander and Lieutenants" problem where loyal Lieutenants must all act in unison and that their action must correspond to what the Commander ordered in the case that the Commander is loyal:
One solution considers scenarios in which messages may be forged, but which will be Byzantine-fault-tolerant as long as the number of disloyal generals is less than one third of the generals. The impossibility of dealing with one-third or more traitors ultimately reduces to proving that the one Commander and two Lieutenants problem cannot be solved, if the Commander is traitorous. To see this, suppose we have a traitorous Commander A, and two Lieutenants, B and C: when A tells B to attack and C to retreat, and B and C send messages to each other, forwarding A's message, neither B nor C can figure out who is the traitor, since it is not necessarily A—the other Lieutenant could have forged the message purportedly from A. It can be shown that if n is the number of generals in total, and t is the number of traitors in that n, then there are solutions to the problem only when n > 3t and the communication is synchronous (bounded delay).
A second solution requires unforgeable message signatures. For security-critical systems, digital signatures (in modern computer systems, this may be achieved in practice using public-key cryptography) can provide Byzantine fault tolerance in the presence of an arbitrary number of traitorous generals. However, for safety-critical systems (where "security" addresses intelligent threats while "safety" addresses the inherent dangers of an activity or mission), simple error detecting codes, such as CRCs, provide weaker but often sufficient coverage at a much lower cost. This is true for both Byzantine and non-Byzantine faults. Furthermore, sometimes security measures weaken safety and vice versa. Thus, cryptographic digital signature methods are not a good choice for safety-critical systems, unless there is also a specific security threat as well. While error detecting codes, such as CRCs, are better than cryptographic techniques, neither provide adequate coverage for active electronics in safety-critical systems. This is illustrated by the Schrödinger CRC scenario where a CRC-protected message with a single Byzantine faulty bit presents different data to different observers and each observer sees a valid CRC.
Also presented is a variation on the first two solutions allowing Byzantine-fault-tolerant behavior in some situations where not all generals can communicate directly with each other.
Several system architectures were designed c. 1980 that implemented Byzantine fault tolerance. These include: Draper's FTMP, Honeywell's MMFCS, and SRI's SIFT.
Advanced solutions
In 1999, Miguel Castro and Barbara Liskov introduced the "Practical Byzantine Fault Tolerance" (PBFT) algorithm, which provides high-performance Byzantine state machine replication, processing thousands of requests per second with sub-millisecond increases in latency.
After PBFT, several BFT protocols were introduced to improve its robustness and performance. For instance, Q/U, HQ, Zyzzyva, and ABsTRACTs, addressed the performance and cost issues; whereas other protocols, like Aardvark and RBFT, addressed its robustness issues. Furthermore, Adapt tried to make use of existing BFT protocols, through switching between them in an adaptive way, to improve system robustness and performance as the underlying conditions change. Furthermore, BFT protocols were introduced that leverage trusted components to reduce the number of replicas, e.g., A2M-PBFT-EA and MinBFT.
Motivated by PBFT, Tendermint BFT was introduced for partial asynchronous networks and it is mainly used for Proof of Stake blockchains.
BFT implementations
One example of BFT in use is bitcoin, a peer-to-peer digital cash system. The bitcoin network works in parallel to generate a blockchain with proof-of-work allowing the system to overcome Byzantine failures and reach a coherent global view of the system's state.
Some aircraft systems, such as the Boeing 777 Aircraft Information Management System (via its ARINC 659 SAFEbus network),
the Boeing 777 flight control system, and the Boeing 787 flight control systems use Byzantine fault tolerance; because these are real-time systems, their Byzantine fault tolerance solutions must have very low latency. For example, SAFEbus can achieve Byzantine fault tolerance within the order of a microsecond of added latency. The SpaceX Dragon considers Byzantine fault tolerance in its design.
Byzantine fault tolerance mechanisms use components that repeat an incoming message (or just its signature) to other recipients of that incoming message. All these mechanisms make the assumption that the act of repeating a message blocks the propagation of Byzantine symptoms. For systems that have a high degree of safety or security criticality, these assumptions must be proven to be true to an acceptable level of fault coverage. When providing proof through testing, one difficulty is creating a sufficiently wide range of signals with Byzantine symptoms. Such testing likely will require specialized fault injectors.
See also
Atomic commit
Brooks–Iyengar algorithm
List of terms relating to algorithms and data structures
Byzantine Paxos
Quantum Byzantine agreement
Two Generals' Problem
References
External links
Byzantine Fault Tolerance in the RKBExplorer
Public-key cryptography
Distributed computing problems
Fault-tolerant computer systems
Theory of computation
|
2323056
|
https://en.wikipedia.org/wiki/Jimmy%20Thackery
|
Jimmy Thackery
|
Jimmy Thackery (born May 19, 1953, Pittsburgh, Pennsylvania, United States) is an American blues singer, guitarist and songwriter.
Career
Thackery spent fourteen years as part of The Nighthawks, the Washington, D.C. based blues and roots rock ensemble. After leaving the Nighthawks in 1986, Thackery toured under his own name.
Born in Pittsburgh and raised in Washington, Thackery co-founded The Nighthawks with Mark Wenner in 1972 and went on to record over twenty albums with them. In 1986 he began touring with The Assassins, a six-piece original blues, rock and R&B ensemble which he had previously helped start as a vacation band when The Nighthawks took one of their rare breaks. Originally billed as Jimmy Thackery and The Assassins, the band toured the U.S. Northeast, Mid-Atlantic, South, and Texas regions. The Assassins released a variety of recordings on the Seymour record label, two on vinyl (No Previous Record and Partners in Crime) and the 1989 CD Cut Me Loose.
In the wake of the Assassins' 1991 break-up, Thackery has led a trio, Jimmy Thackery and the Drivers, whose early recordings were for the San Francisco, California-based Blind Pig Records. In 2002, Thackery released We Got It, his first album on Telarc, and in 2006, In the Natural State with Earl and Ernie Cate on Rykodisc. In 2007, he released Solid Ice, again with the Drivers. His latest album, Spare Keys, was released in 2016.
Discography
1985: Sideways in Paradise (first pressing, Seymour no catalog #) (while still with The Nighthawks, this album with John Mooney)
1992: Empty Arms Motel
1993: Sideways in Paradise (with John Mooney)
1994: Trouble Man
1995: Wild Night Out
1996: Drive To Survive
1996: Partners in Crime (with Tom Principato)
1998: Switching Gears
2000: Sinner Street
2000: That's It! (with David Raitt)
2002: We Got It
2002: Whiskey Store" with Tab Benoit)
2003: Guitar (Instrumental/Compilation)
2003: True Stories
2004: Whiskey Store Live (with Tab Benoit)
2005: Healin' Ground
2006: In the Natural State (with the Cate Brothers)
2007: Solid Ice
2008: Live! 2008
2008: Inside Tracks
2010: Live in Detroit
2011: Feel the Heat
2014: Wide Open
2016: Spare Keys
See also
The Nighthawks
References
External links
Official website
1953 births
Living people
American blues guitarists
American male guitarists
American blues singers
Musicians from Pittsburgh
Singers from Washington, D.C.
Singers from Pennsylvania
Guitarists from Washington, D.C.
Guitarists from Pennsylvania
20th-century American guitarists
20th-century American male musicians
|
125332
|
https://en.wikipedia.org/wiki/NuBus
|
NuBus
|
NuBus (pron. 'New Bus') is a 32-bit parallel computer bus, originally developed at MIT and standardized in 1987 as a part of the NuMachine workstation project. The first complete implementation of the NuBus was done by Western Digital for their NuMachine, and for the Lisp Machines Inc. LMI Lambda. The NuBus was later incorporated in Lisp products by Texas Instruments (Explorer) and used as the main expansion bus by Apple Computer; a variant called NeXTBus was developed by NeXT. It is no longer widely used outside the embedded market.
Architecture
Early microcomputer buses like S-100 were often just connections to the pins of the microprocessor and to the power rails. This meant that a change in the computer's architecture generally led to a new bus as well. Looking to avoid such problems in the future, NuBus was designed to be independent of the processor, its general architecture and any details of its I/O handling.
Among its many advanced features for the era, NuBus used a 32-bit backplane when 8- or 16-bit busses were common. This was seen as making the bus "future-proof", as it was generally believed that 32-bit systems would arrive in the near future while 64-bit buses and beyond would remain impractical and excessive.
In addition, NuBus was agnostic about the processor itself. Most buses up to this point conformed to the signalling and data standards of the machine they were plugged into (being big or little endian for instance). NuBus made no such assumptions, which meant that any NuBus card could be plugged into any NuBus machine, as long as there was an appropriate device driver.
In order to select the proper device driver, NuBus included an ID scheme that allowed the cards to identify themselves to the host computer during startup. This meant that the user didn't have to configure the system, the bane of bus systems up to that point. For instance, with ISA the driver had to be configured not only for the card, but for any memory it required, the interrupts it used, and so on. NuBus required no such configuration, making it one of the first examples of plug-and-play architecture.
On the downside, while this flexibility made NuBus much simpler for the user and device driver authors, it made things more difficult for the designers of the cards themselves. Whereas most "simple" bus systems were easily supported with a handful of input/output chips designed to be used with that CPU in mind, with NuBus every card and computer had to convert everything to a platform-agnostic "NuBus world". Typically this meant adding a NuBus controller chip between the bus and any I/O chips on the card, increasing costs. While this is a trivial exercise today, one that all newer buses require, in the 1980s NuBus was considered needlessly complex and expensive.
Implementations
The NuBus became a standard in 1987 as IEEE 1196. This version used a standard 96-pin three-row connector, running the system on a 10 MHz clock for a maximum burst throughput of 40 MB/s and average speeds of 10 to 20 MB/s. A later addition, NuBus 90, increased the clock rate to 20 MHz for better throughput, burst increasing to about 70 MB/s, and average to about 30 MB/s.
The NuBus was first developed commercially in the Western Digital NuMachine, and first used in a production product by their licensee, Lisp Machines, Inc., in the LMI-Lambda, a Lisp Machine. The project and the development group was sold by Western Digital to Texas Instruments in 1984. The technology was incorporated into their TI Explorer, also a Lisp Machine. In 1986, Texas Instruments used the NuBus in the S1500 multiprocessor UNIX system. Later, both Texas Instruments and Symbolics developed Lisp Machine NuBus boards (the TI MicroExplorer and the Symbolics MacIvory) based on their Lisp supporting microprocessors. These NuBus boards were co-processor Lisp Machines for the Apple Macintosh line (the Mac II and Mac Quadras).
NuBus was also selected by Apple Computer for use in their Macintosh II project, where its plug-n-play nature fit well with the Mac philosophy of ease-of-use. It was used in most of the Macintosh II series that made up the professional-level Mac lineup from the late 1980s. It was upgraded to NuBus 90 starting with the Macintosh Quadras and used into the mid-1990s. Early Quadras only supported the 20 MHz rate when two cards were talking to each other, since the motherboard controller was not upgraded. This was later addressed in the NuBus implementation on the 660AV and 840AV models. This improved NuBus controller was used in the first generation Power Macintosh 6100, 7100 and 8100 models. Later Power Mac models adopted Intel's PCI bus. Apple's NuBus implementation used pin and socket connectors on the back of the card rather than edge connectors with Phillips screws inside the case that most cards use, making it much easier to install cards. Apple's computers also supplied an always-on +5 V "trickle" power supply for tasks such as watching the phone line while the computer was turned off. This was apparently part of an unapproved NuBus standard.
NuBus was also selected by NeXT Computer for their line of machines, but used a different physical PCB layout. NuBus appears to have seen little use outside these roles, and when Apple switched to PCI in the mid 1990s, NuBus quickly disappeared.
See also
Amiga Zorro II (Amiga Autoconfig bus)
Industry Standard Architecture (ISA)
Extended Industry Standard Architecture (EISA)
Micro Channel architecture (MCA)
VESA Local Bus (VESA)
Processor Direct Slot (PDS)
Peripheral Component Interconnect (PCI)
Accelerated Graphics Port (AGP)
PCI Express (PCIe)
List of device bandwidths
References
NuBus specs
External links
Developing for the Macintosh NuBus
Pictures of several NuBus cards at applefritter
Computer buses
Motherboard expansion slot
Macintosh internals
NeXT
Apple Inc. hardware
IEEE standards
|
1879364
|
https://en.wikipedia.org/wiki/Veterans%20Memorial%20Stadium%20%28Troy%20University%29
|
Veterans Memorial Stadium (Troy University)
|
Veterans Memorial Stadium at Larry Blakeney Field is a stadium in Troy, Alabama. It is primarily used for American football, and is the home field of the Troy University Trojans. The seating capacity is 30,420. The stadium was originally built in 1950, and has regularly been expanded, renovated and improved since then. The stadium was named in honor of the college students and local residents who gave their lives during World War II. The field received its name from retired head coach Larry Blakeney, the coach with the most wins in Troy history.
History
Early history
Veterans Memorial Stadium was originally dedicated in 1950 to the Troy State Teachers College students and Pike County residents who had died in World War II. The stadium solely consisted of a small, 5,000-seat grandstand on the west side of the running track, and was built into the natural slope of the ground. It has been expanded or renovated several times over the past few decades.
1998 renovation
In 1998, the stadium underwent a major renovation. A large upper deck was added on the west side of the stadium, increasing capacity from 12,000 seats to 17,500 seats. A new scoreboard with a small video board was also added.
Costs for the 1998 expansion of the stadium were financed in part by a substantial donation from HealthSouth founder Richard M. Scrushy. The playing field (but not the stadium) was renamed for Scrushy, but this became a public relations problem for Troy University when Scrushy was forced out of his position due to alleged financial misdeeds at HealthSouth in 2003; he was later tried for these, but acquitted. (Scrushy was later convicted of other unrelated crimes, along with former Alabama governor Don Siegelman.)
2003 renovation
Renovations were again carried out in 2003, just two seasons after the Trojans made the move to Division I-A (FBS). The old press box area, which had cut into the 1998 upper deck, was filled in with chair-back seats. A much larger, six-story press box/box tower was built behind the newly completed upper deck. The track was removed and the field was lowered, and permanent seating was placed over the old berm area behind the south grandstand. The east grandstand seating was completely demolished and rebuilt, adding a new lower deck and upper deck. As a result of the seat additions and renovations, the stadium's seating capacity expanded to 30,000. The stadium was now a flattened "U" shape. A large-screen end zone replay board was installed in 2003 in the north end zone, along with a state-of-the-art Danley sound system.
The natural grass surface was also removed in 2003, being replaced with AstroPlay synthetic grass. Troy was one of the first schools to feature the synthetic grass on a football field. The AstroPlay surface was then replaced by the ProGrass synthetic turf system in 2012.
Construction costs for the 2003 renovation/expansion were financed in part by the sale of naming rights to the video rental chain Movie Gallery. Because of this, Scrushy's name is no longer on the field. Movie Gallery's name was removed after the company filed for bankruptcy and ceased operations in 2010, at which point the venue reverted to its original name.
2012/2014 renovations
During this time, no major renovations were performed that would affect seating capacity. A few moderate renovations were performed, with the first being that the AstroPlay synthetic grass surface was replaced by the new ProGrass synthetic turf system in 2012.
In 2014, the next renovation added a Daktronics 15HD LED video board in the eastern corner of the south end zone above the lower stands. In addition, new Daktronics Slim-LED video ribbon boards were installed in front of both the east and west upper decks. A new sound system was also installed.
2017 renovation
On November 12, 2016, ground was broken on the $24 million north end zone facility, with completion expected in 2018. The facility adds 402 club-level seats, and the plans call for a first floor that houses a strength and conditioning area and nutrition station; a second-floor locker room, sports medicine facility, team lounge and "cool-down" plunge pool; and a third floor filled with coaches' offices, meeting rooms, video services and a recruiting lounge. A Daktronics 15HD LED video board is also being installed on top of the facility, which will be the largest video board in the Sun Belt Conference, as well as one of the largest video boards among Group of Five programs.
Features
Six-story box/press box tower that houses 27 sky-boxes, a media hosting facility, a Club area that houses more than 1,000 guests, and floors dedicated to sports medicine, academics, strength and conditioning, and media relations.
Three-story north end zone facility with 402 stadium-club seats, locker rooms, strength and conditioning center, athletic training facility, nutrition station, cool-down pools, team lounge, recruiting lounge, football staff offices, and meeting rooms.
ProGrass synthetic grass turf.
Daktronics 15HD LED video board in the north end zone.
Daktronics 15HD LED video board in the south end zone corner.
Two Daktronics Slim-LED ribbon boards on both east and west upper decks.
Tailgate Terrace just outside of the stadium main entrance.
Multiple concession areas serving a variety of foods and drinks, including Domino's Pizza and Chick-fil-A. Troy is also one of fewer than 30 universities that sell alcoholic drinks during football games. The university has an exclusive deal with Anheuser-Busch.
Attendance records
The largest crowd to see a Troy football game in Veterans Memorial Stadium was 29,612 on September 1, 2018, when the Trojans hosted Boise State. Troy lost by a final score of 59-20.
Gameday traditions
Trojan Walk
Before each Troy home football game, hundreds of Troy fans and students line University Avenue on campus to cheer on the team as they march with the Sound of the South band and cheerleaders from the Quad to Tailgate Terrace, surrounded by fans who pat them on the back and shake their hands as they walk toward Veterans Memorial Stadium.
Trojan Fanfare
During the pre-game show at Veterans Memorial Stadium, the Sound of the South will perform what is known as the "Trojan Fanfare." It is a favorite among most fans and energizes the fanbase leading up to kickoff.
"Havoc!"
One of the more popular traditions of gameday: during the pre-game show the band marches onto the field to prepare for the football team to run out of the gates. The band falls silent, and the announcer then recites the phrase from William Shakespeare's Julius Caesar. Fans in the stadium yell out "Havoc!" in unison along with the announcer before the last line of the phrase.
Trojan Warrior
Before every game and after every touchdown, the Trojan Warrior or Trojan Princess would blaze down the football field on a horse named "Big Red." This tradition is no longer used because the football field turf was changed from real grass to artificial grass.
Blue–Gray Football Classic
The stadium hosted the last Blue–Gray Football Classic in 2003 after moving from Cramton Bowl in Montgomery, Alabama, where the game had been played for nearly 62 straight years. The annual college football all-star game was cancelled by the Lions Club of Montgomery, Alabama due to the lack of a title sponsor.
Gallery
See also
List of NCAA Division I FBS football stadiums
References
College football venues
Troy Trojans football
High school football venues in the United States
American football venues in Alabama
Monuments and memorials in Alabama
Sports venues completed in 1950
1950 establishments in Alabama
|
1635860
|
https://en.wikipedia.org/wiki/Webalizer
|
Webalizer
|
The Webalizer is web log analysis software, which generates web pages of analysis, from access and usage logs. It is one of the most commonly used web server administration tools. It was initiated by Bradford L. Barrett in 1997. Statistics commonly reported by Webalizer include hits, visits, referrers, the visitors' countries, and the amount of data downloaded. These statistics can be viewed graphically and presented by different time frames, such as by day, hour, or month.
Overview
Website traffic analysis is produced by grouping and aggregating various data items captured by the web server in the form of log files while the website visitor is browsing the website.
The Webalizer analyzes web server log files, extracting such items as client's IP addresses, URL paths, processing times, user agents, referrers, etc. and grouping them in order to produce HTML reports.
Web servers log HTTP traffic using different file formats. Common file formats are Common Log Format (CLF), the Apache Custom Log Format, and Extended Log File Format. An example of a CLF log line is shown below.
192.168.1.20 - - [26/Dec/2006:03:09:16 -0500] "GET HTTP/ 1.1" 200 1774
Apache Custom Log Format can be customized to log most HTTP parameters, including request processing time and the size of the request itself. The format of a custom log is controlled by the format line. A typical Apache log format configuration is shown below.
LogFormat "%a %l \"%u\" %t %m \"%U\" \"%q\" %p %>s %b %D \"%{Referer}i\" \"%{User-Agent}i\"" my_custom_log
CustomLog logs/access_log my_custom_log
Microsoft's Internet Information Services (IIS) web server logs HTTP traffic in W3C Extended Log File Format. Similarly to Apache Custom Log format, IIS logs may be configured to capture such extended parameters as request processing time. W3C extended logs may be recognized by the presence of one or more format lines, such as the one shown below.
#Fields: date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) cs(Referer) sc-status sc-bytes cs-bytes time-taken
The Webalizer can process CLF, Apache and W3C Extended log files, as well as HTTP proxy log files produced by Squid servers. Other log file formats are usually converted to CLF in order to be analyzed. In addition, logs compressed with either GZip (.gz) or BZip2 (.bz2) can be processed directly without the need to uncompress before use.
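As a rough illustration of the kind of grouping described above (not The Webalizer's own implementation), a few lines of Python can tally hits per client address and per HTTP status code from a CLF log; the file name access_log is just an example:

import re
from collections import Counter

# Assumed CLF field layout: host ident authuser [date] "request" status bytes
CLF = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d{3}) (\S+)')

def summarize(path):
    hits_by_ip, hits_by_status = Counter(), Counter()
    with open(path) as log:
        for line in log:
            m = CLF.match(line)
            if not m:
                continue                     # skip lines that are not valid CLF
            ip, _date, _request, status, _size = m.groups()
            hits_by_ip[ip] += 1
            hits_by_status[status] += 1
    return hits_by_ip, hits_by_status

ips, statuses = summarize("access_log")
print(ips.most_common(10))                   # top 10 client addresses
print(statuses)                              # hits per HTTP status code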
Command line
The Webalizer is a command line application and is launched from the operating system shell prompt. A typical command is shown below.
webalizer -p -F clf -n en.wikipedia.org -o reports logfiles/access_log
This command instructs The Webalizer to analyze the log file access_log, run in incremental mode (-p), interpret the log as a CLF log file (-F clf), use the domain name en.wikipedia.org for report links (-n), and write the reports to the reports subdirectory of the current directory (-o). Use the -h option to see the complete list of command line options.
Configuration
Besides the command line options, the Webalizer may be configured through parameters of a configuration file. By default, The Webalizer reads the file webalizer.conf and interprets each line as a processing instruction. Alternatively, a user-specified file may be provided using the -c option.
For example, if the webmaster would like to ignore all requests made from a particular group of hosts, he or she can use the IgnoreSite parameter to discard all log records with the IP address matching the specified pattern:
IgnoreSite 192.168.0.*
There are over one hundred available configuration parameters, which make The Webalizer a highly configurable web traffic analysis application. For a complete list of configuration parameters please refer to the README file shipped with every source or binary distribution.
Reports
By default, The Webalizer produces two kinds of reports - a yearly summary report and a detailed monthly report, one for each analyzed month.
The yearly summary report provides such information as the number of hits, file and page requests, hosts and visits, as well as daily averages of these counters for each month. The report is accompanied by a yearly summary graph.
Each of the monthly reports is generated as a single HTML page containing a monthly summary report (listing the overall number of hits, file and page requests, visits, hosts, etc.), a daily report (grouping these counters for each of the days of the month), an aggregated hourly report (grouping counters for the same hour of each day together), a URL report (grouping collected information by URL), a host report (by IP address), website entry and exit URL reports (showing most common first and last visit URLs), a referrer report (grouping the referring third-party URLs leading to the analyzed website), a search string report (grouping items by search terms used in such search engines as Google), a user agent report (grouping by the browser type) and a country report (grouping by the host's country of origin).
Each of the standard HTML reports described above lists only top entries for each item (e.g. top 20 URLs). The actual number of lines for each of the reports is controlled by configuration. The Webalizer may also be configured to produce a separate report for each of the items, which will list every single item, such as all website visitors, all requested URLs, etc.
In addition to HTML reports, The Webalizer may be configured to produce comma-delimited dump files, which list all of the report data in a plain-text file. Dump files may be imported to spreadsheet applications or databases for further analysis.
Internationalization
HTML reports may be produced in over 30 languages, including Catalan, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, German, Greek, Hungarian, Icelandic, Indonesian, Italian, Japanese, Korean, Latvian, Malay, Norwegian, Polish, Portuguese, Portuguese (Brazil), Romanian, Russian, Serbian, Simplified Chinese, Slovak, Slovene, Spanish, Swedish, Turkish, and Ukrainian.
To generate reports in an alternate language requires a separate webalizer binary compiled specifically for that language.
Criticism
Generated statistics do not differentiate between human visitors and robots. As a result, all reported metrics are higher than those due to people alone. Many webmasters claim that Webalizer produces highly unrealistic visit figures, which are sometimes 200 to 900% higher than the data produced by JavaScript-based web statistics tools such as Google Analytics or StatCounter.
Reported hits are too high for download managers with segmented downloads; each 206 "Partial Content" is reported as one hit.
No query string analysis. Dynamically generated websites can not be listed separately (e.g. PHP pages with arguments).
See also
List of web analytics software
External links
Free web analytics software
|
1947528
|
https://en.wikipedia.org/wiki/Windows%20Live%20OneCare
|
Windows Live OneCare
|
Windows Live OneCare (previously Windows OneCare Live, codenamed A1) was a computer security and performance enhancement service developed by Microsoft for Windows. A core technology of OneCare was the multi-platform RAV (Reliable Anti-virus), which Microsoft purchased from GeCAD Software Srl in 2003, but subsequently discontinued. The software was available as an annual paid subscription, which could be used on up to three computers.
On 18 November 2008, Microsoft announced that Windows Live OneCare would be discontinued on 30 June 2009 and that it would instead offer users a new free anti-malware suite, Microsoft Security Essentials, to be made available before then. However, virus definitions and support for OneCare would continue until a subscription expired. In the end-of-life announcement, Microsoft noted that Windows Live OneCare would not be upgraded to work with Windows 7 and would also not work in Windows XP Mode.
History
Windows Live OneCare entered a beta state in the summer of 2005. The managed beta program was launched before the public beta, and was located on BetaPlace, Microsoft's former beta delivery system. On 31 May 2006, Windows Live OneCare made its official debut in retail stores in the United States.
The beta version of Windows Live OneCare 1.5 was released in early October 2006 by Microsoft. Version 1.5 was released to manufacturing on 3 January 2007 and was made available to the public on 30 January 2007. On 4 July 2007, beta testing started for version 2.0, and the final version was released on 16 November 2007.
Microsoft acquired Komoku on 20 March 2008 and merged its computer security software into Windows Live OneCare.
Windows Live OneCare 2.5 (build 2.5.2900.28) final was released on 3 July 2008. On the same day, Microsoft also released Windows Live OneCare for Server 2.5.
Features
Windows Live OneCare features integrated anti-virus, personal firewall, and backup utilities, and a tune-up utility with the integrated functionality of Windows Defender for malware protection. A future addition of a registry cleaner was considered but not added because "there are not significant customer advantages to this functionality". Version 2 added features such as multi-PC and home network management, printer sharing support, start-time optimizer, proactive fixes and recommendations, monthly reports, centralized backup, and online photo backup.
Windows Live OneCare is built for ease-of-use and is designed for home users. OneCare also attempts a very minimal interface to lessen user confusion and resource use. It adds an icon to the notification area that tells the user at a glance the status of the system's health by using three alert colors: green (good), yellow (fair), and red (at risk).
Compatibility
Version 1.5 of OneCare is only compatible with the 32-bit versions of Windows XP and Windows Vista. Version 2 of OneCare added 64-bit support for Vista. In version 2.5, Microsoft released Windows Live OneCare for Server, which supports Windows Server 2008 Standard 64-bit and Windows Small Business Server 2008 Standard and Premium editions. No edition of OneCare operates in safe mode. Windows Live OneCare does not support Windows 7, as its development was discontinued and it was replaced by Microsoft Security Essentials.
Activation
Windows Live OneCare requires users to activate the product if they wish to continue using it after the free trial period (90 days) through a valid Windows Live ID. When the product is activated, the grey message bar at the top of the program disappears. The subscription remains active for 1 year from the date of activation. Windows Live OneCare does not require the operating system to be checked with Windows Genuine Advantage.
Protection
Windows Live OneCare Protection Plus is the security component in the OneCare suite. It consists of three parts:
A personal firewall capable of monitoring and blocking both incoming and outgoing traffic (The built-in Windows Firewall in Windows XP only monitors and blocks incoming traffic)
An anti-virus tool that uses regularly updated anti-virus definition files to protect against malicious software
An anti-spyware tool that uses the Windows Defender engine as a core to protect against potentially unwanted software (In version 1.0, this required the separate installation of Windows Defender and was not integrated into the OneCare interface, although it could be managed and launched from OneCare. Version 1.5 integrated the Windows Defender engine into OneCare and no longer requires separate installation.)
Windows Live OneCare 1.5 onwards also monitors Internet Explorer 7 and 8 security settings and ensures that the automatic website checking feature of the Phishing Filter is enabled.
Performance
Windows Live OneCare Performance Plus is the component that performs monthly PC tune-up related tasks, such as:
Disk cleanup and defragmentation.
A full virus scan using the anti-virus component in the suite.
User notification if files are in need of backing up.
Check for Windows updates by using the Microsoft Update service.
Backup
Windows Live OneCare Backup and Restore is the component that aids in backing up important files. Files can be backed up to various recordable media, such as external hard disks, CDs, and DVDs. When restoring files, the entirety or a subset of them can also be restored to a networked computer, as long as it's running OneCare as well. The Backup and Restore component supports backup software features such as incremental backups and scheduling.
Criticism
Windows Live OneCare has been criticized from both users and competing security software companies.
Microsoft's acquisition of GeCAD RAV, a core technology of OneCare, and their subsequent discontinuation of that product, deprived the Linux platform (and others) of one of its leading virus scanning tools for e-mail servers, bringing Microsoft's ultimate intentions into question.
On 26 January 2006, Windows Live OneCare was criticized by Foundstone (a division of the competing McAfee anti-virus) for the integrated firewall having default white lists which allow Java applications and digitally signed software to bypass user warnings, since neither of those applications carry assurances that they will not have security flaws or be written with a malicious intent. Microsoft has since responded to the criticism, justifying their decision in that Java applications are "widely used by third party applications, and is a popular and trusted program among our users", and that "it is highly unusual for malware to be signed."
Windows Live OneCare has also been criticized for the lack of adherence to industry firewall standards concerning intrusion detection. Tests conducted by Agnitum (the developers of Outpost Firewall) have shown OneCare failing to detect trojans and malware which hijack applications already resident on an infected machine.
In February 2007, the first Windows Vista anti-virus product testing by Virus Bulletin magazine (a sister company of Sophos, the developers of Sophos Anti-Virus) found that Windows Live OneCare failed to detect 18.6% of viruses. Fifteen anti-virus products were tested. To pass the Virus Bulletin's VB100 test, an anti-virus product has to detect 100% of the viruses.
AV-Comparatives also released results that placed Windows Live OneCare last in its testing of seventeen anti-virus products. In response, Jimmy Kuo of the Microsoft Security Research and Response (MSRR) team pledged to add "truly important" ("actively being spread") malware as soon as possible, while "[test detection] numbers will get better and better" for other malware "until they are on par with the other majors in this arena." He also expressed confidence in these improvements: "Soon after, [other majors] will need to catch up to us!"
As of April 2008, Windows Live OneCare has passed the VB100 test under Windows Vista SP1 Business Edition. As of August 2008, Windows Live OneCare placed 14th out of 16 anti-virus products in on-demand virus detection rates. On the other hand, as of May 2009, Windows Live OneCare placed 2nd in a proactive/retrospective performance test conducted by AV-Comparatives. AV-Comparatives.org, the test issuer, denotes that it had "very few false alarms, which is a very good achievement." The publisher also points out that false positives can cause as much harm as genuine infections, and furthermore, anti-virus scanners prone to false alarms essentially achieve higher detection scores.
Community Revival
After Windows Live OneCare was discontinued, end-users of the product could no longer install Windows Live OneCare due to the installer checking Microsoft OneCare's site for updates. This resulted in the installation giving an error message 'Network problems are preventing Windows Live OneCare Installation from continuing at this time'.
A YouTuber by the name 'Michael MJD' posted a review of the software to his second channel 'mjdextras' after finding it at a thrift store, but was unable to install it due to the above error; MJD community member 'Cobs Server Closet' requested a copy of the installation media in order to fix the issue.
On 20 October 2020, 'Cobs Server Closet' posted to their website that they had successfully recreated a functioning version of the installer, allowing end-users owning existing installation media to reinstall the software. This project was named 'OneCare Rewritten'.
While OneCare Rewritten does allow successful installation of OneCare, many of the notable features, such as OneCare Circles and the built-in Backup feature, remain non-functional because they depended on Microsoft's Windows Live OneCare servers.
See also
Windows Defender
Windows Live
Comparison of antivirus software
References
External links
OneCare
Antivirus software
Firewall software
Backup software
Spyware removal
2006 software
|
59109197
|
https://en.wikipedia.org/wiki/Kevin%20Porter%20Jr.
|
Kevin Porter Jr.
|
Bryan Kevin Porter Jr. (born May 4, 2000), also known by his initials KPJ, is an American professional basketball player for the Houston Rockets of the National Basketball Association (NBA). He played high school basketball for Rainier Beach High School and led the Vikings to the state playoffs in each of his four years. He played college basketball for the USC Trojans.
Early life
Porter was born in Seattle, Washington, to Ayanna and Bryan Kevin Porter Sr. His father played football, basketball, and baseball at Rainier Beach High School in Seattle in the 1990s. Porter Jr.'s father, Bryan Kevin Porter Sr., pleaded guilty to first-degree manslaughter in a shooting death of a 14-year-old girl in 1993. He was sentenced to years in prison. In July 2004, when Porter was four years old, his father was shot five times and killed while trying to help someone being attacked. As a result, he was raised by his mother, who became his role model.
High school career
Porter convinced his mother to enroll him at Rainier Beach High School instead of O'Dea High School in Seattle, because his father had played sports there and he wanted to preserve the tradition. In his senior campaign, he averaged 27 points, 14 rebounds, and five assists, as Rainier Beach finished with a 22–7 record. On March 3, 2018, Porter recorded 22 points and 11 rebounds in a Class 3A state championship game loss to Garfield High School. At the end of the season, he was named Washington Mr. Basketball by the state coaches association.
Recruiting
Porter was considered a five-star recruit by recruiting services 247Sports and Rivals and a four-star recruit by ESPN. He was the top-ranked player from Washington in the 2018 class and received offers from several NCAA Division I programs, including UCLA, Oregon, and Washington, before committing to USC. Porter became the first USC player since DeMar DeRozan in 2008 to be rated a five-star recruit by Rivals.
College career
Porter debuted for USC on November 6, 2018, scoring 15 points off the bench on 6-of-7 shooting in an 83–62 win over Robert Morris. On November 20, against Missouri State, he suffered a quadriceps contusion. He returned on December 1 versus Nevada but left after four minutes because he was hindered by the injury. He missed nine games with a quad contusion, and returned again on January 10, 2019, scoring five points in 25 minutes. Three days later, however, he was suspended indefinitely by USC for "personal conduct issues". Regardless, Porter stated that he would finish the season with the team and then played in the last three games of the season. He averaged 9.8 points, four rebounds, and 1.4 assists in 22 minutes a game, playing in 21 of USC's 33 games.
At the conclusion of his freshman season, Porter announced his intention to forgo his remaining collegiate eligibility and declare for the 2019 NBA draft.
Professional career
Cleveland Cavaliers (2019–2021)
In the 2019 NBA draft, Porter was selected 30th overall by the Milwaukee Bucks but was later traded to the Cleveland Cavaliers via the Detroit Pistons. On July 3, 2019, the Cleveland Cavaliers announced that they had signed Porter. On October 23, Porter made his NBA debut, playing in an 85–94 loss to the Orlando Magic and finishing with one rebound, two assists, and a steal. On November 4, Porter was suspended for one game without pay for improperly making contact with a game official. His first NBA start for the Cavaliers came on November 19 against the New York Knicks in a 123–105 loss, in which he recorded a then career-high 18 points in 31 minutes.
Porter started the 2020–21 season inactive due to his off-season weapons charge, which was later dropped. On January 18, 2021, the Cavaliers announced that Porter would either be traded or released following an outburst regarding a locker change following the Cavaliers' acquisition of Taurean Prince from the Brooklyn Nets. He was ultimately traded to the Rockets three days later having not played a single game with the Cavaliers in the 2020–21 season.
Houston Rockets (2021–present)
On January 21, 2021, Porter was traded to the Houston Rockets for a future top-55 protected second round pick. He was later assigned to the Rockets' G League affiliate, the Rio Grande Valley Vipers, debuting for the Vipers in their season opener on February 10, 2021. On February 25, he recorded the first triple-double of the G League season, scoring 27 points, collecting 11 rebounds and dishing out 14 assists. On April 29, he scored 50 points and recorded 11 assists in a win against the Milwaukee Bucks, becoming the youngest player in NBA history to have 50+ points and 10+ assists in a game.
On November 29, 2021, Porter recorded his first career triple-double, grabbing 11 points, 11 assists, and ten rebounds in a 102–89 win over the Oklahoma City Thunder. However, on December 1, the NBA credited Porter's tenth rebound to teammate Alperen Şengün, voiding his triple-double. On January 1, 2022, during a 111–124 loss to the Denver Nuggets, Porter and teammate Christian Wood got into a verbal altercation with Rockets assistant coach John Lucas at halftime. Porter then threw an object into the locker room and left the Toyota Center, the arena where the Rockets were playing, before the game ended. On January 3, Rockets head coach Stephen Silas stated that he had suspended both Porter and Wood for one game each for their behavior.
Player profile
Standing at 6 feet and 6 inches (1.98 meters) with a 6 ft. and 9 in. wingspan (2.05 m), Porter plays both the point guard and shooting guard positions. On offense, he possesses a strong isolation game that is complemented by a high level of athleticism that allows him to be an effective scorer on the perimeter; at and above the rim; and in transition. His elite handling skills allow him to create space and defer to either a step-back jumper, a pull-up shot out of a crossover, or a behind-the-back dribble pull-back. Scouts have pointed out his defense and rebounding abilities as another strength, forcing turnovers and running the ball down the court.
At the time of the draft, he was compared to DeShawn Stevenson, Nick Young, JR Smith, James Harden, C.J. Miles, and Kelly Oubre, Jr.
Porter has cited James Harden as one of the biggest influences on his game.
Analysts identified his shot selection, assist-to-turnover ratio, and foul shooting as points of improvement in his game, in addition to other miscellaneous off-the-court concerns.
Personal life
Weapons charge
On November 15, 2020, Porter was charged by Mahoning County police with improper handling of a firearm in a vehicle following a single-car accident. In a statement, the Cleveland Cavaliers said, "We are aware of the situation involving Kevin Porter Jr. and are in the process of gathering information. We have spoken with Kevin and will continue to address this privately with him as the related process evolves."
A grand jury in Mahoning County declined to indict Porter on the felony gun charge. Misdemeanor charges of driving without a license were also dropped.
Career statistics
NBA
Regular season
! Year !! Team !! GP !! GS !! MPG !! FG% !! 3P% !! FT% !! RPG !! APG !! SPG !! BPG !! PPG
|-
| style="text-align:left;"|
| style="text-align:left;"| Cleveland
| 50 || 3 || 23.2 || .442 || .335 || .723 || 3.2 || 2.2 || .9 || .3 || 10.0
|-
| style="text-align:left;"|
| style="text-align:left;"| Houston
| 23 || 20 || 32.2 || .427 || .319 || .768 || 3.9 || 6.4 || .8 || .3 || 16.7
|- class="sortbottom"
| style="text-align:center;" colspan="2"| Career
| 73 || 23 || 26.1 || .435 || .328 || .743 || 3.4 || 3.5 || .9 || .3 || 12.1
College
! Year !! Team !! GP !! GS !! MPG !! FG% !! 3P% !! FT% !! RPG !! APG !! SPG !! BPG !! PPG
|-
| style="text-align:left;"| 2018–19
| style="text-align:left;"| USC
|| 21 || 4 || 22.1 || .471 || .412 || .522 || 4.0 || 1.4 || .8 || .5 || 9.5
|}
References
External links
USC Trojans bio
2000 births
Living people
African-American basketball players
American men's basketball players
Basketball players from Seattle
Cleveland Cavaliers players
Houston Rockets players
Milwaukee Bucks draft picks
Rio Grande Valley Vipers players
Shooting guards
USC Trojans men's basketball players
Small forwards
21st-century African-American sportspeople
20th-century African-American sportspeople
|
55915552
|
https://en.wikipedia.org/wiki/King%20of%20the%20Ring%20%281991%29
|
King of the Ring (1991)
|
The 1991 King of the Ring was the sixth King of the Ring professional wrestling tournament produced by the World Wrestling Federation (WWF, now WWE). The tournament was held on September 7, 1991, at the Providence Civic Center in Providence, Rhode Island, as a special non-televised house show, and was won by Bret Hart. The only other match of the night saw The Beverly Brothers (Beau Beverly and Blake Beverly) defeat The Bushwhackers (Bushwhacker Butch and Bushwhacker Luke) in a tag team match. A tournament did not occur in 1992, but the event returned in 1993 as the promotion's annual June pay-per-view.
Production
Background
The King of the Ring tournament is a single-elimination tournament that was established by the World Wrestling Federation (WWF, now WWE) in 1985 with the winner being crowned the "King of the Ring." It was held annually until 1989. The event did not occur in 1990, but returned in 1991. The 1991 tournament was the sixth King of the Ring tournament. It was held on September 7, 1991 at the Providence Civic Center in Providence, Rhode Island and like the previous years, it was a special non-televised house show.
Storylines
The matches resulted from scripted storylines, where wrestlers portrayed heroes, villains, or less distinguishable characters in scripted events that built tension and culminated in a wrestling match or series of matches. Results were predetermined by the WWF's writers.
Aftermath
A tournament was not held in 1992, however, it returned in 1993. Starting with the 1993 King of the Ring, the tournament moved to the annual King of the Ring pay-per-view (PPV) event, held annually until the 2002 King of the Ring; that same year, the WWF was renamed to World Wrestling Entertainment (WWE). Following the 2002 event, the tournament would only be held periodically across episodes of Raw and SmackDown, although the final match of the 2006 tournament took place at the Judgment Day PPV, while the semifinals and finals of the 2015 tournament aired as a WWE Network-exclusive event.
Following his win in the 1993 tournament, Bret Hart became the only two-time King of the Ring winner.
Results
Tournament bracket
1. Pete Doherty substituted for Kerry Von Erich.
2. The Undertaker and Sid Justice were disqualified for attacking the referee. After the bout, Jake Roberts helped The Undertaker put Sid into the Casket.
References
1991
1991 in professional wrestling
1991 in Rhode Island
Events in Rhode Island
Professional wrestling in Providence, Rhode Island
September 1991 events in the United States
|
44276627
|
https://en.wikipedia.org/wiki/Ricochet%20%28software%29
|
Ricochet (software)
|
Ricochet or Ricochet IM is a free software, multi-platform instant messaging project originally developed by John Brooks and later adopted as the official instant messaging client project of the Invisible.im group. A goal of the Invisible.im group is to help people maintain privacy by developing a "metadata free" instant messaging client.
History
Originally called Torsion IM, Ricochet was renamed in June 2014. Ricochet is a modern alternative to TorChat, which has not been updated in several years, and to Tor Messenger, which is discontinued. On September 17, 2014, it was announced in a Wired article by Kim Zetter that the Invisible.im group would be working with Brooks on further development of Ricochet. Zetter also wrote that Ricochet's future plans included a protocol redesign and file-transfer capabilities. The protocol redesign was implemented in April 2015.
In February 2016, Ricochet's developers made public a security audit that had been sponsored by the Open Technology Fund and carried out by the NCC Group in November 2015. The results of the audit were "reasonably positive". The audit identified "multiple areas of improvement" and one vulnerability that could be used to deanonymize users. According to Brooks, the vulnerability has been fixed in the latest release.
Technology
Ricochet is a decentralized instant messenger, meaning there is no server to connect to and share metadata with. Using Tor, Ricochet starts a Tor hidden service locally on a person's computer and can communicate only with other Ricochet users who are also running their own Ricochet-created Tor hidden services. In this way, Ricochet communication never leaves the Tor network. A user screen name is auto-generated upon first starting Ricochet; the first half of the screen name is the word "ricochet", and the second half is the address of the Tor hidden service. Before two Ricochet users can talk, at least one of them must privately or publicly share their unique screen name in some way.
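The screen name format described above can be illustrated with a short, stand-alone sketch; it is not code from Ricochet itself. It only checks the general shape of a contact ID, and the 16-character base32 service name it assumes corresponds to the original client's version 2 onion services; the example ID is hypothetical.

<syntaxhighlight lang="python">
import re

# The ID is the word "ricochet", a colon, then the hidden service hostname
# without its ".onion" suffix. The 16-character base32 length is an
# assumption matching the v2 onion services used by the original client.
CONTACT_ID_RE = re.compile(r"^ricochet:([a-z2-7]{16})$")

def parse_contact_id(contact_id):
    """Return the .onion hostname for a well-formed Ricochet contact ID."""
    match = CONTACT_ID_RE.match(contact_id)
    if not match:
        raise ValueError("not a valid Ricochet contact ID: %r" % contact_id)
    return match.group(1) + ".onion"

print(parse_contact_id("ricochet:rs7ce36jsj24ogfw"))  # hypothetical example ID
</syntaxhighlight>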
Privacy benefits
Ricochet does not reveal user IP addresses or physical locations because it uses Tor.
Message content is cryptographically authenticated and private.
There is no need to register anywhere in order to use Ricochet, particularly with a fixed server.
Contact list information is stored locally, and it would be very difficult for passive surveillance techniques to determine whom the user is chatting with.
Ricochet does not save chat history. When the user closes a conversation, the chat log is not recoverable.
The use of Tor hidden services prevents network traffic from ever leaving the Tor network, thereby preserving anonymity and complicating passive network surveillance.
Ricochet is a portable application; users do not need to install any software to use it. Ricochet connects to the Tor network automatically.
Security warnings
An already-compromised computer system, for example one running keystroke-logging malware, will typically defeat the privacy protections that Ricochet offers.
Even though Ricochet uses Tor, other applications will not be using Tor unless the user has independently set up additional Tor services on their computer.
Active and passive surveillance techniques can still tell if the user is using the Internet, and when, but not necessarily what they are doing on the Internet.
Since a Ricochet user does not register or log in anywhere to use Ricochet, not even with a password, it is important to implement layered physical security, including disk encryption, to protect Ricochet. No encryption is present on inactive data.
Tails Linux users, and other live operating systems users, can optionally backup Ricochet to zero-knowledge cloud services such as SpiderOak, or on a personally owned USB drive (ideally encrypted).
See also
Comparison of instant messaging clients
Ring (software)
Tox (protocol)
References
External links
https://www.ricochetrefresh.net
Free instant messaging clients
Free security software
Software using the BSD license
Tor (anonymity network)
|
24564446
|
https://en.wikipedia.org/wiki/1977%20Rose%20Bowl
|
1977 Rose Bowl
|
The 1977 Rose Bowl was a college football bowl game played on January 1, 1977. It was the 63rd Rose Bowl Game. The USC Trojans, champions of the Pacific-8 Conference, defeated the Michigan Wolverines, champions of the Big Ten Conference, 14–6.
USC quarterback Vince Evans was named the Rose Bowl Player of the Game, and Trojan freshman tailback Charles White, subbing for Heisman Trophy runner-up Ricky Bell, who was injured in the first quarter, rushed for 114 yards and a touchdown. It was the third consecutive win for the Pac-8 in the Rose Bowl, and the seventh of the last eight.
Teams
Michigan
Michigan won their first eight games and spent most of the season ranked first in the polls, until an upset loss to Purdue on November 6. They capped off their Big Ten championship with a shutout of arch rival Ohio State; they were ranked second in both major polls at the end of the regular season.
USC
Under first-year head coach John Robinson, USC was upset at home by Missouri in the season opener. It was the Trojans' fifth straight regular-season loss, dating back to the prior season, when John McKay had announced his end-of-season resignation (leaving for the expansion Tampa Bay Buccaneers of the NFL). USC won the rest of their games in 1976, climaxed by a win over #2 UCLA to clinch the conference and a subsequent 17–13 victory over Notre Dame.
Scoring
First quarter
No scoring
Second quarter
Michigan - Rob Lytle, 1-yard run (Bob Wood kick blocked)
USC - Vince Evans, 1-yard run (Walker kick)
Third quarter
No scoring
Fourth quarter
USC - Charles White, 7-yard run (Walker kick)
Aftermath
Undefeated Pittsburgh, led by Heisman Trophy winner Tony Dorsett, was the consensus #1 team entering the bowls and played #4 Georgia in the Sugar Bowl in New Orleans. USC and Michigan hoped Georgia would upset Pitt to set up the Rose Bowl as a national championship showdown, but Pitt had a dominant win earlier in the day to keep its top ranking in the final polls. USC finished second and Michigan dropped only one spot, to third.
References
External links
Summary at Bentley Historical Library, University of Michigan Athletics History
Rose Bowl
Rose Bowl Game
Michigan Wolverines football bowl games
USC Trojans football bowl games
Rose Bowl
January 1977 sports events in the United States
|
45413582
|
https://en.wikipedia.org/wiki/Where%20in%20North%20Dakota%20Is%20Carmen%20Sandiego%3F
|
Where in North Dakota Is Carmen Sandiego?
|
Where in North Dakota Is Carmen Sandiego? is a 1989 edutainment video game. It is the fourth game in the Carmen Sandiego video game series after World (1985), U.S.A. (1986), and Europe (1988). Having observed the popularity of the Carmen Sandiego franchise in the education of school children, educators were inspired to develop a North Dakota version to teach North Dakotans about their state's history and geography.
In contrast to the previous titles, which were developed internally by Broderbund, North Dakota was largely developed for the Apple II by a team of fourteen educators led by computer coordinator Craig Nansen, concept designer Bonny Berryman, and co-chairwoman Mary Littler, collectively known as the North Dakota Database Committee (NDDC) of the Minot Public Schools.
This "franchise extension" is the only game in the series based on a U.S. state and was patterned after the previous games in the then-four year old series. Intended as a type of "pilot program" to test whether region-specific versions for the remaining 49 states were financially viable, the game was released in celebration of North Dakota's centennial celebration in 1989. Although 5,000 school copies were sold to schools in the region, the game has become extremely rare and only three retail copies are known to exist. There is disagreement as to whether or not they are complimentary versions offered to educators who worked on the project, or stock left for mail order at a North Dakota game shop. There is currently no proof that retail copies were ever sold in stores.
The game was "save[d] from the memory hole of history" by video game historian Frank Cifaldi and his archivist organization The Video Game History Foundation (VGHF). He believes the game is a great example of history that might have been lost had he not recovered documents for his archival non-for-profit organization.
Gameplay
Where in North Dakota is Carmen Sandiego? is a first-person history and geography-based edutainment game for the Apple II platform. The interface of Where in North Dakota... is similar to that of the other games in the series, World, U.S.A., and Europe, and is "instantly recognizable as a Carmen Sandiego game". Two design changes were made for this game: the language was softened ("criminals" are called "imposters" and "crimes" are called "pranks"), and a four-wheel-drive vehicle is used to travel between locations instead of an airplane. Where in North Dakota... includes 38 locations within the state, 50 famous people connected to it, 16 pun-named gang members, and over 1,000 factual clues.
Players begin in the office of the NoDak Detective Agency. They type their name into the crime computer using up to 14 characters and are informed about the case. Players are sent to the scene of the crime and tasked with capturing Carmen Sandiego and her cronies by questioning witnesses. Using these clues, players decipher the appearance of the imposters and follow their geographic trail to locations such as the International Peace Garden, Cando, and the Standing Rock Indian Reservation.
Players always have six days' worth of allotted time to track down the crook and create a correct warrant. Every time the player captures three imposters, they are promoted to a higher rank and the game's difficulty increases. More difficult clues provided by witnesses are added to the pool, and players must travel to more locations related to the case; the lowest rank requires the player travel to four different locations, while the hardest level has them travel to 14. They must advance 10 ranks before having an opportunity to catch Carmen herself. When they do catch her, they are placed into the North Dakota Roughrider Detective Hall of Fame, which contains 16 slots.
Solving clues requires research using sources other than the game, which at the time meant almanacs, maps, and biographical dictionaries focused on North Dakota. In doing so, players learn facts about the geography, environment, economy, and history of the state, as well as techniques for conducting research, using databases, and deductive reasoning. The teacher's guide also suggests the game can be used to teach students skills in: using maps, thinking, studying, comprehension, vocabulary, writing, and computer literacy. The teacher's guide notes that while skill is an important factor, luck is also very important. The elements of each case are randomly generated which means repeats of the same case can have vastly different results. It is also possible, albeit highly unlikely, for a game not to provide enough character clues for the suspect to be identified, so that even the most conscientious players may occasionally be unsuccessful.
Screens
The Main Playing Screen contains the location name, day/time, location description, and four other options that help the player progress. Notes are written in the Notebook, while warrants are issued in the Crime Lab. Choosing "Investigate" allows the player to discover Character Clues and Location Clues, while "Go To Gas Station" allows the player to travel to the next location.
Materials
Unlike previous Carmen Sandiego games, rather than including an almanac or reference work, the developers opted to use an online database to provide the clues. The North Dakotan educators wanted to include computerized materials in the game to allow their teachers to use the software as an instructional tool; this led to them "chang[ing] the Carmen Sandiego program" and adding 16 different databases to the title with topics like parks and minerals.
In the school version, the game's packaging consisted of a full lesson plan: a binder with a manual, a North Dakota state almanac, and the game on a double-side floppy disk. The binder included other information such as head shots of Carmen's henchmen, a map of North Dakota, and a page that asks the player to describe the game's final scene and mail it in to receive a prize. Other pages have a print version of the almanac and information about the cities in the game. A teacher's guide is also included. A second binder contains activities that correlate to the 18 database disks included in the package. A North Dakota centennial blue book and a booklet entitled Governors and First Ladies of North Dakota were also included in this binder. The retail version of the game was cased in a game box stylized like the earlier games.
History
Setting the precedent
In the late 1980s, there was a significant issue with geography-based educational software: nationwide programs were often too general to be useful in teaching students about their own state, while software companies were unwilling to make 50 versions of their games. In addition, the burden of making a state-centered program was too much for an individual educator. The solution was special collaborations between software makers and states, with the result being new state-related products for students. This arrangement worked for all parties: educators and students received a useful teaching/learning aid, the state education department got educational software "tailor made to their requirements", and the software company had a guaranteed market and dealt with only one customer, thereby being able to cut costs such as marketing considerably. Before Where in North Dakota..., this formula had been tested when North Dakota's Department of Public Instruction collaborated with Didatech over two years to create a state-specific version of Crosscountry USA, titled Crosscountry North Dakota. The game turned out to be cost-effective; the state invested $45,000 and supplied information for the software maker to use, while Didatech produced the game and manual. Didatech sold a school version of the software to the state, which then sold it directly to schools; meanwhile Didatech sold a separate retail version through its traditional retail channels. The cost per school was only $65, versus $350 for the site license for a national title. Didatech president Paul Melhus noted that this type of collaboration was better suited to smaller states because they were less bureaucratic, more flexible, and more open to innovation.
Conception
Having observed the popularity of the Carmen Sandiego franchise in the education of school children, educators were inspired to develop a North Dakota version to teach North Dakotans about their state's history and geography. In early 1987, the Minot Public Schools system was looking for "an interesting way to teach students and educators the basics of using a database". After observing Where in the U.S.A.'s ability to hold her child's attention for hours, Bonny Berryman, an eighth grade social studies teacher at Erik Ramstad Junior High, came up with the idea of a special Carmen Sandiego program that would coincide with the state's centennial year. She felt the game could teach children how to "retrieve information from computers, rather than memorize it", an important skill given the abundance of available information in the computer age. She also knew that the franchise had achieved "great acceptance throughout [the] district and state" and believed the game could appeal to adults who would find it fascinating and informative. She deemed the Carmen Sandiego games a "novelty", allowing students to have fun while learning; she also liked the opportunity for randomness, and the graphics, color, movement, and sound that other media such as board games did not provide. She noted that the target market played video games every weekend and this was a franchise with which they were already familiar. She pitched the idea to Minot Public School System computer coordinator Craig Nansen, who was initially skeptical, describing it as a "pipe dream or pie-in-the-sky idea". However, he saw promise in the idea of stimulating research by encouraging students to use an encyclopedia or dictionary to decipher clues about their state, thereby adding a state-based component to the database project. Additionally, his goal was to represent "'cool' software [that] attract[s] the attention of board members, superintendents, etc. [while being] educationally sound". He subsequently contacted series developer Broderbund about the possibility of creating the game, a prospect they liked. At the time, the recent success of Carmen Sandiego Days had resulted in schools from many states asking Broderbund to make state-specific versions of their games to fit into their Carmen Days.
Development
At the time, Broderbund CEO Doug Carlston preferred to describe the series as "explorational" rather than "educational". In his opinion the term "educational... translate[d] into 'boring' in kidspeak". But when Nansen approached the company, he argued that while the series "wasn't meant to be an educational tool", it greatly appealed to educators and Where in North Dakota... should be developed for this purpose. Nansen recalled that "things fell into place and Broderbund was willing to do it", overcoming his previous skepticism. Subsequently, he contacted North Dakota's Department of Public Instruction. In March 1987 he was able to secure a $100,000 grant from the state legislature, which also liked the idea and appropriated the money to help fund the project. The game interested North Dakota's Department of Public Instruction because of its narrow scope compared with the previous Carmen Sandiego titles and its ability to teach computer skills. North Dakota's Department of Public Instruction employee Chris Eriksmoen would later speak highly of the collaboration, which he compared to that of Crosscountry North Dakota. By March 1988, Broderbund had not spoken openly about the project, but Classroom Computer Learning had been informed that there was to be an "imminent" contract between Broderbund and the North Dakota Database Committee (NDDC), and that the project would be available to North Dakota's educators within six to nine months. The inter-business deal would give Broderbund rights to sell the retail version while North Dakota's Department of Public Instruction would sell the school version.
The Broderbund team agreed to publish the game but required local expertise to create the clues and write the text. As a result, Nansen created the North Dakota Database Committee with teachers who had taught North Dakotan topics in the past. It spent the next two years compiling facts with the help of school districts across the state. The "educators-turned-game-developers" came up with the pranks, selected locations, researched clues, wrote informational text such as the teacher's manual, created Carmen's imposters, and sourced graphic material. Afterwards, Broderbund's designers took the team's work, and programmed and tested the game using the interface and structure of its previous Apple II Carmen Sandiego titles. They also developed graphics, a user manual, and packaging for the retail version. While there were restrictions on how much the North Dakotan team could deviate from the Carmen Sandiego template, local nuances were added including using four-wheel drive vehicles for travel and changing the words "criminals" to "imposters" and "crimes" to "pranks" to add "a touch of North Dakota nice". The previous games in the series (World and USA) had been released with almanacs to help the player solve the game's riddles and clues; the state of North Dakota did not have an almanac at the time, so the educators wrote one.
The project was completed in 1989 for the state's Department of Education to help mark North Dakota's 100th year of statehood. Ultimately, the entire grant was used for the project: $65,000 of the $100,000 was set aside by North Dakota's Department of Public Instruction to purchase an initial order of 2500 copies of the game from Broderbund; the balance of the money was used to pay the North Dakota Database Committee and for advertising and distribution costs. North Dakota's Department of Public Instruction aimed to recoup their costs by selling copies to North Dakotan schools while Broderbund planned to make a commercial version available.
Release
A January 1988 edition of It's Elementary revealed that Where in North Dakota is Carmen Sandiego? was to be distributed statewide in the coming summer in preparation for North Dakota's centennial celebration in 1989. However, the game was delayed until February 23, 1989, when an official news release was issued by Broderbund explaining that the game was being made available to both schools and the public. Before its official release, a copy of a field-testing version was accidentally leaked, and educators who mistakenly believed it was the finished product began calling and complaining.
The first day the game had retail distribution was March 18, 1989, in the Dakota Square Mall, where members of the NDDC demonstrated the software. At the time of its release, Broderbund was the third largest developer of commercial computer software in the United States, and its Carmen Sandiego software was especially popular both in schools and with the general public. World was used throughout Minot's 6th grade classes while U.S.A. was used in their 5th grade classes; the games were both popular in Fargo schools. Where in North Dakota is Carmen Sandiego? was targeted at fourth graders as the team believed this was the year when students began learning about their state. However, the teacher's guide suggested the game was suitable for students in grades four to twelve, for use either by an individual, a small group, or the entire class. Schools were encouraged to allow students to play the game before and after school, or when they completed their school work early. The game was not intended to be a replacement for the North Dakotan curriculum. Nansen saw it as an "enrichment activity" and a "motivating instructional tool" instead, merely one of many ways to get students interested in North Dakota. To promote the game, Nansen conducted seminars in schools across the state, encouraging teachers to incorporate the game, as well as other North Dakotan database games, into their curricula.
North Dakota public school districts interested in the program were encouraged to call social studies director of the North Dakota Department of Public Instruction, Curt Eriksmoen, and before using it had to send at least one educator to one of Nansen's seminars. North Dakota's Department of Public Instruction would ship copies to qualifying schools across the state. Meanwhile, the public could order the game from Broderbund Software-Direct at a recommended retail price of $34.95, while educators could order teacher's guides for $10. There is no proof the game was ever officially sold through retail stores (although one complimentary retail copy was sent to the NDDC for their work on the project). However, the game was never officially published; only 10 prototype retail copies were ever produced, which were all mailed to educators involved in the project.
Other states approached Broderbund after the North Dakota title was released and were quoted millions instead of the $100,000 that funded the North Dakota game; none of these other projects came to fruition. According to Nansen, the biggest issue with these state-based projects was not the actual cost of producing them, but that Broderbund was forced to take their production team away from working on much more lucrative projects. By 2001 around 5,000 copies had been sold in North Dakota itself. The Video Game History Foundation described the program as "a hit in North Dakotan classrooms, but a flop for Broderbund".
Reception
Nansen expected the game to be in every North Dakota school that had a computer system, and for it to be as popular throughout the state's education system as the World and U.S.A. versions. Berryman also saw the game's potential popularity outside North Dakota schools, commenting on its appeal to adults due to its agricultural, immigration, historical, and geographical content.
Where in North Dakota is Carmen Sandiego? ended up selling approximately 5,000 copies, mostly to North Dakota schools, and was very popular within the state. With a total of 517 schools in the state (as of 2013), this equates to roughly 10 copies sold per school. The game was used in the Grand Forks GATE program as well as in other classrooms, generally by fifth and sixth graders. However, very few copies made it outside of North Dakota; those that did generally went to people with connections to the state, such as grandchildren of North Dakotan residents.
Contemporary reviews
InCider was puzzled by the specificity of the game. When listing the locations of various Carmen Sandiego games, it added that the imposter was also "in North Dakota, of all places". The Minot Daily News described the game as "special software". The Grand Forks Herald expressed surprise that there was "even" a North Dakotan game, though the writer commented that it succeeded in teaching geography in a palatable way and in developing research skills. Joseph P. Karwoski of Computerist! said the NDDC did a "great job" creating a "fantastic learning tool", and hoped other states followed suit. The game was worthy enough for a softkey to be published in Computerist!.
Modern reviews
California-based historian Frank Cifaldi described it as "probably the hardest" Carmen Sandiego game because it had clues based on obscure North Dakotan historical trivia, which are sometimes impossible to solve via an Internet search engine, and later described it as "very challenging". However, he felt the game was a "big hit, in terms of a fun story to tell". Kris Kerzman of Inforum deemed it "a fascinating piece of [North Dakota]'s history and video game history", and noted its existence could be puzzling even to fans of the Carmen Sandiego franchise. Cool987FM warned its readers of the difficulty in locating original floppy disks of the game. The Gamecola podcast described Where in North Dakota... as a "weird old PC game" that could be dug up for family game night. Atlas Obscura deemed the title as "little more than a barely remembered oddity" and akin to a "TV pilot that never went to series".
Aftermath
North Dakota was the first state to adapt Carmen Sandiego into a state-specific video game. According to Education Technology News, the title was "picked up by several states and adapted to their needs". Though the game was heavily circulated in North Dakota school classrooms in the 1990s, it has become difficult to find in modern times. As North Dakota schools updated their computers, floppy disks became obsolete. This, coupled with the small production run, led to the title becoming rare. Cifaldi referred to it as "one of the rarest video games ever made", and openly encouraged citizens to unearth their copies of the game. A few school versions have survived; two are located at the North Dakota State Library while a copy was acquired by TanRu Nomad for his YouTube review. Nansen received an unprotected version from Broderbund after sales died off. By 2015 he and a group of students had digitized the game for play on Javascript emulators.
Frank Cifaldi is a digital archivist, who became fascinated by the game after being shocked by its existence when browsing Wikipedia entries on Carmen Sandiego titles. After conversing with game designer and collector Mike Mika, he discovered that although Mika apparently had every Carmen Sandiego game, he did not have this entry; Mika said its existence was "blowing his mind" since the project sounded like a joke and made no sense. Cifaldi also discovered that Where in North Dakota... was the only Carmen Sandiego game not to be represented in the National Museum of Play's Broderbund Software Collection. In January 2015, Cifaldi began conversing with Nansen after posting on Twitter requesting information about the game. As Nansen was recently retired, he offered to round up all the information on the obscure game and make it available; meanwhile Cifaldi offered to write articles using the material.
In anticipation of the game's upcoming digital archive, Cifaldi visited Nansen in Minot with a film crew from June 13–15, 2016. He interviewed those who worked on the project and recorded various locations used in the game. Valuable material such as photographs and internal documentation was also recovered (such as the authentic Carmen's North Dakota Almanac). Handwritten notes from the NDDC, messages between the developers and Broderbund, classroom worksheet extensions, imagery of development, various builds of the game during its evolution, and the manual were all recovered. During this visit, Nansen gave Cifaldi one of only three known surviving versions of the game boxed for retail sale. This version differed from the version that was sold to schools and was sold only through the Broderbund mail-order catalog. This meeting was filmed for S1E6 of the Redbull TV series Screenland, entitled "Eight-Bit Archaeologists".
The game was imaged and made available online, providing Cifaldi with a raw rip of the unused version. Players need to use an Apple II emulator or write onto old floppy disks and play it on an Apple II. Cifaldi's copy was later sent to the National Museum of Play to supplement other Carmen Sandiego materials donated to the museum by Broderbund founder Doug Carlston in 2014. Jon-Paul Dyson, the director of the International Center for the History of Electronic Games at the National Museum of Play, personally thanked Cifaldi for his endeavours. On July 23, 2016, the Apple II-focused KansasFest featured a Where in North Dakota is Carmen Sandiego? contest.
Cifaldi has since added his work to his new initiative Video Game History Foundation. It aims to preserve the video game industry's lost assets, such as Penn and Teller's Smoke and Mirrors. To help celebrate the launch of the foundation, IGN partnered to show off some of their recovered games, including Where in North Dakota... via a live stream. A playable version of the game was featured at the debut Retro Play area at the three-day GDC 2017 expo (along with Penn & Teller's Smoke and Mirrors, Bound High!, Sound Fantasy, and Alter Ego). The Video Game History Foundation, which was hosting the exhibit, decided to include the clue guide almanac, which had been successfully preserved despite years in a fourth-grade classroom, for the public to use. Ultimately four participants attempted the game and none were successful in catching Carmen. In April 2018, an original staged reading with the game's namesake was performed at AwesomeCon in Washington, DC.
References
External links
The Bismarck Tribune article (paywall)
1989 video games
Carmen Sandiego games
Apple II games
Geography of North Dakota
North America-exclusive video games
North Dakota culture
Educational video games
Cancelled DOS games
Video games developed in the United States
Video games set in North Dakota
|
1957440
|
https://en.wikipedia.org/wiki/FAAC
|
FAAC
|
FAAC or Freeware Advanced Audio Coder is a software project which includes the AAC encoder FAAC and decoder FAAD2. It supports MPEG-2 AAC as well as MPEG-4 AAC. It supports several MPEG-4 Audio object types (LC, Main, LTP for encoding and SBR, PS, ER, LD for decoding), file formats (ADTS AAC, raw AAC, MP4), multichannel and gapless encoding/decoding and MP4 metadata tags. The encoder and decoder are compatible with standard-compliant audio applications using one or more of these object types and facilities. It also supports Digital Radio Mondiale.
FAAC and FAAD2, being distributed in C source code form, can be compiled on various platforms and are distributed free of charge. FAAD2 is free software. FAAC contains some code which is published as Free Software, but as a whole it is only distributed under a proprietary license.
FAAC was originally written by Menno Bakker.
FAAC encoder
FAAC stands for Freeware Advanced Audio Coder. The FAAC encoder is an audio compression computer program that creates AAC (MPEG-2 AAC/MPEG-4 AAC) sound files from other formats (usually, CD-DA audio files). It contains a library (libfaac) that can be used by other programs. AAC files are commonly used in computer programs and portable music players, being Apple Inc.'s recommended format for the company's iPod music player.
Some of the features that FAAC has are: cross-platform support, "reasonably" fast encoding, support for more than one "object type" of the AAC format, multi-channel encoding, and support for Digital Radio Mondiale streams. It also supports multi-channel streams, like 5.1. The MPEG-4 object types of the AAC format supported by FAAC are the "Low Complexity" (LC), "Main", and "Long Term Prediction" (LTP). The MPEG-2 AAC profiles supported by FAAC are LC and Main. The SBR and PS object types are not supported, so the HE-AAC and HE-AACv2 profiles are also not supported. The object type "Low Complexity" is the default and also happens to be used in videos meant to be playable for portable players (like Apple's iPod) and used by video-hosting sites (like YouTube).
FAAC has been evaluated as a somewhat "lower quality" option than other AAC encoders.
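As a rough sketch of how the stand-alone faac front end might be driven from another program, the following shells out to it from Python; the -q (quality) and -o (output file) options are assumed from common faac builds and may differ between versions, and the file names are placeholders.

<syntaxhighlight lang="python">
import subprocess

def encode_to_aac(wav_path, aac_path, quality=100):
    """Encode a WAV file to AAC by invoking the faac command-line front end.
    The -q and -o option names are assumed and may vary by faac version."""
    subprocess.run(["faac", "-q", str(quality), "-o", aac_path, wav_path],
                   check=True)

encode_to_aac("input.wav", "output.aac")  # placeholder file names
</syntaxhighlight>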
Alternatives for AAC encoding in Unix-like operating systems
FAAC is one of six alternatives that Linux/Unix users have for creating AAC files. The others are:
The Fraunhofer-developed "FDK AAC" encoder library included as part of Android. The FDK AAC source code is licensed under a custom-copyleft license, and has been ported to other platforms as libfdk-aac. The library is built around fixed-point math and supports only 16-bit PCM input.
The Nero AG-developed "Nero AAC Codec", which has a proprietary license, and is not available for the entire range of hardware architectures that these operating systems are able to run. Nero no longer develops this encoder, but the package is still available, and it remains a high-quality option for AAC encoding.
The libavcodec native AAC encoder (separate versions maintained by FFmpeg and Libav) was experimental but considered "better than vo-aacenc" in at least some tests. It was written by Konstantin Shishkov, and released under version 2.1 of the LGPL. The AAC encoder used in FFmpeg's version of libavcodec was significantly improved for version 3.0 of FFmpeg and is no longer considered experimental. Libav has not merged this work.
libvo_aacenc, the Android VisualOn AAC encoder. This encoder was replaced in Android by the FDK AAC encoder mentioned above, and is considered a poor-quality option.
The (nonfree) libaacplus which implements the High-Efficiency Advanced Audio Coding.
Mac OS X users can utilize Apple's AAC encoder with the command-line afconvert tool.
FAAD2 decoder
FAAD2 is the Freeware Advanced Audio (AAC) Decoder, including SBR decoding. It is an MPEG-2 and MPEG-4 AAC decoder and supports the MPEG-4 audio object types LC, Main, LTP, LD, ER, SBR and PS, which can also be combined into the HE-AAC and HE-AACv2 profiles (AAC LC+SBR+PS). It contains a library (libfaad) that can be used by other programs.
FAAD and FAAD2 were originally written by Menno Bakker from Nero AG. FAAD2 is the successor to FAAD1, which was deprecated.
FAAD is the Freeware Advanced Audio Decoder. It was first released in 2000 and did not support the SBR and PS audio object types. The last version of FAAD1 was released on 4 January 2002, after which development focused on FAAD2. SBR decoding support (HE-AAC) was added in the version released on 25 July 2003. FAAD2 version 2.0 was released on 6 February 2004.
Licensing
FAAC contains code based on the ISO MPEG-4 reference code, whose license is not compatible with the LGPL license. Only the FAAC changes to this ISO MPEG-4 reference code are licensed under the LGPL license. The ISO MPEG-4 reference software was published as ISO/IEC 14496-5 (MPEG-4 Part 5: Reference software) and it is freely available for download from ISO website. ISO/IEC gives users of the MPEG-2 NBC/MPEG-4 Audio standards free license to this software module or modifications thereof for use in hardware or software products claiming conformance to the MPEG-2 NBC/MPEG-4 Audio standards. Those intending to use this software module in hardware or software products are advised that this use may infringe existing patents.
FAAD2 is licensed under the GPL v2 (and later GPL versions). Code from FAAD2 is copyright of Nero AG (the "appropriate copyright message" mentioned in section 2c of the GPLv2). The source code contains a note that the use of this software may require the payment of patent royalties. Commercial non-GPL licensing of this software is also possible.
FAAD (FAAD1) modifications to the ISO MPEG-4 AAC reference code were distributed under the GPL.
Other software
FAAC and FAAD2 are used in the following software products and libraries:
Avidemux video editing software.
CDex uses FAAC encoder.
FFmpeg supports AAC encoding through external library libfaac, and using its experimental native encoder.
fre:ac uses FAAC and FAAD2 for AAC support.
GStreamer multimedia framework uses FAAC and FAAD.
MPlayer uses FAAD2.
VLC media player uses the FAAC (encoder) and FAAD (decoder) to provide support for AAC audio.
Music Player Daemon uses FAAD2
Music on Console uses FAAD2
There is also other software that uses FAAC libraries.
See also
List of codecs
List of open source codecs
Lossy data compression
LAME
TooLame
References
Audio codecs
Cross-platform software
|
48726034
|
https://en.wikipedia.org/wiki/Dorkbot%20%28malware%29
|
Dorkbot (malware)
|
Dorkbot is a family of malware worms that spreads through instant messaging, USB drives, websites or social media channels like Facebook. It originated in 2015 and infected systems were variously used to send spam, participate in DDoS attacks, or harvest users' credentials.
Functionality
Dorkbot’s backdoor functionality allows a remote attacker to exploit infected systems. According to an analysis by Microsoft and Check Point Research, a remote attacker may be able to:
Download and run a file from a specified URL;
Collect login information and passwords through form grabbing, FTP, POP3, or Internet Explorer and Firefox cached login details; or
Block or redirect certain domains and websites (e.g., security sites).
Impact
A system infected with Dorkbot may be used to send spam, participate in DDoS attacks, or harvest users' credentials for online services, including banking services.
Prevalence
Between May and December 2015, the Microsoft Malware Protection Center detected Dorkbot on an average of 100,000 infected machines each month.
History
On December 7, 2015, the FBI and Microsoft, working as a joint task force, took down the Dorkbot botnet.
Remediation
In 2015, the U.S. Department of Homeland Security advised the following action to remediate Dorkbot infections:
Use and maintain anti-virus software
Change your passwords
Keep your operating system and application software up-to-date
Use anti-malware tools
Disable AutoRun
See also
Alert (TA15-337A)
Code Shikara (Computer worm)
Computer worm
HackTool.Win32.HackAV
Malware
US-CERT
References
Botnets
Exploit-based worms
|
65034494
|
https://en.wikipedia.org/wiki/2020%20Troy%20Trojans%20baseball%20team
|
2020 Troy Trojans baseball team
|
The 2020 Troy Trojans baseball team represented Troy University in the 2020 NCAA Division I baseball season. The Trojans played their home games at Riddle–Pace Field and were led by fifth year head coach Mark Smartt.
On March 12, the Sun Belt Conference announced the indefinite suspension of all spring athletics, including baseball, due to the increasing risk of the COVID-19 pandemic.
Preseason
Signing Day Recruits
Sun Belt Conference Coaches Poll
The Sun Belt Conference Coaches Poll was released on January 30, 2020, with the Trojans picked to finish fourth in the East Division.
Preseason All-Sun Belt Team & Honors
Drake Nightengale (USA, Sr, Pitcher)
Zach McCambley (CCU, Jr, Pitcher)
Levi Thomas (TROY, Jr, Pitcher)
Andrew Papp (APP, Sr, Pitcher)
Jack Jumper (ARST, Sr, Pitcher)
Kale Emshoff (LR, RS-Jr, Catcher)
Kaleb DeLatorre (USA, Sr, First Base)
Luke Drumheller (APP, So, Second Base)
Hayden Cantrelle (LA, Jr, Shortstop)
Garrett Scott (LR, RS-Sr, Third Base)
Mason McWhorter (GASO, Sr, Outfielder)
Ethan Wilson (USA, So, Outfielder)
Rigsby Mosley (TROY, Jr, Outfielder)
Will Hollis (TXST, Sr, Designated Hitter)
Andrew Beesley (ULM, Sr, Utility)
Personnel
Roster
Coaching staff
Schedule and results
Schedule Source:
*Rankings are based on the team's current ranking in the D1Baseball poll.
References
Troy
Troy Trojans baseball seasons
Troy Trojans baseball
|
45136935
|
https://en.wikipedia.org/wiki/Munirathna%20Anandakrishnan
|
Munirathna Anandakrishnan
|
Munirathna Anandakrishnan (12 July 1928 – 29 May 2021) was an Indian civil engineer, educationist, a chairman of the Indian Institute of Technology, Kanpur and a Vice-Chancellor of Anna University. He was also an Advisor to the Government of Tamil Nadu on Information Technology and e-Governance. A winner of the National Order of Scientific Merit (Brazil), he was honored by the Government of India, in 2002, with Padma Shri, the fourth highest Indian civilian award.
Biography
Munirathna Anandakrishnan was born on 12 July 1928 in the south Indian state of Tamil Nadu. After graduating in civil engineering (BE) from the College of Engineering, Guindy, Madras University in 1952, he pursued his studies at the University of Minnesota, where he secured a Master's degree (MS) in 1957 and a PhD in civil engineering in 1960. During his doctoral studies, he was a teaching assistant at the university and was the president of its Indian Students Association and Foreign Students Council. He also worked part-time at Twin City Testing and Engineering Laboratories, a private firm, as a materials engineer.
Anandakrishnan returned to India in 1962 and started his Indian career as a Grade I Senior Scientific Officer at the Central Road Research Institute, Delhi and worked there for a year. His next posting was as a member of the faculty of civil engineering at the Indian Institute of Technology, Kanpur (IIT Kanpur) where he worked till 1974, holding various positions such as Assistant Professor, Professor, Senior Professor, Chairman of Civil Engineering Department, Dean and Acting Director. He also served IIT Kanpur as the chairman of the central staff recruiting committee and as the advisor on campus development.
In 1974, Anandakrishnan moved to USA, on deputation from the Department of Science and Technology to work as the Science Counsellor at The Indian Embassy in Washington DC. In 1978, he joined the United Nations Commission on Science and Technology for Development (CSTD) as the Chief of New Technologies at the Office of Science and Technology (OST), where he worked till his retirement from UN service in 1989. At the United Nations, he also held the posts of the Deputy Director at the Commission on Science and Technology for Development (CSTD) and the secretary of the UN Advisory Committee on Science and Technology for Development (UNACAST).
In 1990, Anandakrishnan returned to India to take up the position as the vice chancellor of Anna University, Tamil Nadu and served the institution for two consecutive terms till 1996. During this period, he was also a member of an International Expert Committee for the development of Science and Technology in Brazil and was involved in its activities till 1997. After his second tenure as the vice chancellor, he was appointed the vice chairman of the Tamil Nadu State Council for Higher Education (TANSCHE) and also held the post of the Advisor to the Chief Minister of Tamil Nadu on matters related to Information Technology and E-Governance. In his advisory role, he was responsible for replacing the Common Entrance Test system with the Single Window Admission System for admission to engineering courses across Tamil Nadu. Anandakrishnan retired from active service in 2001 and lived with his family at Kasturibai Nagar, in Adyar, Chennai.
On 29 May 2021, he died due to COVID-19.
Positions
After retirement, Anandakrishnan remained active in many institutions and organizations. He was the honorary chairman of the Board of Governors of the Indian Institute of Technology, Kanpur and held the chair of the Higher Education Committee of the Federation of Indian Chamber of Commerce and Industries (FICCI). He was a member of the executive councils of the University of Kerala, Central University of Haryana, Sikkim University and the National University of Educational Planning and Administration (NUEPA). He was a former chairman of Science City, Tamil Nadu, Madras Institute of Development Studies (MIDS), the High-Power Committee for the Review and Reorientation of the Undergraduate Engineering Education in India and the Board of Undergraduate Studies of the All India Council for Technical Education (AICTE), New Delhi.
Anandakrishnan was the president of the Madras Science Association and Tamil Nadu Academy of Sciences and a member of the Indian Society for Technical Education and the Indian Society for Theoretical and Applied Mechanics. He was also associated as a member with organizations such as Madras School of Economics, A. M. M. Murugappa Chettiar Research Centre, C. P. R. Environmental Education Centre, Tamil Virtual University, Assam University, Tamil Nadu Foundation, Citizen Consumer and Civic Action Group (CAG), Madras Management Association, Madras Craft Foundation, Tamil Nadu Council for Sustainable Livelihood, MS Swaminathan Research Foundation, and International Forum for Information Technology in Tamil, Singapore (INFITT). He was a member of the Managing Committee of the Tamil Nadu chapter of Transparency International, a trustee of the Information Technology Bar of India, Chennai, and held the chair of the Academic Advisory Committee of the National Assessment and Accreditation Council (NAAC), Bangalore.
Anandakrishnan was a former chairman of several University Grants Commission committees and panel such as Engineering and Technology panel, Committee on Specification of Degrees, Expert Committee to review the Maintenance Grant Norms for Delhi Colleges and the Expert Committee to examine the proposals for starting new Academic Staff Colleges. He has also headed the AICTE committees like Sectoral Committee of the National Board of Accreditation, Southern Regional Committee, Standing Committee on Entry and Operation of Foreign Universities in India and All India Board of Under Graduate Studies in Engineering and Technology. He has also been associated as a member with the academic advisory council of Pondicherry University and with the National Assessment and Accreditation Council (NAAC), Bangalore.
Publications
Anandakrishnan is the author of a book and the editor of three more on educational and technical aspects of engineering.
Science, Technology and Society
Engineering Graphics
Planning and Popularizing Science and Technology in Developing Countries
Trends and Prospects in Planning and Management of Science and Technology for Development
He is also credited with over 100 articles in peer reviewed national and international journals.
Awards and recognitions
Munirathna Anandakrishnan won the Order of the Ski-Uh-Mah from the University of Minnesota in 1958 for his activities during his studies at the institution. In 1972, he received the Indian Invention Promotion Award for developing the design of a radial permeability measuring device. The Institution of Engineers (India) selected him for the Engineering Personality Award in 1992 for his contribution in liaising with UN agencies. The next year, he received two awards, the TNF Excellence Award from the Tamil Nadu Foundation and the M. K. Nambiar Memorial Award from the Madras Institute of Magnetobiology. A year later, Rotary International, Meenambakkam awarded him the Rotary Vocational Service Award. Rotary Club of Madras followed it with the For the Sake of Honour Award the next year. He received one more award, the National Science and Technology Award for Excellence in 1995.
The Government of Brazil conferred on him the Commander of the National Order of Scientific Merit (Brazil) in 1996 and the same year, he received the Ugadi Puraskar from the Madras Telugu Academy. The International Institute of Tamil Studies honoured him in 1999 and the Centenarian Trust, Chennai selected him as the Man of the Year 1999. The Government of India awarded him the civilian honour of Padma Shri in 2002 and the University of Minnesota awarded him the Distinguished Leader Award in 2003. The year 2004 brought him two awards, the Platinum Jubilee Award of the Indian Ceramics Society and the ICCES Outstanding Achievement Award from the International Conference on Computational and Experimental Engineering and Sciences.
Anandakrishnan was an elected Fellow of the National Academy of Sciences, India and the Institution of Engineers (India). He was also a Fellow of the Indian Society of Technical Education. Kanpur University honoured Anandakrishnan with the title of Doctor of Science (Honoris Causa) in 2005. The University Grants Commission (India) awarded him the UGC National Swami Pranavananda Saraswati Award in 2006.
See also
Commission on Science and Technology for Development
Indian Institute of Technology, Kanpur
Madras Institute of Development Studies
University of Minnesota
References
1928 births
2021 deaths
20th-century Indian educational theorists
20th-century Indian engineers
Deaths from the COVID-19 pandemic in India
Engineers from Tamil Nadu
Fellows of The National Academy of Sciences, India
IIT Delhi faculty
Indian civil engineers
Indian expatriates in the United States
IIT Kanpur faculty
IIT Kanpur people
Indian technology writers
People from Vellore district
Recipients of the Padma Shri in literature & education
Scholars from Tamil Nadu
|
53920216
|
https://en.wikipedia.org/wiki/Trojan.WinLNK.Agent
|
Trojan.WinLNK.Agent
|
Trojan.WinLNK.Agent (also Trojan:Win32/Startpage.OS) is Kaspersky Labs' detection name for a Trojan downloader, Trojan dropper, or Trojan spy.
Its first known detection goes back to May 31, 2011, according to the Microsoft Malware Protection Center. This Trojan opens an Internet Explorer browser to a predefined page (such as i.163vv.com/?96). Trojan files with the LNK extension are Windows shortcuts to a malicious file, program, or folder. A LNK file of this family launches a malicious executable or may be dropped by other malware. These files are mostly used by worms to spread via USB drives.
Other aliases
Win32/StartPage.NZQ (ESET)
Trojan.WinLNK.Startpage (Kaspersky Labs)
Trojan:Win32/Startpage.OS (Microsoft)
Other Variants
Trojan.WinLNK.Agent.ae
Trojan.WinLNK.Agent.ew
Statistics
In 2016, India had the most incidents relating to this Trojan, with 18.36% worldwide.
External links
Analysis of a file at VirusTotal
References
2011 in computing
Computer worms
Windows trojans
|
7955447
|
https://en.wikipedia.org/wiki/Computerized%20classification%20test
|
Computerized classification test
|
A computerized classification test (CCT) refers to, as its name would suggest, a test that is administered by computer for the purpose of classifying examinees. The most common CCT is a mastery test where the test classifies examinees as "Pass" or "Fail," but the term also includes tests that classify examinees into more than two categories. While the term may generally be considered to refer to all computer-administered tests for classification, it is usually used to refer to tests that are interactively administered or of variable-length, similar to computerized adaptive testing (CAT). Like CAT, variable-length CCTs can accomplish the goal of the test (accurate classification) with a fraction of the number of items used in a conventional fixed-form test.
A CCT requires several components:
An item bank calibrated with a psychometric model selected by the test designer
A starting point
An item selection algorithm
A termination criterion and scoring procedure
The starting point is not a topic of contention; research on CCT primarily investigates the application of different methods for the other three components. Note: The termination criterion and scoring procedure are separate in CAT, but the same in CCT because the test is terminated when a classification is made. Therefore, there are five components that must be specified to design a CAT.
An introduction to CCT is found in Thompson (2007) and a book by Parshall, Spray, Kalohn and Davey (2006). A bibliography of published CCT research is found below.
How it works
A CCT is very similar to a CAT. Items are administered one at a time to an examinee. After the examinee responds to the item, the computer scores it and determines if the examinee is able to be classified yet. If they are, the test is terminated and the examinee is classified. If not, another item is administered. This process repeats until the examinee is classified or another ending point is satisfied (all items in the bank have been administered, or a maximum test length is reached).
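The loop just described can be summarized in a short sketch. It is illustrative only and not taken from any particular CCT implementation: select_item, get_response, and classify are placeholders for the item selection algorithm, the examinee's scored response, and the termination/scoring procedure covered in the following sections, and the maximum test length of 50 items is an arbitrary choice.

<syntaxhighlight lang="python">
def administer_cct(item_bank, select_item, get_response, classify, max_items=50):
    """Generic variable-length CCT loop: items are administered one at a time
    until the termination criterion produces a classification, or the maximum
    test length / item bank is exhausted and a decision is forced."""
    administered = []   # items given so far
    responses = []      # scored responses, e.g. 1 = correct, 0 = incorrect
    while len(administered) < min(max_items, len(item_bank)):
        item = select_item(item_bank, administered, responses)     # item selection algorithm
        responses.append(get_response(item))                        # administer and score the item
        administered.append(item)
        decision = classify(responses, administered, force=False)   # termination criterion
        if decision is not None:                                     # e.g. "pass" or "fail"
            return decision, administered
    # Out of items or at maximum length: force a classification.
    return classify(responses, administered, force=True), administered
</syntaxhighlight>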
Psychometric model
Two approaches are available for the psychometric model of a CCT: classical test theory (CTT) and item response theory (IRT). Classical test theory assumes a state model because it is applied by determining item parameters for a sample of examinees determined to be in each category. For instance, several hundred "masters" and several hundred "nonmasters" might be sampled to determine the difficulty and discrimination for each item, but doing so requires that a distinct set of people in each group can be easily identified. IRT, on the other hand, assumes a trait model; the knowledge or ability measured by the test is a continuum. The classification groups will need to be more or less arbitrarily defined along the continuum, such as the use of a cutscore to demarcate masters and nonmasters, but the specification of item parameters assumes a trait model.
There are advantages and disadvantages to each. CTT offers greater conceptual simplicity. More importantly, CTT requires fewer examinees in the sample for calibration of item parameters to be used eventually in the design of the CCT, making it useful for smaller testing programs. See Frick (1992) for a description of a CTT-based CCT. Most CCTs, however, utilize IRT. IRT offers greater specificity, but the most important reason may be that the design of a CCT (and a CAT) is expensive, and is therefore more likely done by a large testing program with extensive resources. Such a program would likely use IRT.
Starting point
A CCT must have a specified starting point to enable certain algorithms. If the sequential probability ratio test is used as the termination criterion, it implicitly assumes a starting ratio of 1.0 (equal probability of the examinee being a master or nonmaster). If the termination criterion is a confidence interval approach, a starting point on theta must be specified. Usually, this is 0.0, the center of the distribution, but it could also be randomly drawn from a certain distribution if the parameters of the examinee distribution are known. Also, previous information regarding an individual examinee, such as their score the last time they took the test (if re-taking), may be used.
Item selection
In a CCT, items are selected for administration throughout the test, unlike the traditional method of administering a fixed set of items to all examinees. While this is usually done by individual item, it can also be done in groups of items known as testlets (Luecht & Nungester, 1996; Vos & Glas, 2000).
Methods of item selection fall into two categories: cutscore-based and estimate-based. Cutscore-based methods (also known as sequential selection) maximize the information provided by the item at the cutscore, or cutscores if there are more than one, regardless of the ability of the examinee. Estimate-based methods (also known as adaptive selection) maximize information at the current estimate of examinee ability, regardless of the location of the cutscore. Both work efficiently, but the efficiency depends in part on the termination criterion employed. Because the sequential probability ratio test only evaluates probabilities near the cutscore, cutscore-based item selection is more appropriate. Because the confidence interval termination criterion is centered around the examinee's ability estimate, estimate-based item selection is more appropriate. This is because the test will make a classification when the confidence interval is small enough to be completely above or below the cutscore (see below). The confidence interval will be smaller when the standard error of measurement is smaller, and the standard error of measurement will be smaller when there is more information at the theta level of the examinee.
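As an illustration of the difference, under a two-parameter logistic (2PL) IRT model both rules reduce to maximizing Fisher information at a target value of theta; cutscore-based selection fixes that target at the cutscore, while estimate-based selection moves it with the current ability estimate. A minimal sketch follows (the 2PL model and the item parameters are assumptions made only for illustration):

import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def information(theta, a, b):
    """Fisher information of a 2PL item at ability level theta."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def select_item(available, target_theta):
    """Pick the unadministered item with maximum information at target_theta.
    Cutscore-based selection passes the cutscore; estimate-based selection
    passes the current ability estimate."""
    return max(available, key=lambda item: information(target_theta, item["a"], item["b"]))

bank = [{"a": 1.2, "b": -0.5}, {"a": 0.8, "b": 0.1}]   # hypothetical item parameters
print(select_item(bank, target_theta=0.0))             # the more informative item at theta = 0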
Termination criterion
There are three termination criteria commonly used for CCTs. Bayesian decision theory methods offer great flexibility by presenting an infinite choice of loss/utility structures and evaluation considerations, but also introduce greater arbitrariness. A confidence interval approach calculates a confidence interval around the examinee's current theta estimate at each point in the test, and classifies the examinee when the interval falls completely within a region of theta that defines a classification. This was originally known as adaptive mastery testing (Kingsbury & Weiss, 1983), but does not necessarily require adaptive item selection, nor is it limited to the two-classification mastery testing situation. The sequential probability ratio test (Reckase, 1983) defines the classification problem as a hypothesis test that the examinee's theta is equal to a specified point above the cutscore or a specified point below the cutscore.
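A minimal sketch of the sequential probability ratio test as a termination criterion is shown below (assuming a 2PL response model, two hypothesized theta points bracketing the cutscore, and nominal error rates alpha and beta; this is an illustrative outline, not code from any of the cited papers):

import math

def p_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def sprt_classify(responses, theta_below, theta_above, alpha=0.05, beta=0.05):
    """Wald's SPRT for two-category (master/nonmaster) classification.
    responses: list of (item, x) pairs, x = 1 for correct, 0 for incorrect,
    where item is a dict holding 2PL parameters 'a' and 'b'."""
    log_lr = 0.0
    for item, x in responses:
        p_hi = p_2pl(theta_above, item["a"], item["b"])
        p_lo = p_2pl(theta_below, item["a"], item["b"])
        # log-likelihood ratio of this response under the two hypotheses
        log_lr += math.log(p_hi if x else 1.0 - p_hi) - math.log(p_lo if x else 1.0 - p_lo)
    upper = math.log((1.0 - beta) / alpha)   # classify as master at or above this bound
    lower = math.log(beta / (1.0 - alpha))   # classify as nonmaster at or below this bound
    if log_lr >= upper:
        return "master"
    if log_lr <= lower:
        return "nonmaster"
    return None                              # no decision yet; keep testing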
References
Bibliography of CCT research
Armitage, P. (1950). Sequential analysis with more than two alternative hypotheses, and its relation to discriminant function analysis. Journal of the Royal Statistical Society, 12, 137–144.
Braun, H., Bejar, I.I., and Williamson, D.M. (2006). Rule-based methods for automated scoring: Application in a licensing context. In Williamson, D.M., Mislevy, R.J., and Bejar, I.I. (Eds.) Automated scoring of complex tasks in computer-based testing. Mahwah, NJ: Erlbaum.
Dodd, B. G., De Ayala, R. J., & Koch, W. R. (1995). Computerized adaptive testing with polytomous items. Applied Psychological Measurement, 19, 5-22.
Eggen, T. J. H. M. (1999). Item selection in adaptive testing with the sequential probability ratio test. Applied Psychological Measurement, 23, 249–261.
Eggen, T. J. H. M, & Straetmans, G. J. J. M. (2000). Computerized adaptive testing for classifying examinees into three categories. Educational and Psychological Measurement, 60, 713–734.
Epstein, K. I., & Knerr, C. S. (1977). Applications of sequential testing procedures to performance testing. Paper presented at the 1977 Computerized Adaptive Testing Conference, Minneapolis, MN.
Ferguson, R. L. (1969). The development, implementation, and evaluation of a computer-assisted branched test for a program of individually prescribed instruction. Unpublished doctoral dissertation, University of Pittsburgh.
Frick, T. W. (1989). Bayesian adaptation during computer-based tests and computer-guided exercises. Journal of Educational Computing Research, 5, 89–114.
Frick, T. W. (1990). A comparison of three decisions models for adapting the length of computer-based mastery tests. Journal of Educational Computing Research, 6, 479–513.
Frick, T. W. (1992). Computerized adaptive mastery tests as expert systems. Journal of Educational Computing Research, 8, 187–213.
Huang, C.-Y., Kalohn, J.C., Lin, C.-J., and Spray, J. (2000). Estimating Item Parameters from Classical Indices for Item Pool Development with a Computerized Classification Test. (Research Report 2000–4). Iowa City, IA: ACT, Inc.
Jacobs-Cassuto, M.S. (2005). A Comparison of Adaptive Mastery Testing Using Testlets With the 3-Parameter Logistic Model. Unpublished doctoral dissertation, University of Minnesota, Minneapolis, MN.
Jiao, H., & Lau, A. C. (2003). The Effects of Model Misfit in Computerized Classification Test. Paper presented at the annual meeting of the National Council of Educational Measurement, Chicago, IL, April 2003.
Jiao, H., Wang, S., & Lau, C. A. (2004). An Investigation of Two Combination Procedures of SPRT for Three-category Classification Decisions in Computerized Classification Test. Paper presented at the annual meeting of the American Educational Research Association, San Antonio, April 2004.
Kalohn, J. C., & Spray, J. A. (1999). The effect of model misspecification on classification decisions made using a computerized test. Journal of Educational Measurement, 36, 47–59.
Kingsbury, G.G., & Weiss, D.J. (1979). An adaptive testing strategy for mastery decisions. Research report 79–05. Minneapolis: University of Minnesota, Psychometric Methods Laboratory.
Kingsbury, G.G., & Weiss, D.J. (1983). A comparison of IRT-based adaptive mastery testing and a sequential mastery testing procedure. In D. J. Weiss (Ed.), New horizons in testing: Latent trait theory and computerized adaptive testing (pp. 237–254). New York: Academic Press.
Lau, C. A. (1996). Robustness of a unidimensional computerized testing mastery procedure with multidimensional testing data. Unpublished doctoral dissertation, University of Iowa, Iowa City IA.
Lau, C. A., & Wang, T. (1998). Comparing and combining dichotomous and polytomous items with SPRT procedure in computerized classification testing. Paper presented at the annual meeting of the American Educational Research Association, San Diego.
Lau, C. A., & Wang, T. (1999). Computerized classification testing under practical constraints with a polytomous model. Paper presented at the annual meeting of the American Educational Research Association, Montreal, Canada.
Lau, C. A., & Wang, T. (2000). A new item selection procedure for mixed item type in computerized classification testing. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, Louisiana.
Lewis, C., & Sheehan, K. (1990). Using Bayesian decision theory to design a computerized mastery test. Applied Psychological Measurement, 14, 367–386.
Lin, C.-J. & Spray, J.A. (2000). Effects of item-selection criteria on classification testing with the sequential probability ratio test. (Research Report 2000–8). Iowa City, IA: ACT, Inc.
Linn, R. L., Rock, D. A., & Cleary, T. A. (1972). Sequential testing for dichotomous decisions. Educational & Psychological Measurement, 32, 85–95.
Luecht, R. M. (1996). Multidimensional Computerized Adaptive Testing in a Certification or Licensure Context. Applied Psychological Measurement, 20, 389–404.
Reckase, M. D. (1983). A procedure for decision making using tailored testing. In D. J. Weiss (Ed.), New horizons in testing: Latent trait theory and computerized adaptive testing (pp. 237–254). New York: Academic Press.
Rudner, L. M. (2002). An examination of decision-theory adaptive testing procedures. Paper presented at the annual meeting of the American Educational Research Association, April 1–5, 2002, New Orleans, LA.
Sheehan, K., & Lewis, C. (1992). Computerized mastery testing with nonequivalent testlets. Applied Psychological Measurement, 16, 65–76.
Spray, J. A. (1993). Multiple-category classification using a sequential probability ratio test (Research Report 93–7). Iowa City, Iowa: ACT, Inc.
Spray, J. A., Abdel-fattah, A. A., Huang, C., and Lau, C. A. (1997). Unidimensional approximations for a computerized test when the item pool and latent space are multidimensional (Research Report 97–5). Iowa City, Iowa: ACT, Inc.
Spray, J. A., & Reckase, M. D. (1987). The effect of item parameter estimation error on decisions made using the sequential probability ratio test (Research Report 87–17). Iowa City, IA: ACT, Inc.
Spray, J. A., & Reckase, M. D. (1994). The selection of test items for decision making with a computerized adaptive test. Paper presented at the Annual Meeting of the National Council for Measurement in Education (New Orleans, LA, April 5–7, 1994).
Spray, J. A., & Reckase, M. D. (1996). Comparison of SPRT and sequential Bayes procedures for classifying examinees into two categories using a computerized test. Journal of Educational & Behavioral Statistics,21, 405–414.
Thompson, N.A. (2006). Variable-length computerized classification testing with item response theory. CLEAR Exam Review, 17(2).
Vos, H. J. (1998). Optimal sequential rules for computer-based instruction. Journal of Educational Computing Research, 19, 133–154.
Vos, H. J. (1999). Applications of Bayesian decision theory to sequential mastery testing. Journal of Educational and Behavioral Statistics, 24, 271–292.
Wald, A. (1947). Sequential analysis. New York: Wiley.
Weiss, D. J., & Kingsbury, G. G. (1984). Application of computerized adaptive testing to educational problems. Journal of Educational Measurement, 21, 361–375.
Weissman, A. (2004). Mutual information item selection in multiple-category classification CAT. Paper presented at the Annual Meeting of the National Council for Measurement in Education, San Diego, CA.
Weitzman, R. A. (1982a). Sequential testing for selection. Applied Psychological Measurement, 6, 337–351.
Weitzman, R. A. (1982b). Use of sequential testing to prescreen prospective entrants into military service. In D. J. Weiss (Ed.), Proceedings of the 1982 Computerized Adaptive Testing Conference. Minneapolis, MN: University of Minnesota, Department of Psychology, Psychometric Methods Program, 1982.
External links
Measurement Decision Theory by Lawrence Rudner
CAT Central by David J. Weiss
Psychometrics
Computer-based testing
School examinations
|
520050
|
https://en.wikipedia.org/wiki/IRAF
|
IRAF
|
IRAF (Image Reduction and Analysis Facility) is a collection of software written at the National Optical Astronomy Observatory (NOAO) geared towards the reduction of astronomical images in pixel array form. This is primarily data taken from imaging array detectors such as CCDs. It is available for all major operating systems for mainframes and desktop computers. Although written for UNIX-like operating systems, use on Microsoft Windows is made possible by Cygwin. It is primarily used on Linux distributions, with a growing share of Mac OS X users. IRAF is installed by default in Distro Astro, a Linux distribution for astronomers.
IRAF commands (known as tasks) are organized into package structures. Additional packages may be added to IRAF, and packages may contain other packages. Many packages are available from NOAO and external developers, often focusing on a particular branch of research or a particular facility. Of particular note are the STSDAS and TABLES packages from STScI.
Functionality available in IRAF includes the calibration of the fluxes and positions of astronomical objects within an image, compensation for sensitivity variations between detector pixels, combination of multiple images or measurement of the redshifts of absorption or emission lines in a spectrum.
Licensing
The licensing status of IRAF was long conflicted: most of the code followed the MIT license scheme, but some older parts were under a different license. Several functions in the graphics infrastructure were under a non-free software license that did not permit redistribution without permission. As this code was tightly integrated into several of IRAF's tasks, the package as a whole was seen by several projects as non-redistributable and therefore non-free, and so efforts to package the software for drop-in installation in Linux systems lapsed.
In March 2012, NOAO released v2.16 of IRAF, citing one of the "new capabilities" as "Removal of all license restrictions - IRAF is now free", and in 2013, there were efforts to create RPM Package Manager and deb packages of IRAF.
User-defined tasks
IRAF allows users to write their own tasks in two main ways. One is by writing non-compiled procedure scripts. The second is through compiled subset pre-processor (SPP) programs. Tutorial documents exist for both methods.
Technical details
A full IRAF working environment usually requires two other applications: an extended xterm window with a graphics windows (called xgterm and distributed in a separate X11-IRAF package by NOAO) and an image display program referred to as an "image server". The two most popular image servers are ds9 (by SAO) and ximtool (NOAO).
The ximtool image server supports 24-bit colors and is available for testing.
See also
Space flight simulation game
List of space flight simulation games
Planetarium software
List of observatory software
References
External links
IRAF.Net Forum
IRAF Project Homepage
IRAF on a Knoppix Live CD (ISO-File download)
IRAF installation guide for Windows via Cygwin
IRAF installation guide for Linux (Ubuntu)
Astronomy software
Cross-platform software
|
24364
|
https://en.wikipedia.org/wiki/PDP-8
|
PDP-8
|
The PDP-8 is a 12-bit minicomputer that was produced by Digital Equipment Corporation (DEC). It was the first commercially successful minicomputer, with over 50,000 units being sold over the model's lifetime. Its basic design follows the pioneering LINC but has a smaller instruction set, which is an expanded version of the PDP-5 instruction set. Similar machines from DEC are the PDP-12 which is a modernized version of the PDP-8 and LINC concepts, and the PDP-14 industrial controller system.
Overview
The earliest PDP-8 model, informally known as a "Straight-8", was introduced on 22 March 1965 priced at $18,500 (). It used diode–transistor logic packaged on flip chip cards in a machine about the size of a small household refrigerator. It was the first computer to be sold for under $20,000, making it the best-selling computer in history at that time. The Straight-8 was supplanted in 1966 by the PDP-8/S, which was available in desktop and rack-mount models. Using a one-bit serial arithmetic logic unit (ALU) allowed the PDP-8/S to be smaller and less expensive, although slower than the original PDP-8. A basic 8/S sold for under $10,000, the first machine to reach that milestone.
Later systems (the PDP-8/I and /L, the PDP-8/E, /F, and /M, and the PDP-8/A) returned to a faster, fully parallel implementation but use much less costly transistor–transistor logic (TTL) MSI logic. Most surviving PDP-8s are from this era. The PDP-8/E is common, and well-regarded because many types of I/O devices were available for it. The last commercial PDP-8 models introduced in 1979 are called "CMOS-8s", based on CMOS microprocessors. They were not priced competitively, and the offering failed. Intersil sold the integrated circuits commercially through 1982 as the Intersil 6100 family. By virtue of their CMOS technology they had low power requirements and were used in some embedded military systems.
The chief engineer who designed the initial version of the PDP-8 was Edson de Castro, who later founded Data General.
Architectural significance
The PDP-8 combines low cost, simplicity, expandability, and careful engineering for value. The greatest historical significance was that the PDP-8's low cost and high volume made a computer available to many new customers for many new uses. Its continuing significance is as a historical example of value-engineered computer design.
The low complexity brought other costs. It made programming cumbersome, as is seen in the examples in this article and from the discussion of "pages" and "fields". Much of one's code performed the required mechanics, as opposed to setting out the algorithm. For example, subtracting a number involves computing its two's complement then adding it; writing a conditional jump involves writing a conditional skip around the jump, with the skip testing the negation of the desired condition. Some ambitious programming projects failed to fit in memory or developed design defects that could not be solved. For example, as noted below, inadvertent recursion of a subroutine produces defects that are difficult to trace to the subroutine in question.
As design advances reduced the costs of logic and memory, the programmer's time became relatively more important. Subsequent computer designs emphasized ease of programming, typically using larger and more intuitive instruction sets.
Eventually, most machine code was generated by compilers and report generators. The reduced instruction set computer returned full-circle to the PDP-8's emphasis on a simple instruction set and achieving multiple actions in a single instruction cycle, in order to maximize execution speed, although the newer computers have much longer instruction words.
Description
The PDP-8 used ideas from several 12-bit predecessors, most notably the LINC designed by W.A. Clark and C.E. Molnar, who were inspired by Seymour Cray's CDC 160 minicomputer.
The PDP-8 uses 12 bits for its word size and arithmetic (on unsigned integers from 0 to 4095 or signed integers from −2048 to +2047). However, software can do multiple-precision arithmetic. An interpreter was available for floating point operations, for example, that uses a 36-bit floating point representation with a two-word (24-bit) significand (mantissa) and one-word exponent. Subject to speed and memory limitations, the PDP-8 can perform calculations similar to more expensive contemporary electronic computers, such as the IBM 1130 and various models of the IBM System/360, while being easier to interface with external devices.
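As a small illustration of 12-bit arithmetic (a sketch in Python, not DEC code): all results wrap around modulo 4,096, the same bit pattern can be read as an unsigned or a two's-complement signed value, and the carry out of an add is what the hardware folds into the link bit.

def to_signed12(word):
    """Interpret a 12-bit word (0..4095) as a two's-complement signed value."""
    word &= 0o7777                          # keep 12 bits
    return word - 0o10000 if word & 0o4000 else word

def add12(ac, operand):
    """12-bit add, as TAD performs it: returns the new AC and the carry out of
    bit 0 (on the real machine that carry complements the link bit L)."""
    total = (ac & 0o7777) + (operand & 0o7777)
    return total & 0o7777, (total >> 12) & 1

ac, carry = add12(0o7777, 1)                # adding 1 to -1 wraps to 0 with a carry
print(oct(ac), carry, to_signed12(0o7777))  # 0o0 1 -1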
The memory address space is also 12 bits, so the PDP-8's basic configuration has a main memory of 4,096 (212) twelve-bit words. An optional memory-expansion unit can switch banks of memories using an IOT instruction. The memory is magnetic-core memory with a cycle time of 1.5 microseconds (0.667 MHz), so that a typical two-cycle (Fetch, Execute) memory-reference instruction runs at a speed of 0.333 MIPS. The 1974 Pocket Reference Card for the PDP-8/E gives a basic instruction time of 1.2 microseconds, or 2.6 microseconds for instructions that reference memory.
The PDP-8 was designed in part to handle contemporary telecommunications and text. Six-bit character codes were in widespread use at the time, and the PDP-8's twelve-bit words can efficiently store two such characters. In addition, a six-bit teleprinter code called the teletypesetting or TTS code was in widespread use by the news wire services, and an early application for the PDP-8 was typesetting using this code.
PDP-8 instructions have a 3-bit opcode, so there are only eight instructions. The assembler provides more instruction mnemonics to a programmer by translating I/O and operate-mode instructions to combinations of the op-codes and instruction fields. It also has only three programmer-visible registers: A 12-bit accumulator (AC), a program counter (PC), and a carry flag called the "link register" (L).
For input and output, the PDP-8 has a single interrupt shared by all devices, an I/O bus accessed by I/O instructions and a direct memory access (DMA) channel. The programmed I/O bus typically runs low to medium-speed peripherals, such as printers, teletypes, paper tape punches and readers, while DMA is used for cathode ray tube screens with a light pen, analog-to-digital converters, digital-to-analog converters, tape drives, and disk drives.
To save money, the design uses inexpensive main memory for many purposes that are served by more expensive flip-flop registers in other computers, such as auxiliary counters and subroutine linkage.
Basic models use software to do multiplication and division. For faster math, the Extended Arithmetic Element (EAE) provides multiply and divide instructions with an additional register, the Multiplier/Quotient (MQ) register. The EAE was an option on the original PDP-8, the 8/I, and the 8/E, but it is an integral part of the Intersil 6100 microprocessor.
The PDP-8 is optimized for simplicity of design. Compared to more complex machines, unnecessary features were removed and logic is shared when possible. Instructions use autoincrement, autoclear, and indirect access to increase the software's speed, reduce memory use, and substitute inexpensive memory for expensive registers.
The electronics of a basic PDP-8 CPU has only four 12-bit registers: the accumulator, program counter, memory-buffer register, and memory-address register. To save money, these served multiple purposes at different points in the operating cycle. For example, the memory buffer register provides arithmetic operands, is part of the instruction register, and stores data to rewrite the core memory. (This restores the core data destroyed by the read.)
Because of their simplicity, early PDP-8 models were less expensive than most other commercially available computers. However, they used costly production methods often used for prototypes. They used thousands of very small, standardized logic-modules, with gold connectors, integrated by a costly, complex wire-wrapped backplane in a large cabinet.
In the later 8/S model, introduced in August 1966, two different logic voltages increased the fan-out of the inexpensive diode–transistor logic. The 8/S also reduced the number of logic gates by using a serial, single-bit-wide data path to do arithmetic. The CPU of the PDP-8/S has only about 519 logic gates. In comparison, small microcontrollers (as of 2008) usually have 15,000 or more. The reductions in the electronics permitted a much smaller case, about the size of a bread-box. The 8/S was designed by Saul Dinman.
The even later PDP-8/E is a larger, more capable computer, but further reengineered for better value. It employs faster transistor–transistor logic, in integrated circuits. The core memory was redesigned. It allows expansion with less expense because it uses the OMNIBUS in place of the wire-wrapped backplane on earlier models. (A personal account of the development of the PDP-8/E can be read on the Engineering and Technology History Wiki.)
Versions of the PDP-8
The total sales figure for the PDP-8 family has been estimated at over 300,000 machines. Models manufactured include the original PDP-8, the PDP-8/S, the PDP-8/I and /L, the PDP-8/E, /F, and /M, the PDP-8/A, and the later CMOS-8 machines.
Latter-day implementations
The PDP-8 is readily emulated, as its instruction set is much simpler than modern architectures. Enthusiasts have created entire PDP-8s using single FPGA devices.
Several software simulations of a PDP-8 are available on the Internet, as well as open-source hardware re-implementations. The best of these correctly execute DEC's operating systems and diagnostic software. The software simulations often simulate late-model PDP-8s with all possible peripherals. Even these use only a tiny fraction of the capacity of a modern personal computer.
One of the first commercial versions of a PDP-8/S virtual machine ran on a Kaypro 386 (an 80386-based computer) and was written in the C computer language (before the ANSI-C standard was finalized) and assembler by David Beecher of Denver, Colorado. It replaced a failing PDP-8/S computer that operated the fuel handling machine at Reactor #85, the Platteville, Colorado Nuclear Fuel powered Electric Generating Station, Ft. St. Vrain. It was reviewed by Rockwell International and performed flawlessly for 2.5 years during the operation of the Fuel Handling Machine while it was used to remove fuel from the reactor core and decommission the plant. It included a simulated paper tape loader and front panel.
Input/output
The I/O systems underwent huge changes during the PDP-8 era. Early PDP-8 models use a front panel interface, a paper-tape reader and a teletype printer with an optional paper-tape punch. Over time, I/O systems such as magnetic tape, RS-232 and current loop dumb terminals, punched card readers, and fixed-head disks were added. Toward the end of the PDP-8 era, floppy disks and moving-head cartridge disk drives were popular I/O devices. Modern enthusiasts have created standard PC style IDE hard disk adapters for real and simulated PDP-8 computers.
Several types of I/O are supported:
In-backplane dedicated slots for I/O controllers
A "Negative" I/O bus (using negative voltage signalling)
A "Positive" I/O bus (the same architecture using TTL signalling)
The Omnibus (a backplane of undedicated system bus slots) introduced in the PDP-8/E. (Details are described in the referenced IEEE article listed below.)
A simplified, inexpensive form of DMA called "three-cycle data break" is supported; this requires the assistance of the processor. The "data break" method moves some of common logic needed to implement DMA I/O from each I/O device into one common copy of the logic within the processor. "Data break" places the processor in charge of maintaining the DMA address and word count registers. In three successive memory cycles, the processor updates the word count, updates the transfer address, and stores or retrieves the actual I/O data word.
One-cycle data break effectively triples the DMA transfer rate because only the target data needed to be transferred to and from the core memory. However, the I/O devices need more electronic logic to manage their own word count and transfer address registers. By the time the PDP-8/E was introduced, electronic logic had become less expensive and "one-cycle data break" became more popular.
Programming facilities
Early PDP-8 systems did not have an operating system, just a front panel with run and halt switches. Software development systems for the PDP-8 series began with the most basic front-panel entry of raw binary machine code (booting entry).
In the middle era, various paper tape "operating systems" were developed. Many utility programs became available on paper tape. PAL-8 assembly language source code was often stored on paper tape, read into memory, and saved to paper tape. PAL assembled from paper tape into memory. Paper tape versions of a number of programming languages were available, including DEC's FOCAL interpreter and a 4K FORTRAN compiler and runtime.
Toward the end of the PDP-8 era, operating systems such as OS/8 and COS-310 allowed a traditional line mode editor and command-line compiler development system using languages such as PAL-III assembly language, FORTRAN, BASIC, and DIBOL.
Fairly modern and advanced real-time operating system (RTOS) and preemptive multitasking multi-user systems were available: a real-time system (RTS-8) was available as were multiuser commercial systems (COS-300 and COS-310) and a dedicated single-user word-processing system (WPS-8).
A time-sharing system, TSS-8, was also available. TSS-8 allows multiple users to log into the system via 110-baud terminals, and edit, compile and debug programs. Languages include a special version of BASIC, a FORTRAN subset similar to FORTRAN-1 (no user-written subroutines or functions), an ALGOL subset, FOCAL, and an assembler called PAL-D.
A fair amount of user-donated software for the PDP-8 was available from DECUS, the Digital Equipment Corporation User Society, and often came with full source listings and documentation.
Instruction set
The three high-order bits of the 12-bit instruction word (labelled bits 0 through 2) are the operation code. For the six operations that refer to memory, bits 5 through 11 provide a 7-bit address. Bit 4, if set, says to complete the address using the 5 high-order bits of the program counter (PC) register, meaning that the addressed location was within the same 128 words as the instruction. If bit 4 is clear, zeroes are used, so the addressed location is within the first 128 words of memory. Bit 3 specifies indirection; if set, the address obtained as described so far points to a 12-bit value in memory that gives the actual effective address for the instruction; this allows operands to be anywhere in memory at the expense of an additional word. The JMP instruction does not operate on a memory word, except if indirection is specified, but has the same bit fields.
Memory pages
This use of the instruction word divides the 4,096-word memory into 128-word pages; bit 4 of the instruction selects either the current page or page 0 (addresses 0000–0177 in octal). Memory in page 0 is at a premium, since variables placed here can be addressed directly from any page. (Moreover, address 0000 is where any interrupt service routine must start, and addresses 0010–0017 have the special property of being auto-incremented before any indirect reference through them.)
The standard assembler places constant values for arithmetic in the current page. Likewise, cross-page jumps and subroutine calls use an indirect address in the current page.
It was important to write routines to fit within 128-word pages, or to arrange routines to minimize page transitions, as references and jumps outside the current page require an extra word. Consequently, much time was spent cleverly conserving one or several words. Programmers deliberately placed code at the end of a page to achieve a free transition to the next page as PC was incremented.
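The addressing rules above can be summarized in a short decoding sketch (illustrative Python, not part of any DEC software); it shows how the opcode, indirect bit, page bit, and 7-bit offset of a memory-reference instruction combine with the program counter into an effective address:

def decode_memory_reference(instruction, pc, memory):
    """Decode a PDP-8 memory-reference instruction word (bits numbered 0..11
    from the most significant end) and return (opcode, effective_address)."""
    opcode   = (instruction >> 9) & 0o7       # bits 0-2
    indirect = (instruction >> 8) & 1         # bit 3
    page_bit = (instruction >> 7) & 1         # bit 4: 1 = current page, 0 = page zero
    offset   = instruction & 0o177            # bits 5-11
    page = (pc & 0o7600) if page_bit else 0   # high 5 bits of the PC select the page
    address = page | offset
    if indirect:                              # one extra memory word gives the address
        address = memory[address] & 0o7777    # (auto-increment of 0010-0017 omitted here)
    return opcode, address

memory = [0] * 4096
print(decode_memory_reference(0o1250, 0o0200, memory))   # (1, 168): TAD 0250 on the current page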
Basic instructions
000 – AND – AND the memory operand with AC.
001 – TAD – Two's complement ADd the memory operand to <L,AC> (a 12-bit signed value (AC) with carry in L).
010 – ISZ – Increment the memory operand and Skip next instruction if result is Zero.
011 – DCA – Deposit AC into the memory operand and Clear AC.
100 – JMS – JuMp to Subroutine (storing return address in first word of subroutine!).
101 – JMP – JuMP.
110 – IOT – Input/Output Transfer (see below).
111 – OPR – microcoded OPeRations (see below).
IOT (Input-Output Transfer) instructions
The PDP-8 processor defined few of the IOT instructions, but simply provided a framework. Most IOT instructions were defined by the individual I/O devices.
Device
Bits 3 through 8 of an IOT instruction select an I/O device. Some of these device addresses are standardized by convention:
00 is handled by the processor and not sent to any I/O device (see below).
01 is usually the high-speed paper tape reader.
02 is the high-speed paper tape punch.
03 is the console keyboard (and any associated low-speed paper tape reader).
04 is the console printer (and any associated low-speed paper tape punch).
Instructions for device 0 affect the processor as a whole. For example, ION (6001) enables interrupt processing, and IOF (6002) disables it.
Function
Bits 9 through 11 of an IOT instruction select the function(s) the device performs. Simple devices (such as the paper tape reader and punch and the console keyboard and printer) use the bits in standard ways:
Bit 11 causes the processor to skip the next instruction if the I/O device is ready.
Bit 10 clears AC.
Bit 9 moves a word between AC and the device, initiates another I/O transfer, and clears the device's "ready" flag.
These operations take place in a well-defined order that gives useful results if more than one bit is set.
More complicated devices, such as disk drives, use these 3 bits in device-specific fashions. Typically, a device decodes the 3 bits to give 8 possible function codes.
OPR (OPeRate)
Many operations are achieved using OPR, including most of the conditionals. OPR does not address a memory location; conditional execution is achieved by conditionally skipping one instruction, which is typically a JMP.
The OPR instruction was said to be "microcoded." This did not mean what the word means today (that a lower-level program fetched and interpreted the OPR instruction), but meant that each bit of the instruction word specifies a certain action, and the programmer could achieve several actions in a single instruction cycle by setting multiple bits. In use, a programmer can write several instruction mnemonics alongside one another, and the assembler combines them with OR to devise the actual instruction word. Many I/O devices support "microcoded" IOT instructions.
Microcoded actions take place in a well-defined sequence designed to maximize the utility of many combinations.
The OPR instructions come in Groups. Bits 3, 8 and 11 identify the Group of an OPR instruction, so it is impossible to combine the microcoded actions from different groups.
Group 1
Bit:              00  01  02  03  04  05  06  07  08  09  10  11
Contents:          1   1   1   0 CLA CLL CMA CML RAR RAL BSW IAC
Execution order:                   1   1   2   2   4   4   4   3
7200 – CLA – Clear Accumulator
7100 – CLL – Clear the L Bit
7040 – CMA – Ones Complement Accumulator
7020 – CML – Complement L Bit
7001 – IAC – Increment <L,AC>
7010 – RAR – Rotate <L,AC> Right
7004 – RAL – Rotate <L,AC> Left
7012 – RTR – Rotate <L,AC> Right Twice
7006 – RTL – Rotate <L,AC> Left Twice
7002 – BSW – Byte Swap 6-bit "bytes" (PDP 8/e and up)
In most cases, the operations are sequenced so that they can be combined in the most useful ways. For example, combining CLA (CLear Accumulator), CLL (CLear Link), and IAC (Increment ACcumulator) first clears the AC and Link, then increments the accumulator, leaving it set to 1. Adding RAL to the mix (so CLA CLL IAC RAL) causes the accumulator to be cleared, incremented, then rotated left, leaving it set to 2. In this way, small integer constants were placed in the accumulator with a single instruction.
The combination CMA IAC, which the assembler lets one abbreviate as CIA, produces the arithmetic inverse of AC: the two's-complement negation. Since there is no subtraction instruction, only the two's-complement add (TAD), computing the difference of two operands requires first negating the subtrahend.
A Group 1 OPR instruction that has none of the microprogrammed bits set performs no action. The programmer can write NOP (No Operation) to assemble such an instruction.
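The effect of combining Group 1 micro-operations can be modeled with a short simulation (an illustrative Python sketch, not DEC microcode); it applies the bits in the documented execution order, so 7305 (CLA CLL IAC RAL) yields the constant 2 and 7041 (CIA) negates the accumulator:

def group1(instruction, ac=0, link=0):
    """Simulate a PDP-8 Group 1 OPR instruction (7xxx with bit 3 clear) on a
    12-bit AC and 1-bit link, applying the micro-ops in execution order.
    BSW (bit 10 with neither rotate bit) is omitted from this sketch."""
    if instruction & 0o0200: ac = 0                       # CLA  (order 1)
    if instruction & 0o0100: link = 0                     # CLL  (order 1)
    if instruction & 0o0040: ac ^= 0o7777                 # CMA  (order 2)
    if instruction & 0o0020: link ^= 1                    # CML  (order 2)
    if instruction & 0o0001:                              # IAC  (order 3)
        ac += 1
        if ac > 0o7777:
            ac, link = 0, link ^ 1                        # carry complements the link
    twice = 2 if instruction & 0o0002 else 1              # bit 10: rotate twice
    for _ in range(twice):
        if instruction & 0o0010:                          # RAR  (order 4)
            ac, link = (ac >> 1) | (link << 11), ac & 1
        elif instruction & 0o0004:                        # RAL  (order 4)
            ac, link = ((ac << 1) | link) & 0o7777, (ac >> 11) & 1
    return ac, link

print(group1(0o7305))                # CLA CLL IAC RAL -> (2, 0)
print(group1(0o7041, ac=0o0005))     # CIA (CMA IAC): negate, 5 -> 4091 (0o7773 = -5)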
Group 2, Or Group
Bit:              00  01  02  03  04  05  06  07  08  09  10  11
Contents:          1   1   1   1 CLA SMA SZA SNL   0 OSR HLT   0
Execution order:                   2   1   1   1       3   3
7600 – CLA – Clear AC
7500 – SMA – Skip on AC < 0 (or group)
7440 – SZA – Skip on AC = 0 (or group)
7420 – SNL – Skip on L ≠ 0 (or group)
7404 – OSR – logically 'or' front-panel switches with AC
7402 – HLT – Halt
When bit 8 is clear, a skip is performed if any of the specified conditions are true. For example, "SMA SZA", opcode 7540, skips if AC ≤ 0.
A Group 2 OPR instruction that has none of the microprogrammed bits set is another No-Op instruction.
Group 2, And Group
Bit:              00  01  02  03  04  05  06  07  08  09  10  11
Contents:          1   1   1   1 CLA SPA SNA SZL   1 OSR HLT   0
Execution order:                   2   1   1   1       3   2
7410 – SKP – Skip Unconditionally
7610 – CLA – Clear AC
7510 – SPA – Skip on AC ≥ 0 (and group)
7450 – SNA – Skip on AC ≠ 0 (and group)
7430 – SZL – Skip on L = 0 (and group)
When bit 8 is set, the Group 2, Or skip condition is inverted, via De Morgan's laws: the skip is not performed if any of the group 2, Or conditions are true, meaning that all of the specified skip conditions must be true. For example, "SPA SNA", opcode 7550, skips if AC > 0. If none of bits 5–7 are set, then the skip is unconditional.
Group 3
Unused bit combinations of OPR are defined as a third Group of microprogrammed actions mostly affecting the MQ (Multiplier/Quotient) register. The MQ register and the extended arithmetic element (EAE) instructions are optional and only exist when EAE option was purchased.
Bit:              00  01  02  03  04  05  06  07  08  09  10  11
Contents:          1   1   1   1 CLA MQA SCA MQL [-- CODE --]   1
Execution order:                   1*  2   2   2 [---  3 ---]
7601 – CLA – Clear AC
7501 – MQA – Multiplier Quotient with AC (logical or MQ into AC)
7441 – SCA – Step counter load into AC
7421 – MQL – Multiplier Quotient Load (Transfer AC to MQ, clear AC)
7621 – CAM – CLA + MQL clears both AC and MQ.
Typically CLA and MQA were combined to transfer MQ into AC. Another useful combination is MQA and MQL, to exchange the two registers.
Three bits specified a multiply/divide instruction to perform:
7401 – No operation
7403 – SCL – Step Counter Load (immediate word follows, PDP-8/I and up)
7405 – MUY – Multiply
7407 – DVI – Divide
7411 – NMI – Normalize
7413 – SHL – Shift left (immediate word follows)
7415 – ASR – Arithmetic shift right
7417 – LSR – Logical shift right
Memory control
A 12-bit word can have 4,096 different values, and this is the maximum number of words the original PDP-8 can address indirectly through a word pointer. 4,096 12-bit words represent 6,144 bytes in modern terminology, or 6 kB. As programs became more complex and the price of memory fell, it became desirable to expand this limit.
To maintain compatibility with pre-existing programs, new hardware outside the original design added high-order bits to the effective addresses generated by the program. The Memory Extension Controller expands the addressable memory by a factor of 8, to a total of 32,768 words. This expansion was thought sufficient because, with core memory then costing about 50 cents a word, a full 32K of memory would equal the cost of the CPU.
Each 4K of memory is called a field. The Memory Extension Controller contains two three-bit registers: the DF (Data Field) and the IF (Instruction Field). These registers specify a field for each memory reference of the CPU, allowing a total of 15 bits of address. The IF register specifies the field for instruction fetches and direct memory references; the DF register specifies the field for indirect data accesses. A program running in one field can reference data in the same field by direct addressing, and reference data in another field by indirect addressing.
A set of I/O instructions in the range 6200 through 6277 is handled by the Memory Extension Controller and give access to the DF and IF registers. The 62X1 instruction (CDF, Change Data Field) set the data field to X. Similarly 62X2 (CIF) set the instruction field, and 62X3 set both. Pre-existing programs would never execute CIF or CDF; the DF and IF registers would both point to the same field, a single field to which these programs were limited. The effect of the CIF instruction was deferred to coincide with the next JMP or JMS instruction, so that executing CIF would not cause a jump.
It was more complicated for multiple-field programs to deal with field boundaries and the DF and IF registers than it would have been if they could simply generate 15-bit addresses, but the design provided backward compatibility and is consistent with the 12-bit architecture used throughout the PDP-8. Compare the later Intel 8086, whose 16-bit memory addresses are expanded to 20 bits by combining them with the contents of a specified or implied segment register.
The extended memory scheme let existing programs handle increased memory with minimal changes. For example, 4K FOCAL normally had about 3K of code with only 1K left over for user program and data. With a few patches, FOCAL could use a second 4K field for user program and data. Moreover, additional 4K fields could be allocated to separate users, turning 4K FOCAL into a multi-user timesharing system.
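A short sketch (illustrative Python, not DEC software) of how a three-bit field register extends a 12-bit address to the 15-bit physical address, and how the 62X1/62X2 IOT encodings carry the field number:

def physical_address(field, address12):
    """Combine a 3-bit field register (IF or DF) with a 12-bit address to form
    the 15-bit physical address used by the Memory Extension Controller."""
    return ((field & 0o7) << 12) | (address12 & 0o7777)

def cdf(field):
    """Encode the Change Data Field instruction 62X1 for field X."""
    return 0o6201 | ((field & 0o7) << 3)

def cif(field):
    """Encode the Change Instruction Field instruction 62X2 for field X."""
    return 0o6202 | ((field & 0o7) << 3)

print(oct(physical_address(3, 0o0200)))   # 0o30200: location 0200 of field 3
print(oct(cdf(3)), oct(cif(3)))           # 0o6231 0o6232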
Virtualization
On the PDP-8/E and later models, the Memory Extension Controller was enhanced to enable machine virtualization. A program written to use a PDP-8's entire resources can coexist with other such programs on the same PDP-8 under the control of a virtual machine manager. The manager can make all I/O instructions (including those that operated on the Memory Extension Controller) cause a trap (an interrupt handled by the manager). In this way, the manager can map memory references, map data or instruction fields, and redirect I/O to different devices. Each original program has complete access to a "virtual machine" provided by the manager.
New I/O instructions to the Memory Extension Controller retrieve the current value of the data and instruction fields, letting software save and restore most of the machine state across a trap. However, a program can not sense whether the CPU is in the process of deferring the effect of a CIF instruction (whether it has executed a CIF and not yet executed the matching jump instruction). The manager has to include a complete PDP-8 emulator (not difficult for an 8-instruction machine). Whenever a CIF instruction traps to the manager, it has to emulate the instructions up to the next jump. Fortunately, as a jump usually is the next instruction after CIF, this emulation does not slow programs down much, but it is a large workaround to a seemingly small design deficiency.
By the time of the PDP-8/A, memory prices had fallen enough that memory exceeding 32K was desirable. The 8/A added a new set of instructions for handling more than eight fields of memory. The field number could now be placed in the AC, rather than hard-coded into the instruction. However, by this time, the PDP-8 was in decline, so very little standard software was modified to use these new features.
Examples
The following examples show code in PDP-8 assembly language as one might write for the PAL-III assembler.
Comparing two numbers
The following piece of code shows what is needed just to compare two numbers:
/Compare numbers in memory at OPD1 and OPD2
CLA CLL /Must start with 0 in AC and link
TAD OPD1 /Load first operand into AC (by adding it to 0); link is still clear
CIA /Complement, then increment AC, negating it
TAD OPD2 /AC now has OPD2-OPD1; if OPD2≥OPD1, sum overflows and link is set
SZL /Skip if link is clear
JMP OP2GT /Jump somewhere in the case that OPD2≥OPD1;
/Otherwise, fall through to code below.
As shown, much of the text of a typical PDP-8 program focuses not on the author's intended algorithm but on low-level mechanics. An additional readability problem is that in conditional jumps such as the one shown above, the conditional instruction (which skips around the JMP) highlights the opposite of the condition of interest.
String output
This complete PDP-8 assembly language program outputs "Hello, world!" to the teleprinter.
*10 / Set current assembly origin to address 10,
STPTR, STRNG-1 / An auto-increment register (one of eight at 10-17)
*200 / Set current assembly origin to program text area
HELLO, CLA CLL / Clear AC and Link again (needed when we loop back from tls)
TAD I Z STPTR / Get next character, indirect via PRE-auto-increment address from the zero page
SNA / Skip if non-zero (not end of string)
HLT / Else halt on zero (end of string)
TLS / Output the character in the AC to the teleprinter
TSF / Skip if teleprinter ready for character
JMP .-1 / Else jump back and try again
JMP HELLO / Jump back for the next character
STRNG, 310 / H
345 / e
354 / l
354 / l
357 / o
254 /,
240 / (space)
367 / w
357 / o
362 / r
354 / l
344 / d
241 / !
0 / End of string
$HELLO /DEFAULT TERMINATOR
Subroutines
The PDP-8 processor does not implement a stack upon which to store registers or other context when a subroutine is called or an interrupt occurs. (A stack can be implemented in software, as demonstrated in the next section.) Instead, the JMS instruction simply stores the updated PC (pointing past JMS, to the return address) at the effective address and jumps to the effective address plus one. The subroutine returns to its caller using an indirect JMP instruction that addresses the subroutine's first word.
For example, here is "Hello, World!" re-written to use a subroutine. When the JMS instruction jumps to the subroutine, it modifies the 0 coded at location OUT1:
*10 / Set current assembly origin to address 10,
STPTR, STRNG-1 / An auto-increment register (one of eight at 10-17)
*200 / Set assembly origin (load address)
LOOP, TAD I STPTR / Pre-increment mem location 10, fetch indirect to get the next character of our message
SNA / Skip on non-zero AC
HLT / Else halt at end of message
JMS OUT1 / Write out one character
JMP LOOP / And loop back for more
OUT1, 0 / Will be replaced by caller's updated PC
TSF / Skip if printer ready
JMP .-1 / Wait for flag
TLS / Send the character in the AC
CLA CLL / Clear AC and Link for next pass
JMP I OUT1 / Return to caller
STRNG, "H / A well-known message
"e /
"l / NOTE:
"l /
"o / Strings in PAL-8 and PAL-III were "sixbit"
", / To use ASCII, we spell it out, character by character
" /
"w /
"o /
"r /
"l /
"d /
"! /
015 /
012 /
0 / Mark the end of our null-terminated string (.ASCIZ hadn't been invented yet!)
The fact that the JMS instruction uses the word just before the code of the subroutine to deposit the return address prevents reentrancy and recursion without additional work by the programmer. It also makes it difficult to use ROM with the PDP-8 because read-write return-address storage is commingled with read-only code storage in the address space. Programs intended to be placed into ROMs approach this problem in several ways:
They copy themselves to read-write memory before execution, or
They are placed into special ROM cards that provide a few words of read/write memory, accessed indirectly through the use of a thirteenth flag bit in each ROM word.
They avoid the use of subroutines; or use code such as the following, instead of the JMS instruction, to put the return address in read-write memory:
JUMPL, DCA TEMP / Deposit the accumulator in some temporary location
TAD JUMPL+3 / Load the return address into the accumulator: hard coded
JMP SUBRO / Go to the subroutine, and have it handle jumping back (to JUMPL+3)
The use of the JMS instruction makes debugging difficult. If a programmer makes the mistake of having a subroutine call itself, directly or by an intermediate subroutine, then the return address for the outer call is destroyed by the return address of the subsequent call, leading to an infinite loop. If one module is coded with an incorrect or obsolete address for a subroutine, it would not just fail to execute the entire code sequence of the subroutine, it might modify a word of the subroutine's code, depositing a return address that the processor might interpret as an instruction during a subsequent correct call to the subroutine. Both types of error might become evident during the execution of code that was written correctly.
Software stack
Though the PDP-8 does not have a hardware stack, stacks can be implemented in software.
Here are example PUSH and POP subroutines, simplified to omit issues such as testing for stack overflow and underflow:
*100 /make routines accessible for next example
PUSH, 0 /Return address deposited here by JMS
DCA DATA /Save the word to be pushed
CLA CMA / -1 (7777) into AC
TAD SP /Add the stack pointer
DCA SP /Store the decremented stack pointer
TAD DATA /Recover the saved word
DCA I SP /Deposit it at the new top of stack
JMP I PUSH /Return
POP, 0 /Return address deposited here by JMS
CLA CLL /Clear AC and Link
TAD I SP /Load the word at the top of stack
ISZ SP /Increment the stack pointer (the skip is not taken in practice)
JMP I POP /Return with the popped word in AC
DATA, 0 /Temporary storage for the pushed word
SP, 0 /The stack pointer
And here is "Hello World" with this "stack" implemented, and "OUT" subroutine:
*200
MAIN, CLA CLL /Set the message pointer
TAD (MESSG /To the beginning of the message (literal)
DCA SP
LOOP, JMS POP
SNA /Stop execution if zero
HLT
JMS OUT /Otherwise, output a character
JMP LOOP
MESSG, "H
"e
"l
"l
"o
",
"
"w
"o
"r
"l
"d
"!
015
012
0
OUT, 0 / Will be replaced by caller's updated PC
TSF / Skip if printer ready
JMP .-1 / Wait for flag
TLS / Send the character in the AC
CLA CLL / Clear AC and Link for next pass
JMP I OUT / Return to caller
Linked list
Another possible subroutine for the PDP-8 is a linked list.
GETN, 0 /Gets the number pointed to and moves the pointer
CLA CLL /Clear accumulator
TAD I PTR /Gets the number pointed to
DCA TEMP /Save current value
ISZ PTR /Increment pointer
TAD I PTR /Get next address
DCA PTR /Put in pointer
JMP I GETN /return
PTR, 0
TEMP, 0
Interrupts
There is a single interrupt line on the PDP-8 I/O bus. The processor handles any interrupt by disabling further interrupts and executing a JMS to location 0000. As it is difficult to write reentrant subroutines, it is difficult to nest interrupts and this is usually not done; each interrupt runs to completion and re-enables interrupts just before executing the JMP I 0 instruction that returns from the interrupt.
Because there is only a single interrupt line on the I/O bus, the occurrence of an interrupt does not inform the processor of the source of the interrupt. Instead, the interrupt service routine has to serially poll each active I/O device to see if it is the source. The code that does this is called a skip chain because it consists of a series of PDP-8 "test and skip if flag set" I/O instructions. (It was not unheard-of for a skip chain to reach its end without finding any device in need of service.) The relative interrupt priority of the I/O devices is determined by their position in the skip chain: If several devices interrupt, the device tested earlier in the skip chain is serviced first.
Books
An engineering textbook popular in the 1980s, The Art of Digital Design by David Winkel and Franklin Prosser, contains an example problem spanning several chapters in which the authors demonstrate the process of designing a computer that is compatible with the PDP-8/I. The function of every component is explained. Although it is not a production design, as it uses more modern SSI and MSI components and solid state rather than core memory, the exercise provides a detailed description of the computer's operation.
Unlicensed clones
The USSR produced the minicomputers Saratov-1 and Saratov-2, which cloned the PDP-8 and PDP-8/E, respectively.
References
C. Gordon Bell and Allen Newell, 1971, Computer Structures: Readings and Examples, McGraw-Hill Book Company, New York. Chapter 5 The DEC PDP-8, pages 120–136. With enough detail that an electrical engineer could build one (if able to find the parts).
External links
pdp-8 Documentation: The Small Computer Handbook (1966 Edition), Sections 1 and 2 and others are available from Simon Fraser University
http://homepage.cs.uiowa.edu/~jones/pdp8/
pdp8online.com has a running PDP8 that anyone can control through a Java applet, plus a webcam to show the results.
dpa, a portable PDP-8 cross-assembler
Spare Time Gizmos' SBC6120 PDP-8 compatible computer with optional front panel
Still working Classic 8, PDP 8e and 8i in a German computer museum
Bernhard Baehr's slick PDP-8/E Simulator for Macintosh
Willem van der Mark's PDP-8/E Simulator in Java
http://simh.trailing-edge.com a very portable simulator for PDP-8, that works on virtually any modern OS.
PiDP-8 open-source replica of the PDP-8 using a Raspberry Pi running SIMH attached to a PDP-8 front panel replica.
The Digital Equipment Corporation PDP-8, 1965Computer History Collection from the Smithsonian
Historic application of PDP8 in Germany for all Deutsche Bank centers and other financial institutes: Olympia Multiplex 80 (Olympia Business Systems)
A guide to the preservation and restoration of PDP-8 computers
Digital Equipment Corporation's PDP-8 Steve Gibson's explanation on how the PDP-8 works and how to program it.
YouTube has a video series showing the PDP-8.
DEC minicomputers
Transistorized computers
Instruction set architectures
12-bit computers
Computer-related introductions in 1965
|
53966286
|
https://en.wikipedia.org/wiki/Paul%20Oliver%20v.%20Samuel%20K.%20Boateng
|
Paul Oliver v. Samuel K. Boateng
|
Paul Oliver v. Samuel K. Boateng was a ground-breaking copyright case decided by the High Court of Justice of Ghana. It reaffirmed the law of copyright relating to the requirements of copyright protection and to authorship in Ghana, elaborated the point that copyright law in Ghana is a creature of statute, and set out some major general principles of Ghanaian copyright law.
The case involved Mr. Paul Oliver, a programmer, who claimed ownership of copyright in two versions of a rural banking software called Rural Banker and E-finance. Paul Oliver sued Samuel Boateng and a second defendant, Victor Gbehodor, for licensing his software to other rural banks without his permission.
The court in this case, relying heavily on statute (the Copyright Act of 2005), emphasized how statute-dependent the law of copyright in Ghana is. The case also referred to the various subject matter that is not copyrightable in Ghana, with specific attention to ideas, and made it clear that a person who expresses an idea in concrete form will be the author of a work. The case also made decisions regarding who may be referred to as a joint author, threw some light on infringement of copyright and damages, and highlighted the existence of idea/expression merger scenarios.
Facts
The plaintiff, Paul Oliver, worked for Ananse Systems, where he created the first two versions of a banking software alongside an ex-banker, Samuel K. Boateng, who was also engaged with Ananse Systems. The plaintiff went to the UK and released the third version of the banking software, called The Rural Banker, a comprehensive banking system fully integrated into an accounting software he had also developed, which was later succeeded by a 4th version called E-finance. The plaintiff again met the 1st defendant, who had by then left Ananse Systems, to try to market the new version of the banking software that he had created. The 1st defendant registered a business under the name BSL Systems for that exact purpose. The parties subsequently fell out with each other and a dispute arose.
The major point of departure, and the fulcrum of the case, was the authorship and ownership of the copyright in the last two versions (the 3rd and 4th versions) of the banking software. The plaintiff contended that under the agreement that existed between them, the 1st defendant would market the software and, in turn, license it to the banks; this was the same arrangement he had with Ananse Systems. The plaintiff also claimed that he issued invoices to the defendants, who sent him money as license fees in return, and that the defendants transferred the money because they acknowledged that he was the sole author.
The defendants countered by claiming that the plaintiff had never been the sole author of the rural banking software in its various versions. The 1st defendant pleaded that even when he was engaged at Ananse Systems, he was there "with a view of developing the Rural Banking Software" because he was a professional banker with vast working experience. He therefore argued that it was his ideas which constituted the substructure for the banking software created by the plaintiff, and that the plaintiff contributed only by designing the software at the direction of the 1st defendant and providing the source and object codes for this software.
The defendants therefore claimed that both the Rural Banker and E-finance were jointly authored by the plaintiff and the defendants after the formation of BSL Systems, and that they only sent the plaintiff money dubbed a "licensing fee" because of Bank of Ghana regulations. They subsequently transferred the money because the plaintiff retained the activation codes of the software and used them as a bargaining chip to compel the first defendant to pay him balances after the deduction of overhead costs. The defendants continued to distribute the software without the plaintiff's permission after their relationship was terminated, and the plaintiff's first head of claim was a demand for outstanding licence fees for distribution of the software without consent and for infringement of copyright.
Judgement
The court ultimately held that the plaintiff was the sole author of the Rural Banker and E-finance software and that the defendants' use and licensing of the software without his permission following the termination of their partnership amounted to an infringement of the plaintiff's copyright. The court cited the Copyright Act and emphasized that the author of a work has exclusive rights in respect of the work and that the copyright in that work should not be contested. The court awarded the plaintiff the unpaid licence fees, granted damages for infringement, and ordered the defendants to disgorge all the income they had enjoyed from licensing the plaintiff's software from the date of infringement.
Significance
Who is an Author?
In its reasoning, the court set out who qualifies as an author. Justice Gertrude Torkornoo stated that it is the creator of copyrighted material who qualifies to be identified as the author, with protected rights.
Joint Authorship
The case noted that copyright is only granted to the authors of a work and defined joint authorship as follows: "a work created by two or more Authors in collaboration, in which individual contributions are indistinguishable from each other". The case therefore sets out the requirements of joint authorship in Ghana, which include independent contribution, collaboration by the authors claiming joint authorship, and contributions that are indistinguishable from each other. The court went on to say that, in order to satisfy these requirements set out in section 77 of Act 690, a person claiming joint authorship should be able to answer three questions positively. First, did each claimant contribute directly to the creation of the work? Second, was there a mutual intention of the two parties to jointly author the work? And finally, is their individual work so woven into a whole that the work would lack its current identity if one person's contribution were taken out? If all the questions can be answered positively, the two parties can successfully claim joint authorship.
Copyright and ideas
The case stated that one of the fundamental principles of copyright law in Ghana is that ideas, concepts, methods, procedures or things of a similar nature cannot be copyrighted. This is reaffirmed in section 2 of the Copyright Act, which states that "Copyright shall not extend to ideas, concepts, procedures, methods or other things of a similar nature." Thus, the 1st defendant's assertion that he jointly authored the software because he provided the ideas used in creating it did not make him a joint author. Justice Gertrude Torkornoo stated that "the one who provides the idea does not walk in the same shoes as the one who expresses the idea".
These points emphasise that it is only the person who expresses an idea in a concrete form who is entitled to copyright, hence the judge's statement: "...no matter how brilliant the outline of ideas generated by anyone, it is not till those ideas are expressed in a particular concrete form that copyright law may be invoked, and it is only the person who expressed those ideas in the particular concrete form that is identified as the author of the expression".
Originality
Originality was touched on briefly in this case, in support of the provisions of section 2 of the Copyright Act, 2005. In order for a work to enjoy copyright, the skill, labour and judgement required to create the copyrighted expression must be original; the word "original" does not mean new or novel, but rather that the creative work must originate from the author.
Idea/expression Merger
The court referred to the case of Baker v. Selden, in which the United States Supreme Court held that where an idea merges with its expression, such that the idea can be expressed in only one form, the law will still not grant copyright to the originator of the work.
Infringement
Infringement, as addressed in section 41 of the Copyright Act, is also elaborated in this case. Section 41 makes it clear that nobody is permitted to perform acts contrary to the rights of an author. The defendants, by licensing the plaintiff's software to rural banks without his permission after the plaintiff terminated his partnership with them, performed acts contrary to the rights of the plaintiff, which therefore constituted an infringement. The case thus highlights a major way in which an author's copyright may be infringed.
Damages
Damages are normally awarded as compensation for an infringement of copyright. This was showcased in this case when Justice Gertrude Torkornoo said that the court must consider damages once infringement has been found. The damages, however, must be fair and reasonable, as she stated: "the damages which the other party ought to receive in respect of such breach of contract should be such as may fairly and reasonably be considered as either arising naturally."
Essence of Accounts
This case also supports the practice whereby courts order a defendant to submit accounts to determine damages. Justice Gertrude Torkornoo ordered Boateng to submit a list "of all persons to whom they have licensed the plaintiff's software Rural Banker and e-finance since April 2011 when plaintiff severed his relationship with them. They are further to file an account of monies received from every entity they have licensed the software to since April 2011". This is done in case the sum gained by the defendant is higher than the damages awarded to the plaintiff.
Injunction
An injunction was placed on the defendants, their representatives, agents and assigns. They were perpetually restrained from dealing with, representing themselves as authors of, or holding themselves out as persons with authority to license, market or distribute the plaintiff's software. Injunctions serve as one of the civil remedies available to a copyright owner to protect his interests from further exploitation. This suit therefore illustrated the use of injunctions as a way of further protecting the economic rights of an author.
References
2012 in law
Ghanaian copyright case law
|
6198367
|
https://en.wikipedia.org/wiki/Cholo%20%28video%20game%29
|
Cholo (video game)
|
Cholo is a video game released in 1986 for the BBC Micro. It was ported to the ZX Spectrum, Amstrad CPC, and Commodore 64. Cholo uses wireframe 3D visuals and has nonlinear gameplay.
Gameplay
The story is set out in a novella which was included in the game's packaging. Following a nuclear war, humanity is trapped underground by a robot defence system that rules the irradiated surface. The player assumes control of a robot drone, transmitting to a terminal below ground, and is given the task of freeing the trapped humans.
The robot - "Rizzo the rat", a diagnostic model - is equipped with a single laser, computer/robot link capabilities and very limited armour. The player's first task is to explore the city and take over stronger robots in order to complete the mission. Gameplay consists of movement around a virtual 3D world, taking over other robots by shooting them until 'paralysed', running into them and entering a password to gain access.
Each robot has different properties. "Aviata" is an aircraft that can fly and transport other robots; "Igor" is a hacker who can access computer systems. The player can only control one robot at a time. All robots have four slots for 'rampacks', which are essentially files - either text files or programs that add extra functionality to the player's robot. The gameplay often involves swapping between robots in order to complete a certain task.
A deliberately incomplete map of the pre-war city shipped with the game as an A3-size poster. The map also contains a partial robot identification chart.
Robot types
There are a number of different robot types in Cholo, nearly all of which can be controlled at some point in the game.
Vidbot - Fixed position camera robot
Leadcoat - Heavy duty radiation proof robot
Ratdroid - Diagnostic robot
Hacker - Hacker robot (unarmed)
Flying Eye - Mobile camera robot
Autodoc - Maintenance robot
Guard - Police robot
Grundon - Police tank robot
Flyboy - Aircraft robot
Ship - Ship robot
Legacy
Ovine by Design published a remake of the game using Tron-like graphics as freeware for Windows, with the approval of the original creators.
References
External links
Cholo box and manual at C64Sets.com
1986 video games
ZX Spectrum games
BBC Micro and Acorn Electron games
Amstrad CPC games
Commodore 64 games
First-person shooters
Post-apocalyptic video games
Video games developed in the United Kingdom
Adventure games
|
34994823
|
https://en.wikipedia.org/wiki/OpenLisp
|
OpenLisp
|
OpenLisp is a programming language in the Lisp family developed by Christian Jullien from Eligis. It conforms to the international standard for ISLISP published jointly by the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), ISO/IEC 13816:1997(E), revised to ISO/IEC 13816:2007(E).
Written in the programming languages C and Lisp, it runs on most common operating systems. OpenLisp is designated an ISLISP implementation, but also contains many Common Lisp-compatible extensions (hashtable, readtable, package, defstruct, sequences, rational numbers) and other libraries (network socket, regular expression, XML, Portable Operating System Interface (POSIX), SQL, Lightweight Directory Access Protocol (LDAP)).
OpenLisp includes an interpreter associated with a read–eval–print loop (REPL), a Lisp Assembly Program (LAP) and a backend compiler for the language C.
Goals
The main goal of this Lisp version is to implement a fully compliant ISLISP system (when launched with -islisp flag, it is strictly restricted to ISO/IEC 13816:2007(E) specification). The secondary goal is to provide a complete embeddable Lisp system linkable to C/C++ or Java (via Java Native Interface (JNI)). A callback mechanism is used to communicate with the external program. Other goals are to be usable as scripting language or glue language and to produce standalone program executables.
License
Despite its name, OpenLisp is proprietary software. Its interpreter is available free of charge for any noncommercial use.
User interface
OpenLisp mainly runs in console mode: cmd.exe on Microsoft Windows, and terminal emulator on Unix-based systems.
;; OpenLisp v11.x.y (Build: XXXX) by C. Jullien [Jan 01 20xx - 10:49:13]
;; Copyright (c) Eligis - 1988-20xx.
;; System 'sysname' (64-bit, 8 CPU) on 'hostname', ASCII.
;; God thank you, OpenLisp is back again!
? (fib 20)
;; elapsed time = 0.003s, (0 gc).
= 6765
? _
Alternate solutions include running OpenLisp from Emacs by setting up Emacs inferior-lisp-mode, or using an integrated development environment (IDE) which supports OpenLisp syntax. LispIDE by DaanSystems does so natively.
Technology
Memory manager
Internally, OpenLisp uses virtual memory to allocate and extend objects automatically. Small objects of the same type are allocated using a Bibop (BIg Bag Of Pages) memory organization. Large objects use a proxy which points to the real object in the Lisp heap. The conservative garbage collector uses a mark-and-sweep algorithm with a coalescing heap (the sweep phase can be configured to use threads).
Data types
OpenLisp uses a tagged architecture (a 4-bit tag on 32-bit systems, a 5-bit tag on 64-bit systems) for fast type checking (small integer, float, symbol, cons, string, vector). Small integers (28 bits on 32-bit, 59 bits on 64-bit) are unboxed; large (32/64-bit) integers are boxed. As required by ISLISP, arbitrary-precision arithmetic (bignums) is also implemented. Characters (hence strings) are either 8-bit (ANSI, EBCDIC) or 16/32-bit if Unicode support is enabled.
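For illustration, the following short Python sketch computes the value range that fits in an unboxed small integer before a boxed bignum is needed, assuming the stated widths include the sign bit and a two's-complement representation (an assumption for this example, not a statement of OpenLisp internals):
# Illustrative only: range of an unboxed small integer with the given number
# of value bits (sign bit included, two's-complement).
def fixnum_range(value_bits):
    return -(2 ** (value_bits - 1)), 2 ** (value_bits - 1) - 1
print(fixnum_range(28))  # 32-bit build: (-134217728, 134217727)
print(fixnum_range(59))  # 64-bit build: (-288230376151711744, 288230376151711743)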
Evaluator and compiler
The Lisp kernel, native interpreter and basic libraries are hand-coded in C; the LAP intermediate code produced by the compiler is then translated to C by the C backend code generator.
History
In 1988, the initial motivation for OpenLisp was to implement a Lisp subset to extend EmACT, an Emacs clone. ISLISP quickly became an obvious choice. Further development ensued.
Ports
OpenLisp claims to be extremely portable; it runs on many operating systems including: Windows, most Unix and POSIX based (Linux, macOS, FreeBSD, OpenBSD, NetBSD, Solaris, HP-UX, AIX, Cygwin, QNX), DOS, OS/2, Pocket PC, OpenVMS, z/OS. The official website download section contains over 50 different versions.
Standard libraries
Connectors
OpenLisp can interact with modules written in C using a foreign function interface (FFI). ISLISP streams are extended to support network sockets (the ./net directory includes samples for Hypertext Transfer Protocol (HTTP), JavaScript Object Notation (JSON), Post Office Protocol 3 (POP3), Simple Mail Transfer Protocol (SMTP), Telnet, and RSS), and a simplified Extensible Markup Language (XML) reader can convert XML to Lisp. A basic SQL module can be used with MySQL, ODBC, SQLite, and PostgreSQL. A comma-separated values (CSV) module can read and write CSV files.
Tools
Developer tools include data logging, pretty-printer, profiler, design by contract programming, and unit tests.
Algorithms
Some well known algorithms are available in ./contrib directory (Dantzig's simplex algorithm, Dijkstra's algorithm, Ford–Fulkerson algorithm). Modules are shipped using BSD licenses.
Origin of name
The prefix Open refers to open systems not to the open-source model.
The name was chosen in 1993 to replace the MLisp internal code name which was already used by Gosling Emacs (as successor of Mocklisp).
The OpenLisp programming language is distinct from OpenLISP, a project begun in 1997 to implement the Locator/Identifier Separation Protocol.
Compiler
This section describes how a compiler transforms Lisp code to C.
Source code
The Fibonacci number function (this classic definition, used in most benchmarks, is not the most efficient way to compute fib):
(defun fib (n)
(cond ((eq n 1) 1)
((eq n 2) 1)
(t (+ (fib (- n 1)) (fib (- n 2))))))
LAP intermediate code
The Lisp compiler translates Lisp source code to the following intermediate code, which is then processed by a peephole optimization pass that uses this intermediate format to analyze and optimize instructions.
After optimization, final LAP code is:
((fentry fib 1 0 0)
(param 0)
(jeq _l004 '1)
(jneq _l003 '2)
(move a1 '1)
(return)
_l003
(gsub1 a1)
(recurse 1)
(move a2 a1)
(param 0)
(gsub a1 '2)
(recurse 1)
(gadd a2 a1)
_l004
(return)
(end))
C code translation
Finally, the C code generator uses the LAP code to translate the instructions into C.
static POINTER
OLDEFCOMPILED1(olfib_00, p1) {
POINTER a1;
POINTER VOLATILE a2;
ollapenter(SN_OLFIB_00);
a1 = p1;
if (eq(a1, olmakefix(1))) goto _l004;
if (!eq(a1, olmakefix(2))) goto _l003;
ollapleave(SN_OLFIB_00);
return olmakefix(1);
_l003:
a1 = ollapgsub(a1, olmakefix(1));
a2 = olfib_00(a1);
a1 = ollapgsub(p1, olmakefix(2));
a1 = olfib_00(a1);
a1 = ollapgadd(a2, a1);
_l004:
ollapleave(SN_OLFIB_00);
return a1;
}
Style guide
Line length
OpenLisp accepts lines of unlimited length. The recommended style is that each line of code should contain at most 80 characters.
Adoption
OpenLisp has been chosen by the SDF Public Access Unix System, a nonprofit public-access Unix system on the Internet, as one of the programming languages it makes available online.
Bricsys uses OpenLisp to implement AutoLISP in its Bricscad computer-aided design (CAD) system.
MEVA is entirely written with OpenLisp.
Università degli Studi di Palermo uses OpenLisp to teach Lisp.
References
External links
ISLISP on Software Preservation Group
Lisp programming language family
Lisp (programming language)
Programming languages created in 1988
|
42906494
|
https://en.wikipedia.org/wiki/Time-Sensitive%20Networking
|
Time-Sensitive Networking
|
Time-Sensitive Networking (TSN) is a set of standards under development by the Time-Sensitive Networking task group of the IEEE 802.1 working group. The TSN task group was formed in November 2012 by renaming the existing Audio Video Bridging Task Group and continuing its work. The name changed as a result of the extension of the working area of the standardization group. The standards define mechanisms for the time-sensitive transmission of data over deterministic Ethernet networks.
The majority of the projects define extensions to IEEE 802.1Q Bridges and Bridged Networks, which describes virtual LANs and network switches. These extensions in particular address transmission with very low latency and high availability. Applications include converged networks with real-time audio/video streaming and real-time control streams, which are used in automotive and industrial control facilities.
Background
Standard information technology network equipment has no concept of “time” and cannot provide synchronization and precision timing. Delivering data reliably is more important than delivering within a specific time, so there are no constraints on delay or synchronization precision. Even if the average hop delay is very low, individual delays can be unacceptably high. Network congestion is handled by throttling and retransmitting dropped packets at the transport layer, but there are no means to prevent congestion at the link layer. Data can be lost when the buffers are too small or the bandwidth is insufficient, but excessive buffering adds to the delay, which is unacceptable when low deterministic delays are required.
The different AVB/TSN standards documents specified by IEEE 802.1 can be grouped into three basic key component categories that are required for a complete real-time communication solution based on switched Ethernet networks with deterministic quality of service (QoS) for point-to-point connections. Each and every standard specification can be used on its own and is mostly self-sufficient. However, only when used together in a concerted way, TSN as a communication system can achieve its full potential. The three basic components are:
Time synchronization: All devices that are participating in real-time communication need to have a common understanding of time
Scheduling and traffic shaping: All devices that are participating in real-time communication adhere to the same rules in processing and forwarding communication packets
Selection of communication paths, path reservations and fault-tolerance: All devices that are participating in real-time communication adhere to the same rules in selecting communication paths and in reserving bandwidth and time slots, possibly utilizing more than one simultaneous path to achieve fault-tolerance
Applications which need a deterministic network that behaves in a predictable fashion include audio and video, initially defined in Audio Video Bridging (AVB); control networks that accept inputs from sensors, perform control loop processing, and initiate actions; safety-critical networks that implement packet and link redundancy; and mixed media networks that handle data with varying levels of timing sensitivity and priority, such as vehicle networks that support climate control, infotainment, body electronics, and driver assistance. The IEEE AVB/TSN suite serves as the foundation for deterministic networking to satisfy the common requirements of these applications.
AVB/TSN can handle rate-constrained traffic, where each stream has a bandwidth limit defined by minimum inter-frame intervals and maximal frame size, and time-triggered traffic with an exact, accurate time to be sent. Low-priority traffic is passed on a best-effort basis, with no timing and delivery guarantees.
Time Synchronization
In contrast to standard Ethernet according to IEEE 802.3 and Ethernet bridging according to IEEE 802.1Q, time is very important in TSN networks. For real-time communication with hard, non-negotiable time boundaries for end-to-end transmission latencies, all devices in this network need to have a common time reference and therefore, need to synchronize their clocks among each other. This is not only true for the end devices of a communication stream, such as an industrial controller and a manufacturing robot, but also true for network components, such as Ethernet switches. Only through synchronized clocks, it is possible for all network devices to operate in unison and execute the required operation at exactly the required point in time.
Although time synchronization in TSN networks can be achieved with a GPS clock, this is costly and there is no guarantee that the endpoint device has access to the radio or satellite signal at all times. Due to these constraints, time in TSN networks is usually distributed from one central time source directly through the network itself using the IEEE 1588 Precision Time Protocol, which utilizes Ethernet frames to distribute time synchronization information. IEEE 802.1AS is a tightly constrained subset of IEEE 1588 with sub-microsecond precision and extensions to support synchronization over Wi-Fi radio (IEEE 802.11). The idea behind this profile is to narrow the huge list of IEEE 1588 options down to a manageable few critical options that are applicable to home networks or networks in automotive or industrial automation environments.
IEEE 802.1AS Timing and Synchronization for Time-Sensitive Applications
IEEE 802.1AS-2011 defines the generalized Precision Time Protocol (gPTP) profile, which carries IEEE 1588 messages directly over IEEE 802 transports (such as Ethernet frames) to establish a hierarchy of clocks and synchronize time in a gPTP domain formed by devices exchanging time events.
To account for data path delays, the gPTP protocol measures the frame residence time within each bridge (the time required for processing, queuing and transmission from ingress to egress ports) and the link latency of each hop (the propagation delay between two adjacent bridges). The calculated delays are then referenced to the GrandMaster (GM) clock in a bridge elected by the Best Master Clock Algorithm, a clock spanning tree protocol to which all Clock Master (CM) and endpoint devices have to synchronize. Any device which does not synchronize to timing messages is outside of the timing domain boundaries (Figure 2).
Synchronization accuracy depends on precise measurements of link delay and frame residence time. 802.1AS uses 'logical syntonization', where a ratio between local clock and GM clock oscillator frequencies is used to calculate synchronized time, and a ratio between local and CM clock frequencies to calculate propagation delay.
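As an illustration of the link-delay measurement described above, the following Python sketch is a simplified model (not the normative 802.1AS state machines; the timestamps and variable names are example values chosen here): an initiator timestamps its request at t1 and the response arrival at t4, while the responder timestamps the request arrival at t2 and the response departure at t3.
# Simplified peer-delay calculation: the mean link delay is half of the
# round-trip time minus the responder's turnaround time, with the round trip
# optionally corrected by the measured neighbor rate ratio.
def mean_link_delay(t1, t2, t3, t4, neighbor_rate_ratio=1.0):
    turnaround = t3 - t2
    round_trip = t4 - t1
    return (neighbor_rate_ratio * round_trip - turnaround) / 2.0
# Example with timestamps in nanoseconds.
print(mean_link_delay(t1=1_000, t2=1_550, t3=2_550, t4=3_120))  # -> 560.0 ns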
IEEE 802.1AS-2020 introduces improved time measurement accuracy and support for multiple time domains for redundancy.
Scheduling and traffic shaping
Scheduling and traffic shaping allows for the coexistence of different traffic classes with different priorities on the same network - each with different requirements to available bandwidth and end-to-end latency.
Traffic shaping refers to the process of distributing frames/packets evenly in time to smooth out the traffic. Without traffic shaping at sources and bridges, the packets will "bunch", i.e. agglomerate into bursts of traffic, overwhelming the buffers in subsequent bridges/switches along the path.
Standard bridging according to IEEE 802.1Q uses a strict priority scheme with eight distinct priorities. On the protocol level, these priorities are visible in the Priority Code Point (PCP) field in the 802.1Q VLAN tag of a standard Ethernet frame. These priorities already distinguish between more important and less important network traffic, but even with the highest of the eight priorities, no absolute guarantee for an end-to-end delivery time can be given. The reason for this is buffering effects inside the Ethernet switches. If a switch has started the transmission of an Ethernet frame on one of its ports, even the highest priority frame has to wait inside the switch buffer for this transmission to finish. With standard Ethernet switching, this non-determinism cannot be avoided. This is not an issue in environments where applications do not depend on the timely delivery of single Ethernet frames - such as office IT infrastructures. In these environments, file transfers, emails or other business applications have limited time sensitivity themselves and are usually protected by other mechanisms further up the protocol stack, such as the Transmission Control Protocol. In industrial automation (a Programmable Logic Controller (PLC) with an industrial robot) and automotive environments, where closed-loop control or safety applications use the Ethernet network, reliable and timely delivery is of utmost importance. AVB/TSN enhances standard Ethernet communication by adding mechanisms to provide different time slices for different traffic classes and ensure timely delivery with soft and hard real-time requirements of control system applications. The mechanism of utilizing the eight distinct VLAN priorities is retained, to ensure complete backward compatibility with non-TSN Ethernet. To achieve transmission times with guaranteed end-to-end latency, one or several of the eight Ethernet priorities can be individually assigned to already existing methods (such as the IEEE 802.1Q strict priority scheduler) or new processing methods, such as the IEEE 802.1Qav credit-based traffic shaper, the IEEE 802.1Qbv time-aware shaper, or the IEEE 802.1Qcr asynchronous shaper.
Time-sensitive traffic has several priority classes. For credit-based shaper 802.1Qav, Stream Reservation Class A is the highest priority, with a worst-case latency requirement of 2 ms, and maximum transmission period of 125 μs; Class B has the second-highest priority with worst-case latency of 50 ms, and a maximum transmission period of 250 μs. Traffic classes shall not exceed their preconfigured maximum bandwidth (75% for audio and video applications). The maximum number of hops is 7. The per-port peer delay provided by gPTP and the network bridge residence delay are added to calculate the accumulated delays and ensure the latency requirement is met. Control traffic has the third-highest priority and includes gPTP and SRP traffic. Time-aware scheduler 802.1Qbv introduces Class CDT for realtime control data from sensors and command streams to actuators, with worst-case latency of 100 μs over 5 hops, and a maximum transmission period of 0.5 ms. Class CDT takes the highest priority over classes A, B, and control traffic.
AVB credit-based scheduler
IEEE 802.1Qav Forwarding and Queuing Enhancements for Time-Sensitive Streams
IEEE 802.1Qav Forwarding and Queuing Enhancements for Time-Sensitive Streams defines traffic shaping using priority classes, which is based on a simple form of "leaky bucket" credit-based fair queuing. 802.1Qav is designed to reduce buffering in receiving bridges and endpoints.
The credit-based shaper defines credits in bits for two separate queues, dedicated to Class A and Class B traffic. Frame transmission is only allowed when the credit is non-negative; during transmission the credit decreases at a rate called sendSlope, where sendSlope = idleSlope − portTransmitRate. The credit increases at the rate idleSlope while frames are waiting for other queues to be transmitted. Thus the idleSlope is the bandwidth reserved for the queue by the bridge, and the portTransmitRate is the transmission rate of the port MAC service.
If the credit is negative and no frames are transmitted, credit increases at idleSlope rate until zero is reached. If an AVB frame cannot be transmitted because a non-AVB frame is in transmission, credit accumulates at idleSlope rate but positive credit is allowed.
Additional limits hiCredit and loCredit are derived from the maximum frame size and maximum interference size, the idleSlope/sendSlope, and the maximum port transmission rate.
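The credit mechanism can be sketched with a small simulation. The following Python fragment is only a toy model of the behaviour described above (it ignores hiCredit/loCredit clamping, queuing details and physical-layer overheads; the rates and frame size are arbitrary example values):
# Toy credit-based shaper: credit shrinks at sendSlope (negative) while a
# frame is on the wire and recovers at idleSlope while the queue waits.
port_rate  = 100_000_000                 # port transmission rate, bit/s
idle_slope = 25_000_000                  # bandwidth reserved for the class, bit/s
send_slope = idle_slope - port_rate      # -75 Mbit/s while transmitting
frame_bits = 1522 * 8                    # one large tagged frame
tx_time    = frame_bits / port_rate      # time to send one frame, seconds
credit     = 0.0                         # transmission allowed (non-negative)
credit += send_slope * tx_time           # send one frame: credit goes negative
wait    = -credit / idle_slope           # time until credit returns to zero
print(credit, wait)                      # about -9132 bits, about 365 µs wait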
Reserved AV stream traffic frames are forwarded with higher priority than non-reserved best-effort traffic, subject to credit-based traffic shaping rules which may require them to wait for a certain amount of credit. This protects best-effort traffic by limiting the maximum AV stream burst. The frames are scheduled very evenly, though only on an aggregate basis, to smooth out the delivery times and reduce bursting and bunching, which can lead to buffer overflows and packet drops that trigger retransmissions. The increased buffering delay makes retransmitted packets obsolete by the time they arrive, resulting in frame drops which reduce the quality of AV applications.
Though the credit-based shaper provides fair scheduling for low-priority packets and smooths out traffic to eliminate congestion, the average delay increases up to 250 μs per hop, which is too high for control applications, whereas the time-aware shaper (IEEE 802.1Qbv) has a fixed cycle delay from 30 μs to several milliseconds and a typical delay of 125 μs. Deriving guaranteed upper bounds on delays in TSN is non-trivial and is currently being researched, e.g., by using the mathematical framework of network calculus.
IEEE 802.1Qat Stream Reservation Protocol
IEEE 802.1Qat Stream Reservation Protocol (SRP) is a distributed peer-to-peer protocol that specifies admission controls based on resource requirements of the flow and available network resources.
SRP reserves resources and advertises streams from the sender/source (talker) to the receivers/destinations (listeners); it works to satisfy QoS requirements for each stream and guarantee the availability of sufficient network resources along the entire flow transmission path.
The traffic streams are identified and registered with a 64-bit StreamID, made up of the 48-bit MAC address (EUI) and 16-bit UniqueID to identify different streams from the one source.
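For illustration, a 64-bit StreamID of this form can be assembled as in the following Python sketch, which assumes the 48-bit MAC address occupies the most significant bits and uses hypothetical example values:
# Compose a 64-bit StreamID from a 48-bit source MAC address and a 16-bit
# UniqueID that distinguishes different streams from the same talker.
def make_stream_id(mac: int, unique_id: int) -> int:
    assert mac < 2**48 and unique_id < 2**16
    return (mac << 16) | unique_id
stream_id = make_stream_id(0x0001C8F00A01, 0x0001)
print(hex(stream_id))  # 0x1c8f00a010001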
SRP employs variants of Multiple Registration Protocol (MRP) to register and de-register attribute values on switches/bridges/devices - the Multiple MAC Registration Protocol (MMRP), the Multiple VLAN Registration Protocol (MVRP), and the Multiple Stream Registration Protocol (MSRP).
The SRP protocol essentially works in the following sequence:
Advertise a stream from a talker
Register the paths along data flow
Calculate worst-case latency
Create an AVB domain
Reserve the bandwidth
Resources are allocated and configured in both the end nodes of the data stream and the transit nodes along the data flow path, with an end-to-end signaling mechanism to detect the success/failure. Worst-case latency is calculated by querying every bridge.
Reservation requests use the general MRP application with the MRP attribute propagation mechanism. All nodes along the flow path pass the MRP Attribute Declaration (MAD) specification, which describes the stream characteristics so that bridges can allocate the necessary resources.
If a bridge is able to reserve the required resources, it propagates the advertisement to the next bridge; otherwise, a 'talker failed' message is raised. When the advertise message reaches the listener, it replies with 'listener ready' message that propagates back to the talker.
Talker advertise and listener ready messages can be de-registered, which terminates the stream.
Successful reservation is only guaranteed when all intermediate nodes support SRP and respond to advertise and ready messages; in Figure 2 above, AVB domain 1 is unable to connect with AVB domain 2.
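The admission decision along the path can be sketched as follows. This Python fragment is a deliberately simplified model of the advertise/ready handshake described above (real MSRP also propagates accumulated latency, rolls back tentative reservations on failure, and carries many more attributes; the bandwidth figures are arbitrary examples):
# Toy SRP admission: the talker advertise propagates hop by hop; each bridge
# admits the stream only if it still has enough unreserved bandwidth.
def reserve_stream(bridges, required_mbps):
    for bridge in bridges:
        if bridge["free_mbps"] < required_mbps:
            return "talker failed at " + bridge["name"]
        bridge["free_mbps"] -= required_mbps     # tentative reservation
    return "listener ready"                      # propagated back to the talker
path = [{"name": "br1", "free_mbps": 60},
        {"name": "br2", "free_mbps": 40},
        {"name": "br3", "free_mbps": 10}]
print(reserve_stream(path, 25))   # -> "talker failed at br3"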
SRP is also used by TSN/AVB standards for frame priorities, frame scheduling, and traffic shaping.
Enhancements to AVB scheduling
IEEE 802.1Qcc Enhancements to SRP
SRP uses a decentralized registration and reservation procedure, so multiple requests can introduce delays for critical traffic. The IEEE 802.1Qcc-2018 "Stream Reservation Protocol (SRP) Enhancements and Performance Improvements" amendment reduces the size of reservation messages and redefines timers so they trigger updates only when the link state or a reservation changes. To improve TSN administration on large-scale networks, each User Network Interface (UNI) provides methods for requesting Layer 2 services, supplemented by a Centralized Network Configuration (CNC) entity that provides centralized reservation and scheduling, and remote management using NETCONF/RESTCONF protocols and IETF YANG/NETCONF data modeling.
The CNC implements a per-stream request-response model, where the SR class is not explicitly used: end stations send requests for a specific stream (via an edge port) without knowledge of the network configuration, and the CNC performs stream reservation centrally. MSRP only runs on the link to end stations as an information carrier between the CNC and end stations, not for stream reservation. The Centralized User Configuration (CUC) is an optional node that discovers end stations, their capabilities and user requirements, and configures delay-optimized TSN features (for closed-loop IACS applications). Seamless interoperation with the Resource Reservation Protocol (RSVP) transport is provided.
802.1Qcc allows centralized configuration management to coexist with decentralized, fully distributed configuration of the SRP protocol, and also supports hybrid configurations for legacy AVB devices.
802.1Qcc can be combined with IEEE 802.1Qca Path Control and Reservation (PCR) and TSN traffic shapers.
IEEE 802.1Qch Cyclic Queuing and Forwarding (CQF)
While the 802.1Qav FQTSS/CBS works very well with soft real-time traffic, worst-case delays are both hop count and network topology dependent. Pathological topologies introduce delays, so buffer size requirements have to consider network topology.
IEEE 802.1Qch Cyclic Queuing and Forwarding (CQF), also known as the Peristaltic Shaper (PS), introduces double buffering which allows bridges to synchronize transmission (frame enqueue/dequeue operations) in a cyclic manner, with bounded latency depending only on the number of hops and the cycle time, completely independent of the network topology.
CQF can be used with the IEEE 802.1Qbv time-aware scheduler, IEEE 802.1Qbu frame preemption, and IEEE 802.1Qci ingress traffic policing.
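A toy model of the double-buffer behaviour, under the simplifying assumption that a frame received by a bridge during one cycle is always forwarded in the next cycle, makes the topology independence visible (illustrative only; the cycle time and hop count are arbitrary example values):
# Toy cyclic queuing and forwarding: each hop forwards in the cycle after the
# one in which it received the frame, so delivery time depends only on the
# number of hops and the cycle time, not on topology or other traffic.
def cqf_delivery_cycle(ingress_cycle, hops):
    return ingress_cycle + hops
cycle_time_us = 250
hops = 4
upper_bound_us = (hops + 1) * cycle_time_us   # simple upper bound in this model:
                                              # the frame may arrive just after a cycle starts
print(cqf_delivery_cycle(0, hops), upper_bound_us)   # delivered in cycle 4, <= 1250 µs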
IEEE 802.1Qci Per-Stream Filtering and Policing (PSFP)
IEEE 802.1Qci Per-Stream Filtering and Policing (PSFP) improves network robustness by filtering individual traffic streams. It prevents traffic overload conditions that may affect bridges and the receiving endpoints due to malfunction or Denial of Service (DoS) attacks.
The stream filter uses rule matching to allow frames with specified stream IDs and priority levels and apply policy actions otherwise. All streams are coordinated at their gates, similarly to the 802.1Qch signaling.
The flow metering applies predefined bandwidth profiles for each stream.
TSN scheduling and traffic shaping
IEEE 802.1Qbv Enhancements to Traffic Scheduling: Time-Aware Shaper (TAS)
The IEEE 802.1Qbv time-aware scheduler is designed to separate the communication on the Ethernet network into fixed length, repeating time cycles. Within these cycles, different time slices can be configured that can be assigned to one or several of the eight Ethernet priorities. By doing this, it is possible to grant exclusive use - for a limited time - to the Ethernet transmission medium for those traffic classes that need transmission guarantees and can't be interrupted. The basic concept is a time-division multiple access (TDMA) scheme. By establishing virtual communication channels for specific time periods, time-critical communication can be separated from non-critical background traffic.
Time-aware scheduler introduces Stream Reservation Class CDT for time-critical control data, with worst-case latency of 100 μs over 5 hops, and maximum transmission period of 0.5 ms, in addition to classes A and B defined for IEEE 802.1Qav credit-based traffic shaper. By granting exclusive access to the transmission medium and devices to time-critical traffic classes, the buffering effects in the Ethernet switch transmission buffers can be avoided and time-critical traffic can be transmitted without non-deterministic interruptions. One example for an IEEE 802.1Qbv scheduler configuration is visible in figure 1:
In this example, each cycle consists of two time slices. Time slice 1 only allows the transmission of traffic tagged with VLAN priority 3, and time slice 2 in each cycle allows for the rest of the priorities to be sent. Since the IEEE 802.1Qbv scheduler requires all clocks on all network devices (Ethernet switches and end devices) to be synchronized and the identical schedule to be configured, all devices understand which priority can be sent to the network at any given point in time. Since time slice 2 has more than one priority assigned to it, within this time slice, the priorities are handled according to standard IEEE 802.1Q strict priority scheduling.
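The cycle and time-slice structure of this example can be expressed as a small gate control list. The Python sketch below is illustrative only: the priority assignment follows the example above, but the slice durations are hypothetical values chosen for the sketch.
# Minimal time-aware gate model: the repeating cycle is a list of
# (slice duration in microseconds, set of VLAN priorities allowed to transmit).
gate_control_list = [
    (250, {3}),                   # time slice 1: only priority 3 (critical traffic)
    (750, {0, 1, 2, 4, 5, 6, 7}), # time slice 2: all remaining priorities
]
cycle_us = sum(duration for duration, _ in gate_control_list)
def open_priorities(t_us):
    """Return the priorities whose gates are open at time t (in µs)."""
    offset = t_us % cycle_us
    for duration, priorities in gate_control_list:
        if offset < duration:
            return priorities
        offset -= duration
print(open_priorities(100))    # {3}: inside time slice 1
print(open_priorities(1600))   # time slice 2 of the second cycle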
This separation of Ethernet transmissions into cycles and time slices can be enhanced further by the inclusion of other scheduling or traffic shaping algorithms, such as the IEEE 802.1Qav credit-based traffic shaper. IEEE 802.1Qav supports soft real-time. In this particular example, IEEE 802.1Qav could be assigned to one or two of the priorities that are used in time slice two to distinguish further between audio/video traffic and background file transfers. The Time-Sensitive Networking Task Group specifies a number of different schedulers and traffic shapers that can be combined to achieve the nonreactive coexistence of hard real-time, soft real-time and background traffic on the same Ethernet infrastructure.
IEEE 802.1Qbv in more detail: Time slices and guard bands
When an Ethernet interface has started the transmission of a frame to the transmission medium, this transmission has to be completely finished before another transmission can take place. This includes the transmission of the CRC32 checksum at the end of the frame to ensure a reliable, fault-free transmission. This inherent property of Ethernet networks - again - poses a challenge to the TDMA approach of the IEEE 802.1Qbv scheduler. This is visible in figure 2:
Just before the end of time slice 2 in cycle n, a new frame transmission is started. Unfortunately, this frame is too large to fit into its time slice. Since the transmission of this frame cannot be interrupted, the frame infringes the following time slice 1 of the next cycle n+1. By partially or completely blocking a time-critical time slice, real-time frames can be delayed up to the point where they cannot meet the application requirements any longer. This is very similar to the actual buffering effects that happen in non-TSN Ethernet switches, so TSN has to specify a mechanism to prevent this from happening.
The IEEE 802.1Qbv time-aware scheduler has to ensure that the Ethernet interface is not busy with the transmission of a frame when the scheduler changes from one time slice into the next. The time-aware scheduler achieves this by placing a guard band in front of every time slice that carries time-critical traffic. During this guard band time, no new Ethernet frame transmission may be started; only already ongoing transmissions may be finished. The duration of this guard band has to be as long as it takes to safely transmit the maximum-size frame. For an Ethernet frame according to IEEE 802.3 with a single IEEE 802.1Q VLAN tag and including interframe spacing, the total length is: 1500 byte (frame payload) + 18 byte (Ethernet addresses, EtherType and CRC) + 4 byte (VLAN tag) + 12 byte (interframe spacing) + 8 byte (preamble and SFD) = 1542 byte.
The total time needed for sending this frame depends on the link speed of the Ethernet network. With Fast Ethernet and a 100 Mbit/s transmission rate, the transmission duration is: 1542 byte × 8 bit/byte ÷ 100 Mbit/s = 123.36 µs.
In this case, the guard band has to be at least 123.36 µs long. With the guard band, the total bandwidth or time that is usable within a time slice is reduced by the length of the guard band. This is visible in figure 3.
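The same calculation can be written as a short Python helper, which also shows how the required guard band shrinks on faster links (a sketch that simply assumes the 1542-byte worst case derived above):
# Guard band duration = time to transmit the largest possible frame.
def guard_band_us(frame_bytes=1542, link_bit_rate=100_000_000):
    return frame_bytes * 8 / link_bit_rate * 1e6
print(guard_band_us())                              # about 123.36 µs at 100 Mbit/s
print(guard_band_us(link_bit_rate=1_000_000_000))   # about 12.34 µs at 1 Gbit/s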
Note: to facilitate the presentation of the topic, the actual size of the guard band in figure 3 is not to scale, but is significantly smaller than indicated by the frame in figure 2.
In this example, the time slice 1 always contains high priority data (e.g. for motion control), while time slice 2 always contains best-effort data. Therefore, a guard band needs to be placed at every transition point into time slice 1 to protect the time slice of the critical data stream(s).
While the guard bands manage to protect the time slices with high priority, critical traffic, they also have some significant drawbacks:
The time that is consumed by a guard band is lost - it cannot be used to transmit any data, as the Ethernet port needs to be silent. Therefore, the lost time directly translates in lost bandwidth for background traffic on that particular Ethernet link.
A single time slice can never be configured smaller than the size of the guard band. Especially with lower speed Ethernet connections and growing guard band size, this has a negative impact on the lowest achievable time slice length and cycle time.
To partially mitigate the loss of bandwidth through the guard band, the standard IEEE 802.1Qbv includes a length-aware scheduling mechanism. This mechanism is used when store-and-forward switching is utilized: after the full reception of an Ethernet frame that needs to be transmitted on a port where the guard band is in effect, the scheduler checks the overall length of the frame. If the frame can fit completely inside the guard band, without any infringement of the following high priority slice, the scheduler can send this frame, despite an active guard band, and reduce the waste of bandwidth. This mechanism, however, cannot be used when cut-through switching is enabled, since the total length of the Ethernet frame needs to be known a priori. Therefore, when cut-through switching is used to minimize end-to-end latency, the waste of bandwidth will still occur. Also, this does not help with the minimum achievable cycle time. Therefore, length-aware scheduling is an improvement, but cannot mitigate all drawbacks that are introduced by the guard band.
IEEE 802.3br and 802.1Qbu Interspersing Express Traffic (IET) and Frame Preemption
To further mitigate the negative effects from the guard bands, the IEEE working groups 802.1 and 802.3 have specified the frame pre-emption technology. The two working groups collaborated in this endeavour since the technology required both changes in the Ethernet Media Access Control (MAC) scheme that is under the control of IEEE 802.3, as well as changes in the management mechanisms that are under the control of IEEE 802.1. Due to this fact, frame pre-emption is described in two different standards documents: IEEE 802.1Qbu for the bridge management component and IEEE 802.3br for the Ethernet MAC component.
Frame preemption defines two MAC services for an egress port, preemptable MAC (pMAC) and express MAC (eMAC). Express frames can interrupt transmission of preemptable frames. On resume, MAC merge sublayer re-assembles frame fragments in the next bridge.
Preemption causes computational overhead in the link interface, as the operational context must be switched to the express frame.
Figure 4 gives a basic example how frame pre-emption works. During the process of sending a best effort Ethernet frame, the MAC interrupts the frame transmission just before the start of the guard band. The partial frame is completed with a CRC and will be stored in the next switch to wait for the second part of the frame to arrive. After the high priority traffic in time slice 1 has passed and the cycle switches back to time slice 2, the interrupted frame transmission is resumed. Frame pre-emption always operates on a pure link-by-link basis and only fragments from one Ethernet switch to the next Ethernet switch, where the frame is reassembled. In contrast to fragmentation with the Internet Protocol (IP), no end-to-end fragmentation is supported.
Each partial frame is completed with a CRC32 for error detection. In contrast to the regular Ethernet CRC32, the last 16 bits are inverted to make a partial frame distinguishable from a regular Ethernet frame. In addition, the start of frame delimiter (SFD) is also changed.
The support for frame pre-emption has to be activated on each link between devices individually. To signal the capability for frame pre-emption on a link, an Ethernet switch announces this capability through the LLDP (Link Layer Discovery Protocol). When a device receives such an LLDP announcement on a network port and supports frame pre-emption itself, it may activate the capability. There is no direct negotiation and activation of the capability on adjacent devices. Any device that receives the LLDP pre-emption announcement assumes that on the other end of the link, a device is present that can understand the changes in the frame format (changed CRC32 and SFD).
Frame pre-emption allows for a significant reduction of the guard band. The length of the guard band is now dependent on the precision of the frame pre-emption mechanism: how small is the minimum size of the frame that the mechanism can still pre-empt. IEEE 802.3br specifies the best accuracy for this mechanism at 64 byte - due to the fact that this is the minimum size of a still valid Ethernet frame. In this case, the guard band can be reduced to a total of 127 byte: 64 byte (minimum frame) + 63 byte (remaining length that cannot be pre-empted). All larger frames can be pre-empted again and therefore, there is no need to protect against this size with a guard band.
This minimizes the best effort bandwidth that is lost and also allows for much shorter cycle times at slower Ethernet speeds, such as 100 Mbit/s and below. Since the pre-emption takes place in hardware in the MAC, as the frame passes through, cut-through switching can be supported as well, since the overall frame size is not needed a priori. The MAC interface just checks in regular 64 byte intervals whether the frame needs to be pre-empted or not.
The combination of time synchronization, the IEEE 802.1Qbv scheduler and frame pre-emption already constitutes an effective set of standards that can be utilized to guarantee the coexistence of different traffic categories on a network while also providing end-to-end latency guarantees. This will be enhanced further as new IEEE 802.1 specifications, such as 802.1Qch are finalized.
Shortcomings of IEEE 802.1Qbv/bu
Overall, the time-aware scheduler has high implementation complexity and its use of bandwidth is not efficient.
Task and event scheduling in endpoints has to be coupled with the gate scheduling of the traffic shaper in order to lower the latencies.
A critical shortcoming is some delay incurred when an end-point streams unsynchronized data, due to the waiting time for the next time-triggered window.
The time-aware scheduler requires tight synchronization of its time-triggered windows, so all bridges on the stream path must be synchronized. However synchronizing TSN bridge frame selection and transmission time is nontrivial even in moderately sized networks and requires a fully managed solution.
Frame preemption is hard to implement and has not seen wide industry support.
IEEE 802.1Qcr Asynchronous Traffic Shaping
Credit-based, time-aware and cyclic (peristaltic) shapers require network-wide coordinated time and utilize network bandwidth inefficiently, as they enforce packet transmission at periodic cycles. The IEEE 802.1Qcr Asynchronous Traffic Shaper (ATS) operates asynchronously based on local clocks in each bridge, improving link utilization for mixed traffic types, such as periodic with arbitrary periods, sporadic (event driven), and rate-constrained.
ATS employs the urgency-based scheduler (UBS), which prioritizes urgent traffic using per-class queuing and per-stream reshaping. Asynchronicity is achieved by interleaved shaping with traffic characterization based on a token bucket emulation (TBE) model, to eliminate the burstiness cascade effects of per-class shaping. The TBE shaper controls the traffic by average transmission rate but allows a certain level of burst traffic. When there is a sufficient number of tokens in the bucket, transmission starts immediately; otherwise the queue's gate closes for the time needed to accumulate enough tokens.
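The gating behaviour of such a token bucket can be sketched as follows; this is a simplified model of the idea described above (not the 802.1Qcr algorithms themselves), with arbitrary example rates and sizes:
# Toy token bucket: tokens (in bits) accumulate at the committed rate up to the
# burst size; a frame is eligible immediately if enough tokens are available,
# otherwise the gate stays closed for the time needed to accumulate the rest.
def eligibility_delay(tokens, frame_bits, rate_bps, burst_bits):
    tokens = min(tokens, burst_bits)
    if tokens >= frame_bits:
        return 0.0                                  # transmit immediately
    return (frame_bits - tokens) / rate_bps         # seconds until enough tokens
print(eligibility_delay(tokens=4_000, frame_bits=12_000,
                        rate_bps=10_000_000, burst_bits=16_000))  # 0.0008 s = 800 µs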
The UBS is an improvement on Rate-Controlled Service Disciplines (RCSDs), which control the selection and transmission of each individual frame at each hop, decoupling stream bandwidth from the delay bound by separating rate control from packet scheduling, and using static priorities with First Come First Serve and Earliest Due Date First queuing.
UBS queuing has two levels of hierarchy: per-flow shaped queues, with fixed priority assigned by the upstream sources according to application-defined packet transmission times, allowing arbitrary transmission period for each stream, and shared queues that merge streams with the same internal priority from several shapers. This separation of queuing has low implementation complexity while ensuring that frames with higher priority will bypass the lower priority frames.
The shared queues are highly isolated, with policies for separate queues for frames from different transmitters, the same transmitter but different priority, and the same transmitter and priority but a different priority at the receiver. Queue isolation prevents propagation of malicious data, assuring that ordinary streams will get no interference, and enables flexible stream or transmitter blocking by administrative action.
The minimum number of shared queues is the number of ports minus one, and more with additional isolation policies. Shared queues have scheduler internal fixed priority, and frames are transmitted on the First Come First Serve principle.
Worst case clock sync inaccuracy does not decrease link utilization, contrary to time-triggered approaches such as TAS (Qbv) and CQF (Qch).
Selection of communication paths and fault-tolerance
IEEE 802.1Qca Path Control and Reservation (PCR)
IEEE 802.1Qca Path Control and Reservation (PCR) specifies extensions to the Intermediate System to Intermediate System (IS-IS) protocol to configure multiple paths in bridged networks.
The IEEE 802.1Qca standard uses Shortest Path Bridging (SPB) with a software-defined networking (SDN) hybrid mode - the IS-IS protocol handles basic functions, while the SDN controller manages explicit paths using Path Computation Elements (PCEs) at dedicated server nodes.
IEEE 802.1Qca integrates control protocols to manage multiple topologies, configure an explicit forwarding path (a predefined path for each stream), reserve bandwidth, provide data protection and redundancy, and distribute flow synchronization and flow control messages. These are derived from the Equal Cost Tree (ECT), Multiple Spanning Tree Instance (MSTI), Internal Spanning Tree (IST), and Explicit Tree (ET) protocols.
IEEE 802.1CB Frame Replication and Elimination for Reliability (FRER)
IEEE 802.1CB Frame Replication and Elimination for Reliability (FRER) sends duplicate copies of each frame over multiple disjoint paths, to provide proactive seamless redundancy for control applications that cannot tolerate packet losses.
The packet replication can use traffic class and path information to minimize network congestion. Each replicated frame has a sequence identification number, used to re-order and merge frames and to discard duplicates.
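Duplicate elimination at the merge point can be illustrated with a minimal Python sketch; it keeps a simple set of already-seen sequence numbers rather than the bounded recovery window a real implementation would use, and the arrival order is an invented example:
# Toy frame replication and elimination: copies of each frame arrive over two
# disjoint paths; the first copy of a sequence number is delivered and later
# duplicates are discarded.
seen = set()
delivered = []
arrivals = [("path A", 1), ("path B", 1), ("path B", 2),
            ("path A", 2), ("path A", 3)]          # (path, sequence number)
for path, seq in arrivals:
    if seq not in seen:
        seen.add(seq)
        delivered.append(seq)
print(delivered)   # [1, 2, 3] - one copy of each frame, regardless of path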
FRER requires centralized configuration management and needs to be used with 802.1Qcc and 802.1Qca. The industrial fault-tolerance protocols HSR and PRP specified in IEC 62439-3 are supported.
Current projects
IEEE 802.1CS Link-Local Registration Protocol
MRP state data for a stream takes 1500 bytes. With additional traffic streams and larger networks, the size of the database proportionally increases and MRP updates between bridge neighbors significantly slow down. The Link-Local Registration Protocol (LRP) is optimized for a larger database size of about 1 Mbyte with efficient replication that allows incremental updates. Unresponsive nodes with stale data are automatically discarded. While MRP is application specific, with each registered application defining its own set of operations, LRP is application neutral.
IEEE 802.1Qdd Resource Allocation Protocol
SRP and MSRP are primarily designed for AV applications - their distributed configuration model is limited to Stream Reservation (SR) Classes A and B defined by the Credit-Based Shaper (CBS), whereas IEEE 802.1Qcc includes a more centralized CNC configuration model supporting all new TSN features such as additional shapers, frame preemption, and path redundancy.
IEEE P802.1Qdd project updates the distributed configuration model by defining new peer-to-peer Resource Allocation Protocol signaling built upon P802.1CS Link-local Registration Protocol. RAP will improve scalability and provide dynamic reservation for a larger number of streams with support for redundant transmission over multiple paths in 802.1CB FRER, and autoconfiguration of sequence recovery.
RAP supports the 'topology-independent per-hop latency calculation' capability of TSN shapers such as 802.1Qch Cyclic Queuing and Forwarding (CQF) and P802.1Qcr Asynchronous Traffic Shaping (ATS). It will also improve performance under high load and support proxying and enhanced diagnostics, all while maintaining backward compatibility and interoperability with MSRP.
IEEE 802.1ABdh Link Layer Discovery Protocol v2
IEEE P802.1ABdh Station and Media Access Control Connectivity Discovery - Support for Multiframe Protocol Data Units (LLDPv2) updates LLDP to support IETF Link State Vector Routing protocol and improve efficiency of protocol messages.
YANG Data Models
The IEEE 802.1Qcp standard provides a YANG data model to support status reporting and plug-and-play configuration of equipment such as Media Access Control (MAC) Bridges, Two-Port MAC Relays (TPMRs), Customer Virtual Local Area Network (VLAN) Bridges, and Provider Bridges, and to support the 802.1X security and 802.1AX link aggregation standards.
YANG is a data modeling language for configuration and state data, notifications, and remote procedure calls, used to set up device configuration with network management protocols such as NETCONF/RESTCONF.
DetNet
The IETF Deterministic Networking (DetNet) Working Group focuses on defining deterministic data paths with high reliability and bounds on latency, loss, and packet delay variation (jitter), for applications such as audio and video streaming, industrial automation, and vehicle control.
The goals of Deterministic Networking are to migrate time-critical, high-reliability industrial and audio-video applications from special-purpose fieldbus networks to IP packet networks. To achieve these goals, DetNet uses resource allocation to manage buffer sizes and transmission rates in order to satisfy end-to-end latency requirements, and service protection against failures with redundancy over multiple paths and explicit routes to reduce packet loss and reordering. The same physical network shall handle both time-critical reserved traffic and regular best-effort traffic, and unused reserved bandwidth shall be released for best-effort traffic.
DetNet operates at the IP Layer 3 routed segments using a Software-Defined Networking layer to provide IntServ and DiffServ integration, and delivers services over lower Layer 2 bridged segments using technologies such as MPLS and IEEE 802.1 AVB/TSN.
Traffic Engineering (TE) routing protocols translate DetNet flow specification to AVB/TSN controls for queuing, shaping, and scheduling algorithms, such as IEEE 802.1Qav credit-based shaper, IEEE802.1Qbv time-triggered shaper with a rotating time scheduler, IEEE802.1Qch synchronized double buffering, 802.1Qbu/802.3br Ethernet packet pre-emption, and 802.1CB frame replication and elimination for reliability. Also protocol interworking defined by IEEE 802.1CB is used to advertise TSN sub-network capabilities to DetNet flows via the Active Destination MAC and VLAN Stream identification functions. DetNet flows are matched by destination MAC address, VLAN ID and priority parameters to Stream ID and QoS requirements for talkers and listeners in the AVB/TSN sub-network.
Standards
References
External links
IEEE 802.1 Time-Sensitive Networking Task Group
IEEE 802.1 public document archive
Real-time Ethernet redefined
Time Sensitive Networking (TSN) Vision: Unifying Business & Industrial Automation
Is TSN Activity Igniting Another Fieldbus WAR?
research project related to TSN applications in aircraft
Quick Start Guide for visualizing TSN
TSN Training, Marc Boyer (ONERA), Pierre Julien Chaine (Airbus Defence and Space)
IEEE standards
Ethernet
Industrial Ethernet
Control engineering
Audio engineering
Automotive electronics
Network protocols
|
2515655
|
https://en.wikipedia.org/wiki/DisplayPort
|
DisplayPort
|
DisplayPort (DP) is a digital display interface developed by a consortium of PC and chip manufacturers and standardized by the Video Electronics Standards Association (VESA). It is primarily used to connect a video source to a display device such as a computer monitor. It can also carry audio, USB, and other forms of data.
DisplayPort was designed to replace VGA, FPD-Link, and Digital Visual Interface (DVI). It is backward compatible with other interfaces, such as HDMI and DVI, through the use of either active or passive adapters.
It is the first display interface to rely on packetized data transmission, a form of digital communication found in technologies such as Ethernet, USB, and PCI Express. It permits the use of internal and external display connections. Unlike legacy standards that transmit a clock signal with each output, its protocol is based on small data packets known as micro packets, which can embed the clock signal in the data stream, allowing higher resolution using fewer pins. The use of data packets also makes it extensible, meaning more features can be added over time without significant changes to the physical interface.
DisplayPort can be used to transmit audio and video simultaneously, although each can be transmitted without the other. The video signal path can range from six to sixteen bits per color channel, and the audio path can have up to eight channels of 24-bit, 192kHz uncompressed PCM audio. A bidirectional, half-duplex auxiliary channel carries device management and device control data for the Main Link, such as VESA EDID, MCCS, and DPMS standards. The interface is also capable of carrying bidirectional USB signals.
The interface uses an LVDS signal protocol that is not compatible with DVI or HDMI. However, dual-mode DisplayPort ports are designed to transmit a single-link DVI or HDMI protocol (TMDS) across the interface through the use of an external passive adapter, enabling compatibility mode and converting the signal from 3.3 to 5 volts. For analog VGA/YPbPr and dual-link DVI, a powered active adapter is required for compatibility and does not rely on dual mode. Active VGA adapters are powered directly by the DisplayPort connector, while active dual-link DVI adapters typically rely on an external power source such as USB.
Versions
1.0 to 1.1
The first version, 1.0, was approved by VESA on 3 May 2006. Version 1.1 was ratified on 2 April 2007, and version 1.1a was ratified on 11 January 2008.
DisplayPort 1.0–1.1a allow a maximum bandwidth of 10.8Gbit/s (8.64Gbit/s data rate) over a standard 4-lane main link. DisplayPort cables up to 2 meters in length are required to support the full 10.8Gbit/s bandwidth. DisplayPort 1.1 allows devices to implement alternative link layers such as fiber optic, allowing a much longer reach between source and display without signal degradation, although alternative implementations are not standardized. It also includes HDCP in addition to DisplayPort Content Protection (DPCP). The DisplayPort1.1a standard can be downloaded for free from the VESA website.
1.2
DisplayPort version 1.2 was introduced on 7 January 2010. The most significant improvement of the new version is the doubling of the effective bandwidth to 17.28Gbit/s in High Bit Rate 2 (HBR2) mode, which allows increased resolutions, higher refresh rates, and greater color depth. Other improvements include multiple independent video streams (daisy-chain connection with multiple monitors) called Multi-Stream Transport, facilities for stereoscopic 3D, increased AUX channel bandwidth (from 1Mbit/s to 720Mbit/s), more color spaces including xvYCC, scRGB, and Adobe RGB 1998, and Global Time Code (GTC) for sub 1μs audio/video synchronisation. Also Apple Inc.'s Mini DisplayPort connector, which is much smaller and designed for laptop computers and other small devices, is compatible with the new standard.
1.2a
DisplayPort version 1.2a was released in January 2013 and may optionally include VESA's Adaptive Sync. AMD's FreeSync uses the DisplayPort Adaptive-Sync feature for operation. FreeSync was first demonstrated at CES 2014 on a Toshiba Satellite laptop by making use of the Panel-Self-Refresh (PSR) feature from the Embedded DisplayPort standard, and after a proposal from AMD, VESA later adapted the Panel-Self-Refresh feature for use in standalone displays and added it as an optional feature of the main DisplayPort standard under the name "Adaptive-Sync" in version 1.2a. As it is an optional feature, support for Adaptive-Sync is not required for a display to be DisplayPort 1.2a-compliant.
1.3
DisplayPort version 1.3 was approved on 15 September 2014. This standard increases overall transmission bandwidth to 32.4Gbit/s with the new HBR3 mode featuring 8.1Gbit/s per lane (up from 5.4Gbit/s with HBR2 in version 1.2), for a total data throughput of 25.92Gbit/s after factoring in 8b/10b encoding overhead. This bandwidth is enough for a 4K UHD display (3840 × 2160) at 120Hz with 24bit/px RGB color, a 5K display (5120 × 2880) at 60Hz with 30bit/px RGB color, or an 8K UHD display (7680 × 4320) at 30Hz with 24bit/px RGB color. Using Multi-Stream Transport (MST), a DisplayPort port can drive two 4K UHD (3840 × 2160) displays at 60Hz, or up to four WQXGA (2560 × 1600) displays at 60Hz with 24bit/px RGB color. The new standard includes mandatory Dual-mode for DVI and HDMI adapters, implementing the HDMI2.0 standard and HDCP2.2 content protection. The Thunderbolt 3 connection standard was originally to include DisplayPort1.3 capability, but the final release ended up with only version 1.2. VESA's Adaptive Sync feature in DisplayPort version 1.3 remains an optional part of the specification.
1.4
DisplayPort version 1.4 was published 1 March 2016. No new transmission modes are defined, so HBR3 (32.4Gbit/s) as introduced in version 1.3 still remains as the highest available mode. DisplayPort1.4 adds support for Display Stream Compression 1.2 (DSC), Forward Error Correction, HDR10 metadata defined in CTA-861.3, including static and dynamic metadata and the Rec. 2020 color space, for HDMI interoperability, and extends the maximum number of inline audio channels to 32.
DSC is a compression algorithm that reduces the size of the data stream by up to a 3:1 ratio. Although not mathematically lossless, DSC meets the ISO 29170 standard for "visually lossless" compression in most images, meaning the result cannot be distinguished from uncompressed video. Using DSC with HBR3 transmission rates, DisplayPort1.4 can support 8K UHD (7680 × 4320) at 60Hz or 4K UHD (3840 × 2160) at 120Hz with 30bit/px RGB color and HDR. 4K at 60Hz 30bit/px RGB/HDR can be achieved without the need for DSC. On displays which do not support DSC, the maximum limits are unchanged from DisplayPort1.3 (4K 120Hz, 5K 60Hz, 8K 30Hz).
1.4a
DisplayPort version 1.4a was published in April 2018. VESA made no official press release for this version. It updated DisplayPort's DSC implementation from DSC 1.2 to 1.2a.
2.0
VESA stated that DP 2.0 is the first major update to the DisplayPort standard since March 2016, and provides up to a ≈3× improvement in data rate (from 25.92 to 77.37Gbit/s) compared to the previous version of DisplayPort (1.4a), as well as new capabilities to address the future performance requirements of traditional displays. These include beyond 8K resolutions, higher refresh rates and high dynamic range (HDR) support at higher resolutions, improved support for multiple display configurations, as well as improved user experience with augmented/virtual reality (AR/VR) displays, including support for 4K-and-beyond VR resolutions.
Products incorporating DisplayPort 2.0 are not projected by VESA to appear on the market until later in 2021.
On 26 June 2019, VESA formally released the DisplayPort 2.0 standard. According to a roadmap published by VESA in September 2016, a new version of DisplayPort was intended to be launched in "early 2017". It would have improved the link rate from 8.1 to 10.0Gbit/s, a 24% increase. This would have increased the total bandwidth from 32.4Gbit/s to 40.0Gbit/s. However, no new version was released in 2017, likely delayed to make further improvements after the HDMI Forum announced in January 2017 that their next standard (HDMI2.1) would offer up to 48Gbit/s of bandwidth. According to a press release on 3 January 2018, "VESA is also currently engaged with its members in the development of the next DisplayPort standard generation, with plans to increase the data rate enabled by DisplayPort by two-fold and beyond. VESA plans to publish this update within the next 18 months." At CES 2019, VESA announced that the new version would support 8K @ 60Hz without compression and was expected to be released in the first half of 2019.
DP 2.0 configuration examples
With the increased bandwidth enabled by DP 2.0, VESA offers a high degree of versatility and configurations for higher display resolutions and refresh rates. In addition to the above-mentioned 8K resolution at 60Hz with HDR support, DP 2.0 across the native DP connector or through USB-C as DisplayPort Alt Mode enables a variety of high-performance configurations:
Single display resolutions
One 16K (15360 × 8640) display @ 60Hz with 10bpc (30bit/px, HDR) RGB/ 4:4:4 color (with DSC)
One 10K (10240 × 4320) display @ 60Hz and 8bpc (24bit/px, SDR) RGB/ 4:4:4 color (uncompressed)
Dual display resolutions
Two 8K (7680 × 4320) displays @ 120Hz and 10bpc (30bit/px, HDR) RGB/ 4:4:4 color (with DSC)
Two 4K (3840 × 2160) displays @ 144Hz and 8bpc (24bit/px, SDR) RGB/ 4:4:4 color (uncompressed)
Triple display resolutions
Three 10K (10240 × 4320) displays @ 60Hz and 10bpc (30bit/px, HDR) RGB/ 4:4:4 color (with DSC)
Three 4K (3840 × 2160) displays @ 90Hz and 10bpc (30bit/px, HDR) RGB/ 4:4:4 color (uncompressed)
When using only two lanes on the USB-C connector via DP Alt Mode to allow for simultaneous SuperSpeed USB data and video, DP 2.0 can enable such configurations as:
Three 4K (3840 × 2160) displays @ 144Hz and 10bpc (30bit/px, HDR) RGB/ 4:4:4 color (with DSC)
Two 4K × 4K (4096 × 4096) displays (for AR/VR headsets) @ 120Hz and 10bpc (30bit/px, HDR) RGB/ 4:4:4 color (with DSC)
Three QHD (2560 × 1440) displays @ 120Hz and 8bpc (24bit/px, SDR) RGB/ 4:4:4 color (uncompressed)
One 8K (7680 × 4320) display @ 30Hz and 10bpc (30bit/px, HDR) RGB/ 4:4:4 color (uncompressed)
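As a rough plausibility check (not part of the VESA specification), the C sketch below computes the raw pixel payload of a few of the uncompressed configurations listed above and compares it against the approximate usable DP 2.0 data rates quoted elsewhere in this article (77.37Gbit/s on four lanes, roughly half that on two lanes). Blanking intervals and protocol overhead are ignored, so real requirements are somewhat higher than these figures.

/* Rough plausibility check: raw pixel payload only, ignoring blanking
 * intervals and protocol overhead. */
#include <stdio.h>

static double payload_gbps(double w, double h, double hz, double bpp)
{
    return w * h * hz * bpp / 1e9;   /* uncompressed pixel data rate in Gbit/s */
}

int main(void)
{
    const double four_lanes = 77.37;   /* approx. usable data rate, UHBR 20, 4 lanes */
    const double two_lanes  = 38.69;   /* approx. usable data rate, UHBR 20, 2 lanes */

    printf("10K 60 Hz 8 bpc:           %5.1f of %5.2f Gbit/s\n",
           payload_gbps(10240, 4320, 60, 24), four_lanes);
    printf("2 x 4K 144 Hz 8 bpc:       %5.1f of %5.2f Gbit/s\n",
           2 * payload_gbps(3840, 2160, 144, 24), four_lanes);
    printf("8K 30 Hz 10 bpc (2 lanes): %5.1f of %5.2f Gbit/s\n",
           payload_gbps(7680, 4320, 30, 30), two_lanes);
    return 0;
}

Each of these active-pixel figures (63.7, 57.3 and 29.9Gbit/s respectively) falls below the corresponding link capacity, which is consistent with the uncompressed configurations listed above.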
Specifications
Main specifications
Main link
The DisplayPort main link is used for transmission of video and audio. The main link consists of a number of unidirectional serial data channels which operate concurrently, called lanes. A standard DisplayPort connection has 4 lanes, though some applications of DisplayPort implement more, such as the Thunderbolt 3 interface which implements up to 8 lanes of DisplayPort.
In a standard DisplayPort connection, each lane has a dedicated set of twisted-pair wires, and transmits data across it using differential signaling. This is a self-clocking system, so no dedicated clock signal channel is necessary. Unlike DVI and HDMI, which vary their transmission speed to the exact rate required for the specific video format, DisplayPort only operates at a few specific speeds; any excess bits in the transmission are filled with "stuffing symbols".
In DisplayPort versions 1.0–1.4a, the data is encoded using ANSI 8b/10b encoding prior to transmission. With this scheme, only 8 out of every 10 transmitted bits represent data; the extra bits are used for DC balancing (ensuring a roughly equal number of 1s and 0s). As a result, the rate at which data can be transmitted is only 80% of the physical bitrate. The transmission speeds are also sometimes expressed in terms of the "Link Symbol Rate", which is the rate at which these 8b/10b-encoded symbols are transmitted (i.e. the rate at which groups of 10 bits are transmitted, 8 of which represent data). The following transmission modes are defined in versions 1.0–1.4a:
RBR (Reduced Bit Rate): 1.62Gbit/s bandwidth per lane (162MHz link symbol rate)
HBR (High Bit Rate): 2.70Gbit/s bandwidth per lane (270MHz link symbol rate)
HBR2 (High Bit Rate 2): 5.40Gbit/s bandwidth per lane (540MHz link symbol rate), introduced in DP1.2
HBR3 (High Bit Rate 3): 8.10Gbit/s bandwidth per lane (810MHz link symbol rate), introduced in DP1.3
DisplayPort 2.0 uses 128b/132b encoding; each group of 132 transmitted bits represents 128 bits of data. This scheme has an efficiency of ≈96.97%. In addition, forward error correction (FEC) consumes a small amount of the link bandwidth, resulting in an overall efficiency of ≈96.7%. The following transmission modes are added in DP 2.0:
UHBR 10 (Ultra High Bit Rate 10): 10.0Gbit/s bandwidth per lane
UHBR 13.5 (Ultra High Bit Rate 13.5): 13.5Gbit/s bandwidth per lane
UHBR 20 (Ultra High Bit Rate 20): 20.0Gbit/s bandwidth per lane
The total bandwidth of the main link in a standard 4-lane connection is the aggregate of all lanes:
RBR: 4 × 1.62Gbit/s = 6.48Gbit/s bandwidth (data rate of 5.184Gbit/s or 648MB/s with 8b/10b encoding)
HBR: 4 × 2.70Gbit/s = 10.80Gbit/s bandwidth (data rate of 8.64Gbit/s or 1.08GB/s)
HBR2: 4 × 5.40Gbit/s = 21.60Gbit/s bandwidth (data rate of 17.28Gbit/s or 2.16GB/s)
HBR3: 4 × 8.10Gbit/s = 32.40Gbit/s bandwidth (data rate of 25.92Gbit/s or 3.24GB/s)
UHBR 10: 4 × 10.0Gbit/s = 40.00Gbit/s bandwidth (data rate of 38.69Gbit/s or 4.84GB/s with 128b/132b encoding and FEC)
UHBR 13.5: 4 × 13.5Gbit/s = 54.00Gbit/s bandwidth (data rate of 52.22Gbit/s or 6.52GB/s)
UHBR 20: 4 × 20.0Gbit/s = 80.00Gbit/s bandwidth (data rate of 77.37Gbit/s or 9.69GB/s)
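As a cross-check of the figures above, the following C sketch recomputes the 4-lane totals from the per-lane rate and an approximate encoding efficiency (0.80 for the 8b/10b modes; the combined 128b/132b-plus-FEC efficiency is approximated here as 0.967, so the UHBR data rates it prints may differ from the official values in the last decimal place).

/* Sketch: recomputes the 4-lane bandwidth and data-rate figures above. */
#include <stdio.h>

int main(void) {
    struct { const char *name; double lane_gbps; double efficiency; } modes[] = {
        { "RBR",        1.62, 0.80  },
        { "HBR",        2.70, 0.80  },
        { "HBR2",       5.40, 0.80  },
        { "HBR3",       8.10, 0.80  },
        { "UHBR 10",   10.00, 0.967 },
        { "UHBR 13.5", 13.50, 0.967 },
        { "UHBR 20",   20.00, 0.967 },
    };
    const int lanes = 4;
    for (unsigned i = 0; i < sizeof modes / sizeof modes[0]; i++) {
        double bandwidth = lanes * modes[i].lane_gbps;       /* raw bit rate      */
        double data_rate = bandwidth * modes[i].efficiency;  /* usable payload    */
        printf("%-10s %6.2f Gbit/s bandwidth, %6.2f Gbit/s data rate (%5.2f GB/s)\n",
               modes[i].name, bandwidth, data_rate, data_rate / 8.0);
    }
    return 0;
}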
The transmission mode used by the DisplayPort main link is negotiated by the source and sink device when a connection is made, through a process called Link Training. This process determines the maximum possible speed of the connection. If the quality of the DisplayPort cable is insufficient to reliably handle HBR2 speeds for example, the DisplayPort devices will detect this and switch down to a lower mode to maintain a stable connection. The link can be re-negotiated at any time if a loss of synchronization is detected.
Audio data is transmitted across the main link during the video blanking intervals (short pauses between each line and frame of video data).
Auxiliary channel
The DisplayPort AUX channel is a half-duplex (bidirectional) data channel used for miscellaneous additional data beyond video and audio, such as EDID (I2C) or CEC commands. This bidirectional data channel is required, since the video lane signals are unidirectional from source to display. AUX signals are transmitted across a dedicated set of twisted-pair wires. DisplayPort1.0 specified Manchester encoding with a 2Mbaud signal rate (1Mbit/s data rate). DisplayPort1.2 introduced a second transmission mode called FAUX (Fast AUX), which operates at 720Mbaud with 8b/10b encoding (576Mbit/s data rate). This can be used to implement additional transport protocols such as USB2.0 (480Mbit/s) without the need for an additional cable.
Cables and connectors
Cables
Compatibility and feature support
All DisplayPort cables are compatible with all DisplayPort devices, regardless of the version of each device or the cable certification level.
All features of DisplayPort will function across any DisplayPort cable. DisplayPort does not have multiple cable designs; all DP cables have the same basic layout and wiring, and will support any feature including audio, daisy-chaining, G-Sync/FreeSync, HDR, and DSC.
DisplayPort cables differ in their transmission speed support. DisplayPort specifies seven different transmission modes (RBR, HBR, HBR2, HBR3, UHBR10, UHBR13.5, and UHBR20) which support progressively higher bandwidths. Not all DisplayPort cables are capable of all seven transmission modes. VESA offers certifications for various levels of bandwidth. These certifications are optional, and not all DisplayPort cables are certified by VESA.
Cables with limited transmission speed are still compatible with all DisplayPort devices, but may place limits on the maximum resolution or refresh rate available.
DisplayPort cables are not classified by "version". Although cables are commonly labeled with version numbers, with HBR2 cables advertised as "DisplayPort1.2 cables" for example, this notation is not permitted by VESA. The use of version numbers with cables can falsely imply that a DisplayPort1.4 display requires a "DisplayPort1.4 cable", or that features introduced in version 1.4 such as HDR or DSC will not function with older "DP1.2 cables". DisplayPort cables are classified only by their bandwidth certification level (RBR, HBR, HBR2, HBR3, etc.), if they have been certified at all.
Cable bandwidth and certifications
Not all DisplayPort cables are capable of functioning at the highest levels of bandwidth. Cables may be submitted to VESA for an optional certification at various bandwidth levels. VESA offers three levels of cable certification: RBR, Standard, and DP8K. These certify DisplayPort cables for proper operation at the following speeds:
In April 2013, VESA published an article stating that the DisplayPort cable certification did not have distinct tiers for HBR and HBR2 bandwidth, and that any certified standard DisplayPort cable—including those certified under DisplayPort1.1—would be able to handle the 21.6Gbit/s bandwidth of HBR2 that was introduced with the DisplayPort 1.2 standard. The DisplayPort1.2 standard defines only a single specification for High Bit Rate cable assemblies, which is used for both HBR and HBR2 speeds, although the DP cable certification process is governed by the DisplayPort PHY Compliance Test Standard (CTS) and not the DisplayPort standard itself.
The DP8K certification was announced by VESA in January 2018, and certifies cables for proper operation at HBR3 speeds (8.1Gbit/s per lane, 32.4Gbit/s total).
In June 2019, with the release of version 2.0 of the DisplayPort Standard, VESA announced that the DP8K certification was also sufficient for the new UHBR 10 transmission mode. No new certifications were announced for the UHBR 13.5 and UHBR 20 modes. VESA is encouraging displays to use tethered cables for these speeds, rather than releasing standalone cables onto the market.
The use of Display Stream Compression (DSC), introduced in DisplayPort1.4, also greatly reduces the bandwidth requirements for the cable. Formats which would normally be beyond the limits of DisplayPort1.4, such as 4K (3840 × 2160) at 144Hz 8bpc RGB/ 4:4:4 (31.4Gbit/s data rate when uncompressed), can only be implemented by using DSC. This reduces the physical bandwidth requirements by 2–3×, placing them well within the capabilities of an HBR2-rated cable.
This exemplifies why DisplayPort cables are not classified by "version"; although DSC was introduced in version 1.4, this does not mean it needs a so-called "DP1.4 cable" (an HBR3-rated cable) to function. HBR3 cables are only required for applications which exceed HBR2-level bandwidth, not simply any application involving DisplayPort1.4. If DSC is used to reduce the bandwidth requirements to HBR2 levels, then an HBR2-rated cable will be sufficient.
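The arithmetic behind this point can be illustrated with a short sketch; the 31.4Gbit/s uncompressed figure and the 3:1 DSC ratio are taken from the text above, and the HBR2/HBR3 data rates from the Main link section.

/* Sketch: why an HBR2-rated cable suffices for 4K 144 Hz 8 bpc when DSC is used. */
#include <stdio.h>

int main(void)
{
    const double uncompressed = 31.4;   /* Gbit/s, 4K 144 Hz 24 bit/px, incl. blanking */
    const double dsc_ratio    = 3.0;    /* 24 bit/px compressed to 8 bit/px            */
    const double hbr2         = 17.28;  /* usable 4-lane HBR2 data rate                */
    const double hbr3         = 25.92;  /* usable 4-lane HBR3 data rate                */

    printf("uncompressed: %.1f Gbit/s (exceeds HBR3's %.2f, so DSC is needed)\n",
           uncompressed, hbr3);
    printf("with 3:1 DSC: %.1f Gbit/s (fits within HBR2's %.2f)\n",
           uncompressed / dsc_ratio, hbr2);
    return 0;
}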
Cable length
The DisplayPort standard does not specify any maximum length for cables, though the DisplayPort 1.2 standard does set a minimum requirement that all cables up to 2 meters in length must support HBR2 speeds (21.6Gbit/s), and all cables of any length must support RBR speeds (6.48Gbit/s). Cables longer than 2 meters may or may not support HBR/HBR2 speeds, and cables of any length may or may not support HBR3 speeds.
Connectors and pin configuration
DisplayPort cables and ports may have either a "full-size" connector or a "mini" connector. These connectors differ only in physical shape—the capabilities of DisplayPort are the same regardless of which connector is used. Using a Mini DisplayPort connector does not affect performance or feature support of the connection.
Full-size DisplayPort connector
The standard DisplayPort connector (now referred to as a "full-size" connector to distinguish it from the mini connector) was the sole connector type introduced in DisplayPort1.0. It is a 20-pin single-orientation connector with a friction lock and an optional mechanical latch. The standard DisplayPort receptacle has dimensions of 16.10mm (width) × 4.76mm (height) × 8.88mm (depth).
The standard DisplayPort connector pin allocation is as follows:
12 pins for the main link – the main link consists of four shielded twisted pairs. Each pair requires 3 pins; one for each of the two wires, and a third for the shield. (pins 1–12)
3 pins for the auxiliary channel – the auxiliary channel uses another 3-pin shielded twisted pair (pins 15–17)
1 pin for HPD – hot-plug detection pin (pin 18)
2 pins for power – 3.3V power and return line (pins 19 and 20)
2 additional ground pins – (pins 13 and 14)
Mini DisplayPort connector
The Mini DisplayPort connector was developed by Apple for use in their computer products. It was first announced in October 2008 for use in the new MacBooks and Cinema Display. In 2009, VESA adopted it as an official standard, and in 2010 the specification was merged into the main DisplayPort standard with the release of DisplayPort1.2. Apple freely licenses the specification to VESA.
The Mini DisplayPort (mDP) connector is a 20-pin single-orientation connector with a friction lock. Unlike the full-size connector, it does not have an option for a mechanical latch. The mDP receptacle has dimensions of 7.50mm (width) × 4.60mm (height) × 4.99mm (depth). The mDP pin assignments are the same as the full-size DisplayPort connector.
DP_PWR (pin 20)
Pin 20 on the DisplayPort connector, called DP_PWR, provides 3.3V (±10%) DC power at up to 500mA (minimum power delivery of 1.5W). This power is available from all DisplayPort receptacles, on both source and display devices. DP_PWR is intended to provide power for adapters, amplified cables, and similar devices, so that a separate power cable is not necessary.
Standard DisplayPort cable connections do not use the DP_PWR pin. Connecting the DP_PWR pins of two devices directly together through a cable can create a short circuit which can potentially damage devices, since the DP_PWR pins on two devices are unlikely to have exactly the same voltage (especially with a ±10% tolerance). For this reason, the DisplayPort1.1 and later standards specify that passive DisplayPort-to-DisplayPort cables must leave pin 20 unconnected.
However, in 2013 VESA announced that after investigating reports of malfunctioning DisplayPort devices, it had discovered that a large number of non-certified vendors were manufacturing their DisplayPort cables with the DP_PWR pin connected.
The stipulation that the DP_PWR wire be omitted from standard DisplayPort cables was not present in the DisplayPort1.0 standard. However, DisplayPort products (and cables) did not begin to appear on the market until 2008, long after version 1.0 had been replaced by version 1.1. The DisplayPort1.0 standard was never implemented in commercial products.
Resolution and refresh frequency limits
The tables below describe the refresh frequencies that can be achieved with each transmission mode. In general, maximum refresh frequency is determined by the transmission mode (RBR, HBR, HBR2, HBR3, UHBR 10, UHBR 13.5, or UHBR 20). These transmission modes were introduced to the DisplayPort standard as follows:
RBR and HBR were defined in the initial release of the DisplayPort standard, version 1.0
HBR2 was introduced in version 1.2
HBR3 was introduced in version 1.3
UHBR 10, UHBR 13.5, and UHBR 20 were introduced in version 2.0
However, transmission mode support is not necessarily dictated by a device's claimed "DisplayPort version number". For example, older versions of the DisplayPort Marketing Guidelines allowed a device to be labeled as "DisplayPort 1.2" if it supported the MST feature, even if it didn't support the HBR2 transmission mode. Newer versions of the guidelines have removed this clause, and currently (as of the June 2018 revision) there are no guidelines on the usage of DisplayPort version numbers in products. DisplayPort "version numbers" are therefore not a reliable indication of what transmission speeds a device can support.
In addition, individual devices may have their own arbitrary limitations beyond transmission speed. For example, NVIDIA Kepler GK104 GPUs (such as the GeForce GTX 680 and 770) support "DisplayPort 1.2" with the HBR2 transmission mode, but are limited to 540Mpx/s, only three-quarters of the maximum possible with HBR2. Consequently, certain devices may have limitations that differ from those listed in the following tables.
To support a particular format, the source and display devices must both support the required transmission mode, and the DisplayPort cable must also be capable of handling the required bandwidth of that transmission mode. (See: Cables and connectors)
Refresh frequency limits for standard video
Color depth of 8bpc (24bit/px or 16.7 million colors) is assumed for all formats in these tables. This is the standard color depth used on most computer displays. Note that some operating systems refer to this as "32-bit" color depth—this is the same as 24-bit color depth. The 8 extra bits are for alpha channel information, which is only present in software. At the transmission stage, this information has already been incorporated into the primary color channels, so the actual video data transmitted across the cable only contains 24 bits per pixel.
Refresh frequency limits for HDR video
Color depth of 10bpc (30bit/px or 1.07 billion colors) is assumed for all formats in these tables. This color depth is a requirement for various common HDR standards, such as HDR10. It requires 25% more bandwidth than standard 8bpc video.
HDR extensions were defined in version 1.4 of the DisplayPort standard. Some displays support these HDR extensions, but may only implement HBR2 transmission mode if the extra bandwidth of HBR3 is unnecessary (for example, on 4K 60Hz HDR displays). Since there is no definition of what constitutes a "DisplayPort 1.4" device, some manufacturers may choose to label these as "DP 1.2" devices despite their support for DP 1.4 HDR extensions. As a result, DisplayPort "version numbers" should not be used as an indicator of HDR support.
Features
DisplayPort dual-mode (DP++)
DisplayPort Dual-Mode (DP++), also called Dual-Mode DisplayPort, is a standard which allows DisplayPort sources to use simple passive adapters to connect to HDMI or DVI displays. Dual-mode is an optional feature, so not all DisplayPort sources necessarily support DVI/HDMI passive adapters, though in practice nearly all devices do. Officially, the "DP++" logo should be used to indicate a DP port that supports dual-mode, but most modern devices do not use the logo.
Devices which implement dual-mode will detect that a DVI or HDMI adapter is attached, and send DVI/HDMI TMDS signals instead of DisplayPort signals. The original DisplayPort Dual-Mode standard (version 1.0), used in DisplayPort1.1 devices, only supported TMDS clock speeds of up to 165MHz (4.95Gbit/s bandwidth). This is equivalent to HDMI1.2, and is sufficient for up to 1920 × 1200 at 60Hz.
In 2013, VESA released the Dual-Mode 1.1 standard, which added support for up to a 300MHz TMDS clock (9.00Gbit/s bandwidth), and is used in newer DisplayPort1.2 devices. This is slightly less than the 340MHz maximum of HDMI1.4, and is sufficient for up to 1920 × 1080 at 120Hz, 2560 × 1600 at 60Hz, or 3840 × 2160 at 30Hz. Older adapters, which were only capable of the 165MHz speed, were retroactively termed "Type 1" adapters, with the new 300MHz adapters being called "Type 2".
Dual-mode limitations
Limited adapter speed: Although the pinout and digital signal values transmitted by the DP port are identical to a native DVI/HDMI source, the signals are transmitted at DisplayPort's native voltage (3.3V) instead of the 5V used by DVI and HDMI. As a result, dual-mode adapters must contain a level-shifter circuit which changes the voltage. The presence of this circuit places a limit on how quickly the adapter can operate, and therefore newer adapters are required for each higher speed added to the standard.
Unidirectional: Although the dual-mode standard specifies a method for DisplayPort sources to output DVI/HDMI signals using simple passive adapters, there is no counterpart standard to give DisplayPort displays the ability to receive DVI/HDMI input signals through passive adapters. As a result, DisplayPort displays can only receive native DisplayPort signals; any DVI or HDMI input signals must be converted to the DisplayPort format with an active conversion device. DVI and HDMI sources cannot be connected to DisplayPort displays using passive adapters.
Single-link DVI only: Since DisplayPort dual-mode operates by using the pins of the DisplayPort connector to send DVI/HDMI signals, the 20-pin DisplayPort connector can only produce a single-link DVI signal (which uses 19 pins). A dual-link DVI signal uses 25 pins, and is therefore impossible to transmit natively from a DisplayPort connector through a passive adapter. Dual-link DVI signals can only be produced by converting from native DisplayPort output signals with an active conversion device.
Unavailable on USB-C: The DisplayPort Alternate Mode specification for sending DisplayPort signals over a USB-C cable does not include support for the dual-mode protocol. As a result, DP-to-DVI and DP-to-HDMI passive adapters do not function when chained from a USB-C to DP adapter.
Multi-Stream Transport (MST)
Multi-Stream Transport is a feature first introduced in the DisplayPort1.2 standard. It allows multiple independent displays to be driven from a single DP port on the source devices by multiplexing several video streams into a single stream and sending it to a branch device, which demultiplexes the signal into the original streams. Branch devices are commonly found in the form of an MST hub, which plugs into a single DP input port and provides multiple outputs, but it can also be implemented on a display internally to provide a DP output port for daisy-chaining, effectively embedding a 2-port MST hub inside the display. Theoretically, up to 63 displays can be supported, but the combined data rate requirements of all the displays cannot exceed the limits of a single DP port (17.28Gbit/s for a DP1.2 port, or 25.92Gbit/s for a DP 1.3/1.4 port). In addition, the maximum number of links between the source and any device (i.e. the maximum length of a daisy-chain) is 7, and the maximum number of physical output ports on each branch device (such as a hub) is 7. With the release of MST, standard single-display operation has been retroactively named "SST" mode (Single-Stream Transport).
Daisy-chaining is a feature that must be specifically supported by each intermediary display; not all DisplayPort1.2 devices support it. Daisy-chaining requires a dedicated DisplayPort output port on the display. Standard DisplayPort input ports found on most displays cannot be used as a daisy-chain output. Only the last display in the daisy-chain does not need to support the feature specifically or have a DP output port. DisplayPort1.1 displays can also be connected to MST hubs, and can be part of a DisplayPort daisy-chain if they are the last display in the chain.
The host system's software also needs to support MST for hubs or daisy-chains to work. While Microsoft Windows environments have full support for it, Apple operating systems currently do not support MST hubs or DisplayPort daisy-chaining as of macOS 10.15 ("Catalina").
DisplayPort-to-DVI and DisplayPort-to-HDMI adapters/cables may or may not function from an MST output port; support for this depends on the specific device.
MST is supported by USB Type-C DisplayPort Alternate Mode, so standard DisplayPort daisy-chains and MST hubs do function from Type-C sources with a simple Type-C to DisplayPort adapter.
High dynamic range (HDR)
Support for HDR video was introduced in DisplayPort1.4. It implements the CTA 861.3 standard for transport of static HDR metadata in EDID.
Content protection
DisplayPort1.0 includes optional DPCP (DisplayPort Content Protection) from Philips, which uses 128-bit AES encryption. It also features full authentication and session key establishment. Each encryption session is independent, and it has an independent revocation system. This portion of the standard is licensed separately. It also adds the ability to verify the proximity of the receiver and transmitter, a technique intended to ensure users are not bypassing the content protection system to send data out to distant, unauthorized users.
DisplayPort1.1 added optional implementation of industry-standard 56-bit HDCP (High-bandwidth Digital Content Protection) revision 1.3, which requires separate licensing from the Digital Content Protection LLC.
DisplayPort1.3 added support for HDCP2.2, which is also used by HDMI2.0.
Cost
VESA, the creators of the DisplayPort standard, state that the standard is royalty-free to implement. However, in March 2015, MPEG LA issued a press release stating that a royalty rate of $0.20 per unit applies to DisplayPort products manufactured or sold in countries that are covered by one or more of the patents in the MPEG LA license pool, which includes patents from Hitachi Maxell, Philips, Lattice Semiconductor, Rambus, and Sony. In response, VESA updated their DisplayPort FAQ page with the following statement:
As of August 2019, VESA's official FAQ no longer contains a statement mentioning the MPEG LA royalty fees.
While VESA does not charge any per-device royalty fees, VESA requires membership for access to said standards. The minimum cost is presently $5,000 (or $10,000 depending on Annual Corporate Sales Revenue) annually.
Advantages over DVI, VGA and FPD-Link
In December 2010, several computer vendors and display makers including Intel, AMD, Dell, Lenovo, Samsung and LG announced they would begin phasing out FPD-Link, VGA, and DVI-I over the next few years, replacing them with DisplayPort and HDMI.
DisplayPort has several advantages over VGA, DVI, and FPD-Link.
Standard available to all VESA members with an extensible standard to help broad adoption
Fewer lanes with embedded self-clock, reduced EMI with data scrambling and spread spectrum mode
Based on a micro-packet protocol
Allows easy expansion of the standard with multiple data types
Flexible allocation of available bandwidth between audio and video
Multiple video streams over single physical connection (version 1.2)
Long-distance transmission over alternative physical media such as optical fiber (version 1.1a)
High-resolution displays and multiple displays with a single connection, via a hub or daisy-chaining
HBR2 mode with 17.28Gbit/s of effective video bandwidth allows four simultaneous 1080p60 displays (CEA-861 timings), two 2560 × 1600 × 30 bit @ 120Hz (CVT-R timings), or 4K UHD @ 60Hz
HBR3 mode with 25.92Gbit/s of effective video bandwidth, using CVT-R2 timings, allows eight simultaneous 1080p displays (1920 × 1080) @ 60Hz, stereoscopic 4K UHD (3840 × 2160) @ 120Hz, or 5120 × 2880 @ 60Hz each using 24 bit RGB, and up to 8K UHD (7680 × 4320) @ 60Hz using 4:2:0 subsampling
Designed to work for internal chip-to-chip communication
Aimed at replacing internal FPD-Link links to display panels with a unified link interface
Compatible with low-voltage signaling used with sub-micron CMOS fabrication
Can drive display panels directly, eliminating scaling and control circuits and allowing for cheaper and slimmer displays
Link training with adjustable amplitude and preemphasis adapts to differing cable lengths and signal quality
Reduced bandwidth transmission over a 15-meter cable, at least 1920 × 1080p @ 60Hz at 24 bits per pixel
Full bandwidth transmission over a 3-meter cable
High-speed auxiliary channel for DDC, EDID, MCCS, DPMS, HDCP, adapter identification etc. traffic
Can be used for transmitting bi-directional USB, touch-panel data, CEC, etc.
Self-latching connector
Comparison with HDMI
Although DisplayPort has much of the same functionality as HDMI, it is a complementary connection used in different scenarios. A dual-mode DisplayPort port can emit an HDMI signal via a passive adapter.
As of 2008, HDMI Licensing, LLC charged an annual fee of US$10,000 to each high-volume manufacturer and a per-unit royalty rate of US$0.04 to US$0.15. DisplayPort is royalty-free, but implementers thereof are not prevented from charging (royalty or otherwise) for that implementation.
DisplayPort 1.2 has more bandwidth at 21.6Gbit/s (17.28Gbit/s with overhead removed) as opposed to HDMI 2.0's 18Gbit/s (14.4Gbit/s with overhead removed).
DisplayPort 1.3 raises that to 32.4Gbit/s (25.92Gbit/s with overhead removed), and HDMI 2.1 raises that up to 48Gbit/s (42.67Gbit/s with overhead removed), adding an additional TMDS link in place of clock lane. DisplayPort also has the ability to share this bandwidth with multiple streams of audio and video to separate devices.
DisplayPort has historically had higher bandwidth than the HDMI standard available at the same time. The only exception is from HDMI 2.1 (2017) having higher transmission bandwidth @48Gbit/s than DisplayPort 1.3 (2014) @32.4Gbit/s. DisplayPort 2.0 (2019) retook transmission bandwidth superiority @80.0Gbit/s.
DisplayPort in native mode lacks some HDMI features such as Consumer Electronics Control (CEC) commands. The CEC bus allows linking multiple sources with a single display and controlling any of these devices from any remote. DisplayPort 1.3 added the possibility of transmitting CEC commands over the AUX channel. From its very first version, HDMI has featured CEC to support connecting multiple sources to a single display, as is typical for a TV screen. Conversely, Multi-Stream Transport allows connecting multiple displays to a single computer source. This reflects the fact that HDMI originated from consumer electronics companies, whereas DisplayPort is owned by VESA, which started as an organization for computer standards.
HDMI can accept longer maximum cable length than DisplayPort (30 meters vs 15 meters).
HDMI uses a unique Vendor-Specific Block structure, which allows for features such as additional color spaces; however, these features can also be defined through CEA EDID extensions.
Both HDMI and DisplayPort have published specifications for transmitting their signals over the USB-C connector. For more details, see List of devices with video output over USB-C.
Market share
Figures from IDC show that 5.1% of commercial desktops and 2.1% of commercial notebooks released in 2009 featured DisplayPort. The main factors behind this were the phase-out of VGA and the plans of both Intel and AMD to stop building products with FPD-Link by 2013. Nearly 70% of LCD monitors sold in August 2014 in the US, UK, Germany, Japan, and China were equipped with HDMI/DisplayPort technology, up 7.5% on the year, according to Digitimes Research. IHS Markit, an analytics firm, forecast that DisplayPort would surpass HDMI in 2019.
Companion standards
Mini DisplayPort
Mini DisplayPort (mDP) is a standard announced by Apple in the fourth quarter of 2008. Shortly after announcing Mini DisplayPort, Apple announced that it would license the connector technology with no fee. The following year, in early 2009, VESA announced that Mini DisplayPort would be included in the upcoming DisplayPort 1.2 specification.
On 24 February 2011, Apple and Intel announced Thunderbolt, a successor to Mini DisplayPort which adds support for PCI Express data connections while maintaining backwards compatibility with Mini DisplayPort based peripherals.
Micro DisplayPort
Micro DisplayPort would have targeted systems that need ultra-compact connectors, such as phones, tablets and ultra-portable notebook computers. This standard would have been physically smaller than the currently available Mini DisplayPort connectors. The standard was expected to be released by Q2 2014.
DDM
Direct Drive Monitor (DDM) 1.0 standard was approved in December 2008. It allows for controller-less monitors where the display panel is directly driven by the DisplayPort signal, although the available resolutions and color depth are limited to two-lane operation.
Display Stream Compression
Display Stream Compression (DSC) is a VESA-developed video compression algorithm designed to enable increased display resolutions over existing physical interfaces, and make devices smaller and lighter, with longer battery life. It is a low-latency algorithm based on delta PCM coding and YCC-R color space. To achieve its goals, DSC uses "visually lossless" compression, a lossy form of compression described as being that in which "the user cannot tell the difference between a compressed and uncompressed image". The ISO/IEC 29170 standard more specifically defines an algorithm as visually lossless "when all the observers fail to correctly identify the reference image more than 75% of the trials". However, the standard allows for images that "exhibit particularly strong artefacts" to be excluded from testing, and it also allows the experimenter to deem certain kinds of artefacts acceptable in engineered test images. Research of DSC using the ISO/IEC 29170 interleaved protocol, in which an uncompressed reference image is presented side by side with a rapidly alternating sequence of the compressed test image and uncompressed reference image, and performed with various types of images (such as people, natural and man-made scenery, text, and known challenging imagery) shows that although in many (sometimes most) cases DSC satisfied the standard's criterion for visually lossless performance, in many trials participants were nonetheless able to detect the presence of compression.
DSC compression works on a horizontal line of pixels encoded using groups of three consecutive pixels for native 4:4:4 and simple 4:2:2 formats, or six pixels (three compressed containers) for native 4:2:2 and 4:2:0 formats. If RGB encoding is used, it is first converted to reversible YCC. Simple conversion from 4:2:2 to 4:4:4 can add missing chroma samples by interpolating neighboring pixels. Each luma component is coded separately using three independent substreams (four substreams in native 4:2:2 mode). The prediction step is performed using one of three modes: a modified median adaptive coding (MMAP) algorithm similar to the one used by JPEG-LS, block prediction (optional for decoders due to high computational complexity, negotiated at DSC handshake), and midpoint prediction. The bit rate control algorithm tracks color flatness and buffer fullness to adjust the quantization bit depth for a pixel group in a way that minimizes compression artifacts while staying within the bitrate limits. Recently repeated pixel values can be stored in a 32-entry Indexed Color History (ICH) buffer, which can be referenced directly by each group in a slice; this improves compression quality of computer-generated images. Alternatively, prediction residuals are computed and encoded with an entropy coding algorithm based on delta size unit-variable length coding (DSU-VLC). Encoded pixel groups are then combined into slices of various height and width; common combinations include 100% or 25% picture width, and 8-, 32-, or 108-line height.
A modified version of DSC, VDC-M, is used in DSI-2. It allows for more compression at 6 bit/px at the cost of higher algorithmic complexity.
DSC version 1.0 was released on 10 March 2014, but was soon superseded by DSC version 1.1, released on 1 August 2014. The DSC standard supports up to a 3:1 compression ratio (reducing the data stream to 8 bits per pixel) with constant or variable bit rate, RGB or 4:4:4, 4:2:2, or 4:2:0 color format, and color depth of 6, 8, 10, or 12 bits per color component.
DSC version 1.2 was released on 27 January 2016 and is included in version 1.4 of the DisplayPort standard; DSC version 1.2a was released on 18 January 2017. The update includes native encoding of 4:2:2 and 4:2:0 formats in pixel containers, 14/16 bits per color, and minor modifications to the encoding algorithm.
On 4 January 2017, HDMI 2.1 was announced which supports up to 10K resolution and uses DSC 1.2 for video that is higher than 8K resolution with 4:2:0 chroma subsampling.
eDP
Embedded DisplayPort (eDP) is a display panel interface standard for portable and embedded devices. It defines the signaling interface between graphics cards and integrated displays. The various revisions of eDP are based on existing DisplayPort standards. However, version numbers between the two standards are not interchangeable. For instance, eDP version 1.4 is based on DisplayPort 1.2, while eDP version 1.4a is based on DisplayPort 1.3. In practice, embedded DisplayPort has displaced LVDS as the predominant panel interface in modern laptops.
eDP 1.0 was adopted in December 2008. It included advanced power-saving features such as seamless refresh rate switching.
Version 1.1 was approved in October 2009 followed by version 1.1a in November 2009.
Version 1.2 was approved in May 2010 and includes DisplayPort 1.2 HBR2 data rates, 120Hz sequential color monitors, and a new display panel control protocol that works through the AUX channel.
Version 1.3 was published in February 2011; it includes a new optional Panel Self-Refresh (PSR) feature developed to save system power and further extend battery life in portable PC systems. PSR mode allows the GPU to enter a power saving state in between frame updates by including framebuffer memory in the display panel controller.
Version 1.4 was released in February 2013; it reduces power consumption through partial-frame updates in PSR mode, regional backlight control, lower interface voltages, and additional link rates; the auxiliary channel supports multi-touch panel data to accommodate different form factors. Version 1.4a was published in February 2015; the underlying DisplayPort version was updated to 1.3 in order to support HBR3 data rates, Display Stream Compression 1.1, Segmented Panel Displays, and partial updates for Panel Self-Refresh. Version 1.4b was published in October 2015; its protocol refinements and clarifications are intended to enable adoption of eDP 1.4b in devices by mid-2016. Version 1.5 was published in October 2021; it adds new features and protocols, including enhanced support for Adaptive-Sync, that provide additional power savings and improved gaming and media playback performance.
iDP
Internal DisplayPort (iDP) 1.0 was approved in April 2010. The iDP standard defines an internal link between a digital TV system on a chip controller and the display panel's timing controller. It aims to replace currently used internal FPD-Link lanes with a DisplayPort connection. iDP features a unique physical interface and protocols, which are not directly compatible with DisplayPort and are not applicable to external connections; however, they enable very high resolution and refresh rates while providing simplicity and extensibility. iDP features a non-variable 2.7GHz clock and is nominally rated at 3.24Gbit/s per lane, with up to sixteen lanes in a bank, resulting in a six-fold decrease in wiring requirements over FPD-Link for a 1080p24 signal; other data rates are also possible. iDP was built with simplicity in mind, so it does not have an AUX channel, content protection, or multiple streams; it does however have frame sequential and line interleaved stereo 3D.
PDMI
Portable Digital Media Interface (PDMI) is an interconnection between docking stations/display devices and portable media players, which includes 2-lane DisplayPort v1.1a connection. It has been ratified in February 2010 as ANSI/CEA-2017-A.
wDP
Wireless DisplayPort (wDP) enables the bandwidth and feature set of DisplayPort 1.2 for cable-free applications operating in the 60GHz radio band. It was announced in November 2010 by WiGig Alliance and VESA as a cooperative effort.
SlimPort
SlimPort, a brand of Analogix products, complies with Mobility DisplayPort, also known as MyDP, which is an industry standard for a mobile audio/video Interface, providing connectivity from mobile devices to external displays and HDTVs. SlimPort implements the transmission of video up to 4K-UltraHD and up to eight channels of audio over the micro-USB connector to an external converter accessory or display device. SlimPort products support seamless connectivity to DisplayPort, HDMI and VGA displays. The MyDP standard was released in June 2012, and the first product to use SlimPort was Google's Nexus 4 smartphone. Some LG smartphones in LG G series also adopted SlimPort.
SlimPort is an alternative to Mobile High-Definition Link (MHL).
DisplayID
DisplayID is designed to replace the E-EDID standard. DisplayID features variable-length structures which encompass all existing EDID extensions as well as new extensions for 3D displays and embedded displays.
The latest version 1.3 (announced on 23 September 2013) adds enhanced support for tiled display topologies; it allows better identification of multiple video streams, and reports bezel size and locations. As of December 2013, many current 4K displays use a tiled topology, but lack a standard way to report to the video source which tile is left and which is right. These early 4K displays, for manufacturing reasons, typically use two 1920×2160 panels laminated together and are currently generally treated as multiple-monitor setups. DisplayID 1.3 also allows 8K display discovery, and has applications in stereo 3D, where multiple video streams are used.
DockPort
DockPort, formerly known as Lightning Bolt, is an extension to DisplayPort to include USB 3.0 data as well as power for charging portable devices from attached external displays. Originally developed by AMD and Texas Instruments, it has been announced as a VESA specification in 2014.
USB-C
On 22 September 2014, VESA published the DisplayPort Alternate Mode on USB Type-C Connector Standard, a specification on how to send DisplayPort signals over the newly released USB-C connector. One, two or all four of the differential pairs that USB uses for the SuperSpeed bus can be configured dynamically to be used for DisplayPort lanes. In the first two cases, the connector still can carry a full SuperSpeed signal; in the latter case, at least a non-SuperSpeed signal is available. The DisplayPort AUX channel is also supported over the two sideband signals over the same connection; furthermore, USB Power Delivery according to the newly expanded USB-PD 2.0 specification is possible at the same time. This makes the Type-C connector a strict superset of the use-cases envisioned for DockPort, SlimPort, Mini and Micro DisplayPort.
VirtualLink
VirtualLink is a proposal that allows the power, video, and data required to drive virtual reality headsets to be delivered over a single USB-C cable.
Products
Since its introduction in 2006, DisplayPort has gained popularity within the computer industry and is featured on many graphic cards, displays, and notebook computers. Dell was the first company to introduce a consumer product with a DisplayPort connector, the Dell UltraSharp 3008WFP, which was released in January 2008. Soon after, AMD and Nvidia released products to support the technology. AMD included support in the Radeon HD 3000 series of graphics cards, while Nvidia first introduced support in the GeForce 9 series starting with the GeForce 9600 GT.
Later the same year, Apple introduced several products featuring a Mini DisplayPort. The new connector, proprietary at the time, eventually became part of the DisplayPort standard; however, Apple reserves the right to void the license should the licensee "commence an action for patent infringement against Apple". In 2009, AMD followed suit with their Radeon HD 5000 Series of graphics cards, which featured the Mini DisplayPort on the Eyefinity versions in the series.
Nvidia launched NVS 810 with 8 Mini DisplayPort outputs on a single card on 4 November 2015.
Nvidia revealed the GeForce GTX 1080, the world's first graphics card with DisplayPort 1.4 support, on 6 May 2016. AMD followed with the Radeon RX 480 to support DisplayPort 1.3/1.4 on 29 June 2016. The Radeon RX 400 Series supports DisplayPort 1.3 HBR3 and HDR10, dropping the DVI connector(s) in the reference board design.
In February 2017, VESA and Qualcomm announced that DisplayPort Alt Mode video transport will be integrated into the Snapdragon 835 mobile chipset, which powers smartphones, VR/AR head-mounted displays, IP cameras, tablets and mobile PCs.
Support for DisplayPort Alternate Mode over USB-C
Currently, DisplayPort is the most widely implemented alternate mode, and is used to provide video output on devices that do not have standard-size DisplayPort or HDMI ports, such as smartphones, tablets, and laptops. A USB-C multiport adapter converts the device's native video stream to DisplayPort/HDMI/VGA, allowing it to be displayed on an external display, such as a television set or computer monitor.
Examples of devices that support DisplayPort Alternate Mode over USB-C include: MacBook, Chromebook Pixel, Surface Book 2, Samsung Galaxy Tab S4, iPad Pro (3rd generation), HTC 10/U Ultra/U11/U12+, Huawei Mate 10/20/30, LG V20/V30/V40*/V50, OnePlus 7 and newer, ROG Phone, Samsung Galaxy S8 and newer, Sony Xperia 1/5 etc.
Participating companies
The following companies have participated in preparing the drafts of DisplayPort, eDP, iDP, DDM or DSC standards:
Agilent
Altera
AMD Graphics Product Group
Analogix
Apple
Astrodesign
BenQ
Broadcom Corporation
Chi Mei Optoelectronics
Chrontel
Dell
Display Labs
Foxconn Electronics
FuturePlus Systems
Genesis Microchip
Gigabyte Technology
Hardent
Hewlett-Packard
Hosiden
Hirose Electric Group
Intel
intoPIX
I-PEX
Integrated Device Technology
JAE Electronics
Kawasaki Microelectronics (K-Micro)
Keysight Technologies
Lenovo
LG Display
Luxtera
Molex
NEC
NVIDIA
NXP Semiconductors
Xi3 Corporation
Parade Technologies
Realtek Semiconductor
Samsung
SMK
STMicroelectronics
SyntheSys Research Inc.
Tektronix
Texas Instruments
TLi
Tyco Electronics
ViewSonic
VTM
The following companies have additionally announced their intention to implement DisplayPort, eDP or iDP:
Acer
ASRock
Biostar
Chroma
BlackBerry
Circuit Assembly
DataPro
Eizo
Fujitsu
Hall Research Technologies
ITE Tech.
Matrox Graphics
Micro-Star International
MStar Semiconductor
Novatek Microelectronics Corp.
Palit Microsystems Ltd.
Pioneer Corporation
S3 Graphics
Toshiba
Philips
Quantum Data
Sparkle Computer
Unigraf
Xitrix
See also
HDBaseT
HDMI
List of video connectors
Thunderbolt (interface)
Notes
References
External links
the official site operated by VESA
Digital display connectors
VESA
Computer connectors
Serial buses
|
4051223
|
https://en.wikipedia.org/wiki/Seqlock
|
Seqlock
|
A seqlock (short for sequence lock) is a special locking mechanism used in Linux for supporting fast writes of shared variables between two parallel operating system routines. The semantics stabilized as of version 2.5.59, and they are present in the 2.6.x stable kernel series. The seqlocks were developed by Stephen Hemminger and originally called frlocks, based on earlier work by Andrea Arcangeli. The first implementation was in the x86-64 time code where it was needed to synchronize with user space where it was not possible to use a real lock.
It is a reader–writer consistent mechanism which avoids the problem of writer starvation. A seqlock consists of storage for saving a sequence number in addition to a lock. The lock is to support synchronization between two writers and the counter is for indicating consistency in readers. In addition to updating the shared data, the writer increments the sequence number, both after acquiring the lock and before releasing the lock. Readers read the sequence number before and after reading the shared data. If the sequence number is odd on either occasion, a writer had taken the lock while the data was being read and it may have changed. If the sequence numbers are different, a writer has changed the data while it was being read. In either case readers simply retry (using a loop) until they read the same even sequence number before and after.
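A minimal user-space sketch of this pattern is shown below in C, using C11 atomics and a mutex for writer-versus-writer exclusion. It is an illustration only, not the Linux kernel implementation: a production seqlock needs carefully placed memory barriers, and in the kernel the mechanism is used through helpers such as write_seqlock(), read_seqbegin() and read_seqretry().

/* Simplified user-space illustration of the seqlock pattern described above
 * (not the Linux kernel API). C11 atomics stand in for the kernel's memory
 * barriers; reading the plain data fields while a writer is active is
 * tolerated here because the retry loop discards any torn value. */
#include <pthread.h>
#include <stdatomic.h>

struct seqlock_time {
    atomic_uint     seq;    /* even = stable, odd = write in progress */
    pthread_mutex_t wlock;  /* writer-versus-writer exclusion */
    long long       sec;    /* protected data: a time value */
    long long       nsec;
};

void time_write(struct seqlock_time *t, long long sec, long long nsec)
{
    pthread_mutex_lock(&t->wlock);   /* serialise writers */
    atomic_fetch_add(&t->seq, 1);    /* sequence becomes odd: write begins */
    t->sec  = sec;
    t->nsec = nsec;
    atomic_fetch_add(&t->seq, 1);    /* sequence even again: write complete */
    pthread_mutex_unlock(&t->wlock);
}

void time_read(struct seqlock_time *t, long long *sec, long long *nsec)
{
    unsigned start;
    do {
        start = atomic_load(&t->seq);   /* snapshot the sequence number */
        *sec  = t->sec;                 /* tentatively copy the data */
        *nsec = t->nsec;
    } while ((start & 1) ||                    /* writer was active, or */
             atomic_load(&t->seq) != start);   /* data changed underneath: retry */
}

Note that the mutex is taken only by writers; readers never touch it, which is what gives the pattern its read-side scalability.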
The reader never blocks, but it may have to retry if a write is in progress; this speeds up the readers in the case where the data was not modified, since they do not have to acquire the lock as they would with a traditional read–write lock. Also, writers do not wait for readers, whereas with traditional read–write locks they do, leading to potential resource starvation in a situation where there are a number of readers (because the writer must wait for there to be no readers). Because of these two factors, seqlocks are more efficient than traditional read–write locks for the situation where there are many readers and few writers. The drawback is that if there is too much write activity or the reader is too slow, they might livelock (and the readers may starve).
The technique will not work for data that contains pointers, because any writer could invalidate a pointer that a reader has already followed. In this case, using read-copy-update synchronization is preferred.
The technique was first applied to system time counter updating. Each timer interrupt updates the time of day; there may be many readers of the time for operating system internal use and applications, but writes are relatively infrequent and only occur one at a time. The BSD timecounter code, for instance, appears to use a similar technique.
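As an illustration of that use, the sketch above could publish a time-of-day value as follows. The names timer_tick and get_time are hypothetical, not kernel APIs, and the real kernel code paths are more involved.

/* Sole writer, e.g. called from the periodic timer interrupt. */
void timer_tick(long seconds, long nanoseconds)
{
    seq_write(seconds, nanoseconds);
}

/* Many concurrent readers, e.g. behind a gettimeofday()-style call. */
void get_time(long *seconds, long *nanoseconds)
{
    struct shared_data now = seq_read();   /* retries if a tick races the read */
    *seconds = now.a;
    *nanoseconds = now.b;
}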
One subtle issue of using seqlocks for a time counter is that it is impossible to step through the read path with a debugger: the retry logic triggers every time, because the debugger is slow enough to make the read race occur on every attempt.
See also
Synchronization
Spinlock
References
fast reader/writer lock for gettimeofday 2.5.30
Effective synchronisation on Linux systems
Driver porting: mutual exclusion with seqlocks
Simple seqlock implementation
Improved seqlock algorithm with lock-free readers
Seqlocks and Memory Models (slides)
Concurrency control
Linux kernel
|
2809644
|
https://en.wikipedia.org/wiki/Mulberry%20%28email%20client%29
|
Mulberry (email client)
|
Mulberry is an open-source email client marketed by Cyrusoft from approximately 1995 to 2005. On October 1, 2005, Cyrusoft International, Inc./ISAMET, declared Chapter 7 bankruptcy and went out of business. In August 2006, rights to the source code were acquired by Cyrus Daboo, the original author.
Originally developed for the Apple Macintosh, versions now exist for that platform as well as for Microsoft Windows and Linux using the X Window System. Mulberry's strengths include strict compliance with Internet standards such as IMAP, LDAP, IMSP, ACAP, and iCalendar, a unique GUI for defining Sieve (mail filtering language) scripts, and support for IMAP disconnected operation.
As of August 20, 2006, with version 4.0.5, Mulberry was made available at no cost, but remained proprietary. The most recent version, 4.0.8, was released for Macintosh (as a Universal Binary), Windows, and Linux on February 23, 2007. However, Cyrus Daboo made Mulberry available as open source on all three platforms on November 21, 2007, under the terms of the Apache License 2.0.
External links
Mulberry Guide at the University of Sussex
References
Companies that have filed for Chapter 7 bankruptcy
Formerly proprietary software
Email client software for Linux
Classic Mac OS email clients
MacOS email clients
Windows email clients
1995 software
|
29609545
|
https://en.wikipedia.org/wiki/Norwegian%20Public%20Safety%20Network
|
Norwegian Public Safety Network
|
The Norwegian Public Safety Network (Nødnett, literally Emergency Network) is a public safety network system based on Terrestrial Trunked Radio (TETRA). Nødnett is implemented by the Directorate for Emergency Communication (DNK). The network is primarily used for internal and interdisciplinary communication by the police, fire departments and health services. Nødnett is also used by several organisations participating in rescue and emergency work. Planning of the network started in 1995 and in 2006 the contract to build it was awarded to Nokia Siemens Networks. As Nokia Siemens Networks was unable to complete the contract, it was passed on to Motorola Solutions in 2012. The critical infrastructure of Nødnett was finished and was operational in all districts of mainland Norway by December 1, 2015.
The network replaced nearly 300 local and regional networks which operated independently for the fire, police and healthcare agencies. Nødnett allows functionality such as authentication, encryption and higher reliability.
Background and choice of technology
Prior to the introduction of Nødnett, Norway had three separate systems for telecommunications within the police, fire departments and paramedics, all based on analog radio. The old system had two main downsides: it was not encrypted, and it prevented communication between agencies. This was particularly problematic in larger disasters and accidents, and in instances where criminals listened to the police radio during police actions. The Norwegian Data Inspectorate had also instructed the agencies to encrypt their communications for reasons of privacy. This would either have to be done through an expensive upgrade to the existing systems, or through the construction of a new, digital network.
Another issue was the use of standardized technology for communication with agencies in other countries. Norway is a member of the Schengen Agreement, which requires trans-border communication between law enforcement agencies. There were 27 different networks for the police, one for each police district. In Oslo, Akershus and Østfold, the police had also been using the Enhanced Digital Access Communication System since 1994. There were 230 municipal fire department radio systems, and a manual mobile phone system for the health sector. The health network was built by the county municipalities between 1990 and 1995 and covered all parts of the health service, including paramedics, ambulance services, midwives and medical doctors. The various systems had different levels of coverage. In addition, Global System for Mobile Communication (GSM) and Nordic Mobile Telephone (NMT450) telephones were being used where encrypted communication was necessary.
Keeping the old systems and converting them to encrypted systems was also considered. This was estimated to cost NOK 500 million to install, but could not be guaranteed to work satisfactorily. In particular, encryption would delay communications, which would be a problem for urgent communications. It was also uncertain whether the level of encryption would be sufficient to allow the network to be considered closed and allow personal information to be transmitted.
The government considered a procurement solution similar to that in Denmark, where the spectrum was licensed to private enterprise and the agencies purchased services from private telecommunications companies based on conventional GSM technology. However, in Denmark this had not led to the desired results, with only Metropolitan Copenhagen being covered. Instead, the Norwegian Government chose to establish a government agency to build and operate the network. Use of the GSM and NMT450 networks was insufficient because of lack of capacity in the conventional networks during periods of heavy communication, lack of support for group conversations, lack of priority systems and long dial-up times.
Conventional GSM systems were also rejected because GSM lacks many of the functionalities of TETRA, such as group conversations, dispatcher centers, and direct communication. In addition, Global System for Mobile Communication – Railway (GSM-R) was considered, but rejected because of the lack of trans-border functionality, the need for more base stations and thus higher investment costs, and a longer start-up time for calls. The technology was considered because the Norwegian National Rail Administration was at the time building a GSM-R network to cover the entire Norwegian railway network. Another reason that TETRA was preferred was that at the time of the decision there were five manufacturers of TETRA equipment and only two for GSM-R. TETRA also allows a fall-back mode, in which a base station can relay communication between users within its range even if the central parts of the network break down.
In a parliamentary hearing in 2002, both DNK director Tor Helge Lyngstøl and the Minister of Justice, Odd Einar Dørum, stated that the choice of TETRA would provide sufficient data capacity. In a parliamentary decision in 2004 it was decided to opt for the open European Telecommunications Standards Institute (ETSI) standard for data transmission, which is used by all other police TETRA systems in Europe, but this was later changed by the directorate to the proprietary TETRA Enhanced Data Service (TEDS) owned by Motorola. The latter would limit the number of suppliers and would increase the investment costs.
In 2000, the annual cost of agency communication was NOK 175 million, while this had increased to NOK 260 million in 2004. The increase was largely caused by the increase in use of mobile telephones. The costs of the fire department networks were paid by the municipalities, the health network by the municipalities and the regional health authorities, and the police networks by the respective police districts.
Implementation
Work with the system started in 1995, when the Norwegian Board of Health Supervision took initiative for a new mobile telecommunications platform. The issue was coordinated by the Ministry of Justice, and the issue was first discussed politically in 1997, and in 1998 a project group was created. In 2001, a pilot project was established in Trondheim, which included all three agencies. The trial was successful and terminated in June 2003. Later that year, the Parliament of Norway made the principal decision to establish the network. Quality control of the project was concluded in June 2004, and construction was estimated at NOK 3.6 billion.
The procurement process was initially led by the Ministry of Justice and the Police, in cooperation with the Ministry of Health and Care Services, the National Police Directorate, the Directorate for Health and Social Affairs and the Directorate for Civil Defence and Emergency Planning. The public tender was launched in May 2005, and on 22 December 2006 the contract was signed with Nokia Siemens Networks. The project is the largest single information technology contract ever awarded in Norway. The Directorate for Emergency Communication was established on 1 April 2007.
Original plans called for the system to be built between 2007 and 2011. Implementation was planned in six phases, numbered zero through five. Between phases zero and one, an evaluation of the process was planned.
By June 2007, the project was delayed by half a year. One of the major delays was the development of the software for the health sector's communication centers, which consist of emergency wards, casualty wards, emergency dispatch centers and aircraft coordination centers. The system was developed by Frequentis in Austria, which stated that it did not receive sufficient specifications. In December 2009, the state granted an extra NOK 110 million for development of the system. Health workers were therefore scheduled to take the network into use in May 2010, after the police and fire departments in Follo and Østfold. Representatives of the Police Directorate criticized the implementation model and stated that in most other countries the system was implemented first for the police alone and afterwards taken into use by the fire and paramedic agencies. For instance, Østfold Police District had installed a new center in February 2008, but had to wait 21 months to take it into use while waiting for the public safety network.
The Police Directorate sees the use of the encrypted communication as the system's greatest benefit, and has stated that it sees no reason for the implementation to stop while it is being evaluated, and that there is no alternative to implementing it nationally. The system was first taken into use in Østfold and Follo in December 2009, and by Oslo in March 2010. In Oslo, the police chose to close the analog network down before the TETRA system had been installed in all vehicles, and instead give all officers hand-held devices, to speed up the closing of the old network, which is regarded as a security hazard. Traditionally, journalists have learned about events by listening to the police radios. The police have appointed press officers who will inform the press about newsworthy incidents. The alarm center for the fire departments in Østfold and Follo started using the system in June 2010.
In August 2010, the emergency health communication centers in Østfold and the casualty ward at Fredrikstad Hospital started using the system. This was followed by the emergency rooms in Halden and Aremark, in Rakkestad and Sarpsborg, and in Oslo. For the health sector, phase zero involved 40 communication centers, of which 20 were emergency rooms, 16 casualty wards at hospitals, one air ambulance coordination center and three emergency health communication centers, in addition to radios in the 150 ambulances that serves the region.
The official opening of the network took place on 17 August 2010. In October 2010, Arne Johannesen, the leader of the Norwegian Police Federation, stated that he wanted to place the building of the radio network on hold and instead use the funding for a new information technology system for the police force, named D#2.
DNK carried out tests with the system in 2010 for firefighters using self-contained breathing apparatus in structure fires, and found the system to be sufficient. Similar tests were carried out by Oslo Fire Department later that year, and they found that the radio system was insufficient for their needs. Oslo Fire Department concluded that the DNK tests were only successful because of the use of additional directional gateway/repeater-radio equipment. Because of this, firefighters in Oslo continued to use the old ultra-high frequency radios during indoor fires. Both the Norwegian Police Security Service's bodyguard service and the service for protection of the royal family have opted not to use the new radio system, citing poor coverage indoors and while lying on the ground, even in downtown Oslo. The services have stated that this does not allow for interoperability with other agencies, which is a drawback in case of major incidents. The joint rescue coordination centers, the Norwegian Air Ambulance and the 330 Squadron, which operates Westland Sea King search and rescue helicopters, have also opted out of using the system because of poor coverage. During the 2011 Norway attacks at Utøya, located in northern Buskerud, police officers from surrounding police districts were not able to communicate with local police because the area did not have coverage for the TETRA system.
Organization
Nødnett is owned by the Directorate for Emergency Communication, which is subordinate to the Ministry of Justice. The ministry signed an agreement with Nokia Siemens Networks to install the system. In 2012, Motorola Solutions signed an agreement to take over the project.
The directorate is led by Tor Helge Lyngstøl and has its offices in Nydalen in Oslo.
The cost of constructing the network has been covered by the ministry. The costs of operating and maintaining the network are covered by the users, who also purchase their own terminals. Payment to the directorate is by an annual subscription fee per terminal, based on the terminal's use. For a terminal used only for stand-by, the annual subscription cost is NOK 1,700 (2016), while that for a terminal in a control room is NOK 45,000 (2016). As the cost of running the network is fixed independently of the amount of traffic, there is no charge for using the network. As additional users start implementing the system, the cost per subscriber will be reduced.
Network
The Terrestrial Trunked Radio network has three components: the core net, which is a centralized computer center based on an Internet Protocol structure; the transmission net, which connects the core net, the radio net and other connection points with high-capacity lines; and the radio net, which consists of about 2100 base stations with antennas in masts, on buildings and in some tunnels. The network is controlled by Motorola. If a base station can no longer communicate with the core net, it can still relay communication within its range. Should a base station fail or operations take place in areas without coverage, the terminals can communicate directly with each other.
All communication from mobile terminals to the base stations is encrypted with a key known only to the base station and the terminal. For group conversations, two keys are used: one from the terminal to the base station, and one from the base station to all users. There are also 32 fixed keys used for terminal-to-terminal communication should the base station fail. In addition, the police can use user-to-user encryption, where the communication is encrypted all the way through the network from one user to the other.
Nødnett ensures 100% population coverage and 86% area coverage, which exceeds any of the existing GSM networks. This includes good coverage indoors, to aid fire fighters, as well as full coverage of the coastline and coverage up to 2,500 meters (8,000 ft) height for aircraft. Nødnett gives full coverage along all national and county roads. The system also allows interoperability towards the maritime radio. The network also allows for transmission of data at a speed of 12-13 kbit/s.
Criticism has been raised against several fundamental shortcomings in the network system. The most fundamental is the lack of indoor coverage. This has in part been remedied by increasing the signal strength in urban areas and installing repeaters at, for instance, medical clinics, Oslo Courthouse and Oslo Airport, Gardermoen. Other shortcomings are that the locations of base stations are publicly known, allowing for easy sabotage, and that investment costs increased because of the choice of the proprietary TEDS instead of the open ETSI system.
Terminals
The system has two types of receivers: radio terminals, which can either be hand-held or mounted in vehicles, and desktop equipment for control centers. The system will include 40,000 radios throughout the country. Compared to the analog network, the digital radio equipment is smaller and has options for additional equipment such as hands-free operation, and allows special radios for motorcycles, snowmobiles, boats, undercover activities and smoke diving. Communication can be performed as one-to-one conversations, as group calls for predefined or ad hoc groups (with radios able to be part of several groups), or as walkie-talkies in areas without network coverage. Digital transmission reduces background noise and allows monitoring of terminal identity to prevent unauthorized use. All radios are equipped with an emergency button that gives priority in the network.
Control room terminals will have new functionality including identification of all users and radio terminal positioning, radio and telephone inquiries made on the same equipment, use of either loudspeakers or head sets, and allowing operators to listen to each other's conversations. Operators have access to telephone books and speed dials, touch screen operations of voice and data traffic, monitoring of other talk groups, simultaneous calls to several talk groups and access to voice logs.
References
External links
Directorate for Emergency Communication
Public safety networks
Telecommunications in Norway
Law enforcement in Norway
Medical and health organisations based in Norway
Emergency management in Norway
2009 establishments in Norway
|
33242813
|
https://en.wikipedia.org/wiki/Top%20Model.%20Zosta%C5%84%20modelk%C4%85%20%28season%202%29
|
Top Model. Zostań modelką (season 2)
|
Top Model. Zostań modelką, Cycle 2 (Polish for Top Model. Become a Model) is the second cycle of an ongoing reality documentary based on Tyra Banks' America's Next Top Model that pits contestants from Poland against each other in a variety of competitions to determine who will win the title of the next Polish Top Model and a lucrative modeling contract with NEXT Model Management, as well as an appearance on the cover of the Polish issue of Glamour and a nationwide Max Factor campaign, in hopes of a successful future in the modeling business. The competition was hosted by Polish-born model Joanna Krupa, who served as the lead judge alongside fashion designer Dawid Woliński, journalist Karolina Korwin-Piotrowska and photographer Marcin Tyszka.
The international destinations this cycle were Paris and Mombasa. The winner of the competition was 19-year-old Olga Kaczyńska from Wrocław.
Contestants
(ages stated are at start of contest)
Episodes
Episode 1
Original Air Date: September 7, 2011
The judges begin their search for Poland's Next Top Model, and hold castings across the country.
Episode 2
Original Air Date: September 14, 2011
The search continues as hopefuls make their way into the semi-finals in the hope of becoming Poland's next top model.
Episode 3
Original Air Date: September 21, 2011
The remaining spots for the semi-finals are finally filled, and the top fifty contestants are taken to the Polish countryside. They take part in a lingerie fashion show, and it's goodbye for three of the semi-finalists as their dream comes to an end.
Episode 4
Original Air Date: September 28, 2011
The final round of boot-camp takes place. After an elimination, the remaining semi-finalists are challenged to perform in front of the camera with their fellow models. The girls are divided into four groups with different themes, all having to do with farm life.
Names in bold represent eliminated semi-finalists
After reviewing all the pictures, Joanna and the judges choose the final thirteen models who will compete for the title of Poland's Next Top Model.
Episode 5
Original Air Date: October 5, 2011
Challenge winner: Vera Suprunenko
First call-out: Asia Kudzbalska
Bottom three: Ania Bałon, Gabrysia Pacholarz & Karolina Henning
Eliminated: Karolina Henning
Featured photographer: Robert Wolański
Special guests: Anja Rubik
Episode 6
Original Air Date: October 12, 2011
Challenge winner: Viktoria Driuk
First call-out: Ania Bałon
Bottom three: Asia Kudzbalska, Dorota Trojanowska & Gabrysia Pacholarz
Eliminated: Gabrysia Pacholarz
Featured photographer: Daniel Duniak & Grzegorz Korzeniowski
Special guests: Wayne Sterling
Episode 7
Original Air Date: October 19, 2011
First call-out: None, since there were no call-outs due to Magda being automatically eliminated from the competition for not participating in the photo shoot. In episode 8, however, Olga was told she would have been the first call-out had they been held.
Eliminated: Magda Roman
Featured photographer: Wojtek Wojtczak & Iza Grzybowska
Special guests: Anja Rubik
Episode 8
Original Air Date: October 26, 2011
First call-out: Dorota Trojanowska
Bottom two: Asia Kudzbalska & Natalia Piaskowska
Eliminated: Natalia Piaskowska
Featured photographer: Łukasz Ziętek
Special guests: Magda Gessler, Martyna Wojciechowska, Teresa Rosati, Tomasz Jacyków, Ewa Minge, Robert Kupisz, Gosia Baczyńska
Episode 9
Original Air Date: November 2, 2011
Immune/first call-out: Michalina Manios
Bottom two: Ania Bałon & Vera Suprunenko
Disqualified: Vera Suprunenko
Featured photographer: Emil Biliński
Special guests: Kazimierz Mazur, Asia Jabłyczńska, Tomasz Schimscheiner
Episode 10
Original Air Date: November 9, 2011
First call-out: Asia Kudzbalska
Bottom three: Dorota Trojanowska, Oliwia Downar-Dukowicz & Viktoria Driuk
Eliminated: Viktoria Driuk
Featured photographer: Aldona Karczmarczyk
Special guests: Ania Jurgaś
Episode 11
Original Air Date: November 16, 2011
First call-out: Honorata Wojtkowska
Bottom four: Ania Bałon, Asia Kudzbalska, Dorota Trojanowska & Oliwia Downar-Dukowicz
Eliminated: Dorota Trojanowska & Oliwia Downar-Dukowicz
Featured photographer: Marcin Tyszka
Special guests: Peyman Amin
Episode 12
Original Air Date: November 23, 2011
First call-out: Olga Kaczyńska
Bottom two: Honorata Wojtkowska & Michalina Manios
Eliminated: Honorata Wojtkowska
Featured photographer: Andre Rau
Special guests: Vincent McDoom
Episode 13
Original Air Date: November 30, 2011
First call-out: Olga Kaczyńska
Bottom two: Asia Kudzbalska & Michalina Manios
Eliminated: Asia Kudzbalska
Featured photographer: Guillaume Malheiro
Special guests: Michał Piróg
Episode 14
Original Air Date: December 7, 2011
Final three: Ania Bałon, Michalina Manios & Olga Kaczyńska
Eliminated: Michalina Manios
Final two: Ania Bałon & Olga Kaczyńska
Poland's Next Top Model: Olga Kaczyńska
Special guests: Anja Rubik, Joel Wilkenfeld (President of NEXT NY)
Summaries
Call-out order
The contestant was eliminated
The contestant was immune from elimination
The contestant was disqualified from the competition
The contestant was the original eliminee, but was saved
The contestant won the competition
Episodes 1, 2, 3 and 4 were casting episodes. In episode 4, the pool of semi-finalists was reduced to the final 13 models who moved on to the main competition.
In episodes 5 and 6, the bottom three contestants were in danger of elimination.
In episode 7, Magda was automatically eliminated from the competition at panel for not participating in the photo shoot. Therefore, no call-out was held that episode. In the next episode it was revealed that Olga would have received best photo.
In episode 9, the best-performing contestant from each photo shoot pair was deemed immune at panel. Earlier that week Vera had made a phone call from her cellphone, which was against the rules of the show. She was disqualified from the competition at panel when she landed in the bottom two, automatically saving Ania from elimination.
In episode 11, Ania, Dorota, Asia and Oliwia landed in the bottom four. Dorota was called and was told she was eliminated. Joanna handed the last two photos to Ania and Asia, eliminating Oliwia.
Photo shoot guide
Episode 4 photo shoot: Country girls in groups (semifinals)
Episode 5 photo shoot: Posing with nude male model Marcin Iwo Kosiński
Episode 6 photo shoot: Suspended in the air
Episode 7 photo shoot: Nude fashion editorial
Episode 8 photo shoot: Food kitchen editorial
Episode 9 photo shoot: Film warriors
Episode 10 photo shoot: Portraying model stereotypes
Episode 11 photo shoots: Hollywood icon impersonations; natural & sexy beauty
Episode 12 photo shoot: High fashion in the streets of Paris
Episode 13 photo shoot: Futuristic fashion
Episode 14 photo shoot: Glamour magazine covers & spreads in Kenya
Post Top Model agencies
Olga Kaczyńska: has been signed to D'Vision Models in Warsaw, NEXT Model Management in London, Paris and Milan, Chic Model Management in Sydney, and mc2 Models in Tel Aviv.
Ania Bałon: has been signed to D'Vision Models Classic board in Warsaw.
Asia Kudzbalska: has been signed to D'Vision Models in Warsaw, Elite Model Management in Tel Aviv, and Brave Model Management in Milan.
Honorata Wojtkowska: has been signed to D'Vision Models in Warsaw.
Oliwia Downar-Dukowicz: has been signed to D'Vision Models Classic board in Warsaw.
Viktoria Driuk: has been signed to D'Vision Models Classic board in Warsaw.
Vera Suprunenko: has been signed to D'Vision Models in Warsaw.
Magda Roman: has been signed to D'Vision Models in Warsaw.
Karolina Hennig: has been signed to D'Vision Models Classic board in Warsaw.
Ratings
References
2011 Polish television seasons
|
1546093
|
https://en.wikipedia.org/wiki/UNIX/32V
|
UNIX/32V
|
UNIX/32V was an early version of the Unix operating system from Bell Laboratories, released in June 1979. 32V was a direct port of the Seventh Edition Unix to the DEC VAX architecture.
Overview
Before 32V, Unix had primarily run on DEC PDP-11 computers. The Bell Labs group that developed the operating system was dissatisfied with DEC, so its members refused DEC's offer to buy a VAX when the machine was announced in 1977. They had already begun a Unix port to the Interdata 8/32 instead. DEC then approached a different Bell Labs group in Holmdel, New Jersey, which accepted the offer and started work on what was to become 32V.
Performed by Tom London and John F. Reiser, porting Unix was made possible due to work done between the Sixth and Seventh Editions of the operating system to decouple it from its "native" PDP-11 environment. The 32V team first ported the C compiler (Johnson's pcc), adapting an assembler and loader written for the Interdata 8/32 version of Unix to the VAX. They then ported the April 15, 1978 version of Unix, finding in the process that "[t]he (Bourne) shell [...] required by far the largest conversion effort of any supposedly portable program, for the simple reason that it is not portable."
UNIX/32V was released without paging virtual memory, retaining only the swapping architecture of Seventh Edition. A virtual memory system was added at Berkeley by Bill Joy and Özalp Babaoğlu in order to support Franz Lisp; this was released to other Unix licensees as the Third Berkeley Software Distribution (3BSD) in 1979. Thanks to the popularity of the two systems' successors, 4BSD and UNIX System V, UNIX/32V is an antecedent of nearly all modern Unix systems.
See also
Ancient UNIX
References
Further reading
Marshall Kirk McKusick and George V. Neville-Neil, The Design and Implementation of the FreeBSD Operating System (Boston: Addison-Wesley, 2004), , pp. 4–6.
External links
The Unix Heritage Society, (TUHS) a website dedicated to the preservation and maintenance of historical UNIX systems
Complete distribution of 32V with source code
Source code of the 32V kernel
Installation instructions and download for SimH
A MS Windows program that installs the SIMH emulator and a UNIX/32V image.
Information about running UNIX/32V in SIMH
Bell Labs Unices
Discontinued operating systems
1979 software
|
40189140
|
https://en.wikipedia.org/wiki/Nick%20Levay
|
Nick Levay
|
Nick Levay (1977–2021), also known as Rattle, was an American computer security expert and hacker. He was the Chief Security Officer at the Council on Foreign Relations and at other organizations such as Carbon Black and the Center for American Progress. From 2018 to 2021 he was the president of the NGO-ISAC, an Information Sharing and Analysis Center nonprofit serving US-based non-governmental organizations.
Early career as Rattle
Levay was born in 1977 in New Jersey, and learned at a young age that he had an affinity for hardware and liked to take things apart to see how they worked. When he was four, his parents gave him a toolbox, which he says he immediately used to take apart the clothes dryer. When he was six, his father gave him an IBM PCjr, but he found that programming didn't hold his interest. He preferred things such as radio and remote-controlled cars. When he received an Apple IIc and a 300 baud modem though, he was much more intrigued when he realized that computers could be used to communicate.
Origin of the Rattle name
When he was 12 and was talking to someone on CB Radio, he was asked for his handle, but didn't have an answer. The person on the other end spontaneously dubbed him as Rattle. Levay liked the name and continued to use it, and then when he got involved with computers and needed a pseudonym for BBSes, he kept the same name. He self-identified as a hacker, and also set up his own BBS.
When it came time for college, Levay decided that he wanted to combine his interests in hardware and music, and to study audio engineering. He moved to Nashville where he attended Middle Tennessee State University, receiving a degree in music business and communications management. He continued to connect with other hackers in the area, joining the Nashville chapter of se2600, a splinter group of the national 2600 culture. He and his friends organized an annual convention for hackers and technology enthusiasts in the Nashville area, PhreakNIC. They also became the subjects of a pioneering profile of hacker culture in 1999, "Cyber Pirates" in the Nashville Scene.
Computer security professional
In 2001, Levay and computer security expert Tom Cross co-founded the company Industrial Memetics, building an early social-networking website and blog, MemeStreams. He also began contractor work with various organizations as a computer security consultant. He was the director of global systems engineering for iAsiaWorks, building data centers in southeast Asia.
In 2007 he started as a contractor at the Center for American Progress, where he established monitoring systems and redesigned the organization's network. Over the next few years he was promoted to the Director of Technical Operations and Information Security. He left in 2013 to become Chief Security Officer at Bit9 (later VMware Carbon Black), joining after that company suffered a major data breach.
In 2015 he moved on to become the Chief Security Officer at the Council on Foreign Relations, serving there until 2018, after which he formed the NGO-ISAC, an Information Sharing and Analysis Center for non-governmental organizations. It is a nonprofit focused on facilitating communication among US-based NGOs and nonprofits that are targeted by threat actors, allowing them to share threat information and coordinate activities such as contingency plan exercises.
Public speaking
Active in the computer security convention community, Levay was a frequent participant at conventions such as Def Con, and presented at PhreakNIC, an annual hacker and technology convention held in Nashville. A 2011 talk there was on "Counter Espionage Strategy and Tactics".
References
External links
MemeStreams Site Information
Nick Levay at LinkedIn
1977 births
2021 deaths
People from New Jersey
People from Nashville, Tennessee
Middle Tennessee State University alumni
Chief executives of computer security organizations
|
2849306
|
https://en.wikipedia.org/wiki/Elliott%20Brothers%20%28computer%20company%29
|
Elliott Brothers (computer company)
|
Elliott Brothers (London) Ltd was an early computer company of the 1950s–60s in the United Kingdom. It traced its descent from a firm of instrument makers founded by William Elliott (1780 or 1781-1853) in London around 1804. The research laboratories were originally set up in 1946 at Borehamwood and the first Elliott 152 computer appeared in 1950.
In its day the company was very influential. The computer scientist Bobby Hersom was an employee from 1953 to 1954, and Sir Tony Hoare was an employee there from August 1960 to 1968. He wrote an ALGOL 60 compiler for the Elliott 803. He also worked on an operating system for the new Elliott 503 Mark II computer. The founder of the UK's first software house, Dina St Johnston, had her first programming job there from 1953 to 1958, and John Lansdown pioneered the use of computers as an aid to planning on an Elliott 803 computer in 1963. In 1966 the company established an integrated circuit design and manufacturing facility in Glenrothes, Scotland, followed by a metal–oxide semiconductor (MOS) research laboratory.
In 1967, Elliott Automation was merged into the English Electric company and in 1968 the computer part of the company was taken over by International Computers and Tabulators (ICT).
Origins
William Elliott was born in either 1780 or 1781 and apprenticed to the instrument maker William Blackwell in 1795. In 1804, Elliott began his own company to make drawing instruments, scales, and scientific instruments. In 1850, his two sons Charles and Frederick joined the business. The company prospered, and manufactured a range of surveying, navigational, and other instruments. William Elliott died in 1853. In the 1850s the company began manufacturing electrical instruments, which were used by researchers such as James Clerk Maxwell and others. Charles Elliott retired in 1865, and when Frederick died in 1873 he left the business to his wife Susan.
In 1876, the company expanded to a new factory to manufacture telegraph equipment and instruments for the British Admiralty. There was increased demand for electrical switchboards for the growing electric power industry. Susan Elliott became partners with Willoughby Smith, who had significant expertise in telegraphic instruments; she was the last Elliott family member associated with the company when she died in 1880. Smith in turn brought his sons in to manage the company operations.
In 1893, the instrument making company Theilers joined Elliotts, with W. O. Smith and G. K. E. Elphinstone as managers. Elphinstone had useful connections with the British Navy. He was knighted for his contributions at Elliotts during World War I, with developments in gunnery instruments for the Navy.
In 1898, the company moved out of London to a new site in Kent. One of the main products at this site was naval gunnery tables: mechanical analog computers that continued to be manufactured until after the Second World War. Aircraft instruments became an important product line with the development of heavier-than-air flight; instruments such as tachometers and altimeters were vital in aviation. In 1916, the company changed its name to Elliott Brothers (London), Limited. In 1920, Siemens Brothers started purchasing shares of the company.
The end of Admiralty contracts after the war severely affected Elliott Brothers, which had not been involved in radar and electronics technology during the war. Siemens Brothers had sold their interest in the company, and a new director, Leon Bagrit, was instrumental in rebuilding and redirecting the firm into new areas.
In 1946, John Flavell Coales founded the Research Laboratories of Elliott Brothers at Borehamwood. This laboratory was the site of development of radar systems for the Government, and in 1947 produced a stored-program digital computer. By 1950 the laboratory had a staff of 450, and had developed the commercial Elliott 401 computer. In 1953, Elliott formed an "Aviation Division" at Borehamwood. In 1957, the company changed its name to Elliott Automation Ltd.
By 1966, Elliott Automation had started their own semiconductor factory at Glenrothes, Scotland. The company had about 35,000 employees. In 1967 Elliott Automation was merged into the English Electric company.
Elliott Automation
Elliott Automation (as it had become) merged with English Electric in 1967. The data processing computer part of the company was then taken over by International Computers and Tabulators (ICT) in 1968; this marriage was forced by the British Government, which believed that the UK required a strong national computer company. The combined company was called International Computers Limited (ICL). The real-time computer part of Elliott Automation remained, and was renamed Marconi Elliott Computer Systems Limited in 1969 and GEC Computers Limited in 1972, and remained at the original Borehamwood research laboratories until the late 1990s. The agreement which governed the split of computer technologies between the two companies prohibited ICT from developing real-time computer systems and Elliott Automation from developing data processing computer systems for a few years after the split. The remainder of Elliott Automation, which produced aircraft instruments and control systems, was retained by English Electric.
EASAMS
EASAMS was E A Space and Advanced Military Systems (the EA was never spelled out), based in Frimley, Surrey - first at the nearby Marconi Electronic Systems plant in Chobham Road and later, when it became a limited company, at its headquarters in Lyon Way. It evolved its proprietary EMPRENT, an early program evaluation and review technique (PERT) planning system used in building North Sea oil platforms, and for the BAC TSR-2. Developments for the cancelled TSR-2 were later incorporated into multirole combat aircraft (MRCA), which finally became the Panavia Tornado.
EASAMS senior management was highly conservative, and a number of innovative engineers working on 'private venture' projects such as Hierarchical Object-Oriented Design (HOOD) and Ada language development left to form their own firms. These included Admiral Computing (which later merged with Logica), Systems Designers Ltd (which later merged with Electronic Data Systems (EDS) and later became part of Hewlett-Packard (HP) and Software Sciences (later a part of IBM UK).
EASAMS Ltd was an independent company within General Electric Company plc (GEC), founded in 1962 to provide services in system design, operational research and project management. In the 1990s EASAMS became part of Marconi Electronic Systems before losing its identity.
Computers
The following computer models were produced:
Elliott 152 (1950)
Elliott Nicholas (1952)
Elliott/NRDC 401 (1953) - prototype computer, installed in 1954 at Rothamsted Experimental Station
Elliott 153 (DF computer) (1954)
Elliott/GCHQ OEDIPUS (311) (1954)
TRIDAC (1954) three-dimensional analogue computer system for guided missile research, built for the Royal Aircraft Establishment, Farnborough.
Elliott 402 (1955)
Elliott 403 (WREDAC) (1955)
Elliott 405 (1956) (one donated by Nestlé to The Forest School, Winnersh, and named Nellie)
Elliott 802 (1958–1961) 6 were sold
Elliott 803 (1959) about 250 sold, mainly 803B
803A had 4 or 8 K of 39-bit words of memory and all internal data was held in one 102-bit long serial path.
803B had 4 or 8 K of 39-bit words of memory. The single data path was split into several shorter (48-bit long) serial paths to reduce instruction execution time. A hardware floating point option was available.
Elliott ARCH 1000 (1962)
Elliott 503 (1963) software compatible with 803
Elliott 900 series (1963)
For military customers there were four models of the 900 series: 920A, 920B, 920M and 920C. Only a few of the 920A were produced, as it was rapidly rendered obsolete by the faster 920B. The 920M was a miniaturised version of the 920B. They were discrete transistor machines. The 920C was a later, even faster derivative built using custom integrated circuits. All were shipped in robust "militarized" cases suitable for mounting in vehicles, ships and aircraft.
Civilian customers were sold versions of the 920A, 920B and 920C called Elliott 920A, 903 and 905 respectively. These were shipped in desk sized cabinets suitable for use in an office or laboratory environment.
Versions of the 920B and 920C for industrial automation were sold as Arch 900 and Arch900 respectively. These were shipped in industrial cabinets similar to those used for the civilian systems.
The 903 was a desk-sized machine popular with universities and colleges as a teaching machine, with small research laboratories as a scientific processor and also as a versatile system for use in industrial process control. It was typically equipped with 8 or 16K of core store and was predominantly a paper tape based machine but card readers, line printers, incremental graph plotters and magnetic tape systems were also available. The machine was usually programmed in symbolic assembly code, Algol or Fortran II. The civilian 920C was the 905, also in a desk-sized configuration. Some 905s had fixed head disk systems attached. A Fortran IV system was provided for the 905.
Elliott 502 (1964)
One 502 was used to generate simulated radar signals for training operators of the Linesman/Mediator system.
Elliott 4100 series (1966), a joint development with NCR Corporation, with Elliott selling to the scientific market and NCR selling to the commercial market.
See also
BAE Systems Avionics
References
Further reading
Simon Lavington, Moving Targets: Elliott-Automation and the Dawn of the Computer Age in Britain, 1947–67, Springer, 2011.
Simon Lavington (ed.), Alan Turing and His Contemporaries: Building the World's First Computers, BCS, 2012.
External links
Elliott Computers, Simon Lavington, The Bulletin of the Computer Conservation Society, Number 42, Spring 2008, ISSN 0958-7403
Borehamwood, Staffordshire University
'Nellie' (Elliott 405) Artefact at The ICL Computer Museum
Elliott Photo Gallery, Thinking Machine
Defunct computer hardware companies
Defunct computer companies of the United Kingdom
Former defence companies of the United Kingdom
General Electric Company
International Computers Limited
Instrument-making corporations
1804 establishments in the United Kingdom
History of science and technology in the United Kingdom
Companies established in 1804
|
47732594
|
https://en.wikipedia.org/wiki/National%20Cybersecurity%20FFRDC
|
National Cybersecurity FFRDC
|
The National Cybersecurity FFRDC (NCF) is a federally funded research and development center operated by MITRE Corporation. It supports the U.S. National Institute of Standards and Technology (NIST)'s National Cybersecurity Center of Excellence. NCF is the first and, as of March 2017, only federally funded research and development center dedicated solely to cybersecurity. The National Cybersecurity FFRDC is located at 9700 Great Seneca Hwy in Rockville, Maryland.
The NCF's mission is to increase the cybersecurity of the business community by providing practical guidance, increasing the adoption rate of more secure technologies, and accelerating innovation. It supports the Department of Commerce's goal of protecting the economy.
NCF also fosters public-private collaborations to identify and solve cybersecurity threats. Through NIST's Work for Others Program, non-profits, and federal, state and local agencies can access the cybersecurity technologies and talent available at the NCF.
History
The contract to operate the FFRDC was awarded in September 2014 by NIST to the MITRE Corporation. The press release stated that "FFRDCs operate in the public interest and are required to be free from organizational conflicts of interest as well as bias toward any particular company, technology or product—key attributes given the NCCoE’s collaborative nature…The first three task orders under the contract allowed the NCCoE to expand its efforts in developing use cases and building blocks and provide operations management and facilities planning."
References
National Institute of Standards and Technology
Federally Funded Research and Development Centers
Mitre Corporation
Computer security organizations
|
63145
|
https://en.wikipedia.org/wiki/Acorn%20Archimedes
|
Acorn Archimedes
|
Acorn Archimedes is a family of personal computers designed by Acorn Computers of Cambridge, England. The systems are based on Acorn's own ARM architecture processors and the proprietary operating systems Arthur and RISC OS. The first models were introduced in 1987, and systems in the Archimedes family were sold until the mid-1990s.
ARM's RISC design, a 32-bit CPU (using 26-bit addressing), running at 8 MHz, was stated as achieving 4.5+ MIPS, which provided a significant upgrade from 8-bit home computers, such as Acorn's previous machines. Claims of being the fastest micro in the world and running at 18 MIPS were also made during tests.
Two of the first models—the A305 and A310—were given the BBC branding, with BBC Enterprises regarding the machines as "a continuing part of the original computer literacy project". Dissatisfaction with the branding arrangement was voiced by competitor Research Machines and an industry group led by a Microsoft representative, the British Micro Federation, who advocated the use of "business standard" operating systems such as MS-DOS. Responding to claims that the BBC branding was "unethical" and "damaging", a BBC Enterprises representative claimed that, with regard to the BBC's ongoing computer literacy initiatives, bringing in "something totally new would be irresponsible".
The name "Acorn Archimedes" is commonly used to describe any of Acorn's contemporary designs based on the same architecture. This architecture can be broadly characterised as involving the ARM CPU and the first generation chipset consisting of MEMC (MEMory Controller), VIDC (VIDeo and sound Controller) and IOC (Input Output Controller).
History
Having introduced the BBC Micro in 1981, Acorn had established itself as a major supplier to primary and secondary education in the United Kingdom. Attempts to reproduce the same dominance in other sectors, such as in home computing with the BBC Micro and Acorn Electron, and in other markets, such as the United States and West Germany, had been rather less successful. With microprocessor and computing technology making considerable advances in the early 1980s, microcomputer manufacturers were obliged to consider the evolution of their product lines to provide increasing capabilities and performance. Acorn's strategy for business computing and for introducing more capable machines involved a range of "second processor" expansions, with a Z80 second processor running the CP/M operating system being a product to which Acorn had committed when securing the BBC Micro contract.
Meanwhile, established platforms such as CP/M running on Z80 processors were being challenged by the introduction of the IBM PC running PC DOS and computers running a variety of operating systems on Intel processors such as the 8088 and 8086. Systems using the Motorola 68000 and other processors running the Unix operating system were also becoming available. Drawing on previous work by Xerox, Apple had launched the Lisa and Macintosh computers, and Digital Research had introduced its own GEM graphical user interface software.
Acorn's strategy ostensibly evolved to follow the lead of Torch Computers - the subject of an uncompleted acquisition by Acorn - who had already combined BBC Micro hardware with second processors (and modems) to produce their Communicator product line and the Torch 725. In 1984, Acorn presented the Acorn Business Computer (ABC) range, building around the BBC Micro architecture and offering models with different second processors and capabilities, thus responding to and anticipating the current and future trends in computing at the time. These models were tentatively favourably received by the computing press. However, with Acorn financially overstretched from its different endeavours, the company was rescued by Olivetti in 1985, with the future of the ABC range left uncertain in the anticipated rationalisation exercise that would follow. Ultimately, only one of the variants - the Acorn Cambridge Workstation - would reach the market, and in a somewhat different form to that originally planned.
The demise of the Acorn Business Computer left Acorn purely with a range of 8-bit microcomputer products, leaving the company vulnerable to competitors introducing 16-bit and 32-bit machines. The increasing dominance of MS-DOS in the business market and advocacy for the use of such software in the education sector left Acorn at risk of potential exclusion from its core market. Meanwhile, competing machines attempted to offer a degree of compatibility with the BBC Micro, enticing schools to upgrade to newer, more powerful non-Acorn machines while retaining access to software developed and purchased for Acorn's "aging machine". Acorn's ability to respond convincingly to these competitive threats was evidently constrained: the BBC Model B+ was merely a redesigned BBC Model B (with some heritage in the ABC endeavour) providing some extra memory but costing more than its predecessor, being labelled as a "stop gap" by Acorn User's technical editor, expressing frustration at opportunities not taken for cost reduction and at a general lack of technological innovation in that "Acorn has never shown interest in anything as exciting as the 68000". Disillusionment was sufficient for some software producers to signal a withdrawal from the Acorn market.
Other commentators in response to the B+ suggested that Acorn pursue the second processor strategy more aggressively, leveraging the existing user base of the BBC Micro while those users were still using the machine. In 1986, Acorn introduced the BBC Master series, starting with the Master 128 which re-emphasised second processors in the form of internally fitted "co-processors". Although a modest evolution of the existing 6502-based platform, enthusiasm for the series was somewhat greater than that for the B+ models, with dealers and software developers citing the expansion capabilities and improved compatibility over the B+. However, the competitiveness of these co-processors proved to be constrained by hardware limitations, compatibility and pricing, with a Master 512 system featuring a Master 128 and 80186 co-processor comparing unfavourably to complete IBM PC-compatible systems. The planned Master Scientific product was never launched, leaving potential customers with the existing Cambridge Co-Processor expansion as their only available option.
Attitudes towards Acorn and its technological position changed somewhat in late 1985 as news of its RISC microprocessor development effort emerged, potentially encouraging Olivetti to continue its support for the company at "a critical stage" in its refinancing of Acorn. Subsequent commentary suggested the availability of this microprocessor - the Acorn RISC Machine - in future computers as well as in an evaluation board for the BBC Micro, although such a board - the ARM Evaluation System - would only be announced in mid-1986 at a cost of £4500. Having also developed the additional support chips required to make up a complete microcomputer, Acorn was regarded as having leapt ahead of its nearest competitors.
On the eve of the announcement of Acorn's 32-bit ARM-based microcomputer products, prototypes designated A1 and A500 were demonstrated on the BBC television programme Micro Live exhibiting BASIC language performance ten times faster than a newly introduced 80386-based computer from perennial education sector rival Research Machines, with suggestions made that the machines would carry the BBC branding. Revealingly, Acorn's managing director noted, "Over the past two years we've paid the price of having no 16-bit micro."
A300 and A400 series
The Acorn Archimedes was variously described as "the first RISC machine inexpensive enough for home use", powered by an ARM (Acorn RISC Machine) chip and "the first commercially-available RISC-based microcomputer". The first models were released in June 1987, as the 300 and 400 series. The 400 series included four expansion slots (although a two slot backplane could be added to the 300 series as an official upgrade, and third parties produced their own 4-slot backplanes) and an ST-506 controller for an internal hard drive. Both models included the Arthur operating system (later replaced by RISC OS as a paid-for upgrade), BBC BASIC programming language, and an emulator for Acorn's earlier BBC Micro, and were mounted in two-part cases with a small central unit, monitor on top, and a separate keyboard and three-button mouse (the middle one used for pop-up context menus of the operating system). All models featured eight-channel 8-bit stereo sound and were capable of displaying 256 colours on screen.
Three models were initially released with different amounts of memory: the A305, A310 and A440. The 400 series models were replaced in 1989 by the A410/1, the A420/1 and the A440/1, these featuring an upgraded MEMC1a and RISC OS. Earlier models which shipped with Arthur could be upgraded to RISC OS by replacing the ROM chip containing the operating system. Because the ROM chips contained the operating system, the computer booted instantly into its GUI, in a manner familiar from the Atari ST.
Despite the A310 being limited to 1 MB of RAM officially, several companies made upgrades to 2 MB and 4 MB, with the smaller upgrades augmenting the built-in RAM and the larger upgrades replacing it entirely. The 400 series were officially limited to 4 MB of RAM, but several companies released 8 MB upgrades that provided an extra MEMC chip plus 4 MB of RAM to complement an existing 4 MB of fitted RAM.
A3000
Speculation gathered pace about new machines in the Archimedes range in early 1989, with commentators envisaging a low-cost, cut-down model with 512 KB of RAM to replace the A305 in a fashion reminiscent of the Master Compact. Such speculation also raised questions about the 300 series if a low-cost model were to become available with support for up to 2 MB of RAM, given the limitations of the 300 series to a maximum of 1 MB (at least in terms of upgrade availability at the time), this potentially making the older models look "pretty stupid" according to one commentator. This speculation evolved to more accurately predict a machine with 1 MB of RAM aimed at junior or primary schools, albeit incorrectly predicting a separate disc drive unit.
Concurrently with these rumoured product development efforts, work had commenced on a successor to the Arthur operating system, initially named Arthur 2 but renamed to RISC OS 2 for launch. A number of new machines were introduced along with RISC OS 2, and in May 1989, the 300 series was phased out in favour of the new BBC A3000, with the 400 series being replaced by the improved 400/1 series models. Having been developed in a "remarkably short timescale of nine months", the machine was the "major learning vehicle" for an integrated CAD system introduced at Acorn, and it was reported that the A3000 was the first home microcomputer to use surface mount technology in its construction, with the machine being built at Acorn's longstanding manufacturing partner, AB Electronics.
The A3000 used an 8 MHz ARM2 and was supplied with 1 MB of RAM and RISC OS on 512 KB of ROM. Unlike the previous models, the A3000 came in a single-part case similar to the BBC Micro, Amiga 500 and Atari ST computers, with the keyboard and disc drive integrated into a base unit "slightly smaller than the Master 128". Despite the machine's desktop footprint being larger than that of a simple keyboard, the case was not designed to support a monitor. Acorn offered a monitor stand that attached to the machine, this being bundled with Acorn's Learning Curve package, and PRES announced a monitor plinth and external disc drive case.
The new model sported only a single internal expansion slot, which was physically different from that of the earlier models, although electrically similar. An external connector could interface to existing expansion cards, with an external case for such cards being recommended and anticipated at the machine's launch, and one such solution subsequently being provided by PRES's expansion system. Although only intended to be upgradeable to 2 MB of RAM, third-party vendors offered upgrades to 4 MB, along with expansions offering additional disc drive connections and combinations of user and analogue ports, both of these helping those upgrading from Acorn's 8-bit products, particularly in education, to make use of existing peripherals such as 5.25-inch drives, input devices and data logging equipment. Simtec Electronics even offered a RAM upgrade to 8 MB for the A3000 alongside other models. Hard drive expansions based on ST506, SCSI and IDE technologies were also offered by a range of vendors.
With the "British Broadcasting Corporation Computer System" branding, the "main market" for the A3000 was schools and education authorities, and the educational price of £529 – not considerably more expensive than the BBC Master – was considered to be competitive and persuasive in getting this particular audience to upgrade to Acorn's 32-bit systems. The retail price of £649 plus VAT was considered an "expensive alternative" to the intended competition – the Commodore Amiga and Atari ST – but many times faster than similarly priced models of those ranges. The Amiga 500, it was noted, cost a "not-so-bargain" £550 once upgraded to of RAM.
The relative affordability of the A3000 compared to the first Archimedes machines and the release of RISC OS helped to convince educational software producers of the viability of the platform. Shortly after the A3000's launch, one local education authority had already ordered 500 machines, aiming to introduce the A3000 to its primary schools in addition to other levels of education. Such was the success of the model that it alone had 37 percent of the UK schools market in a nine-month period in 1991 and, by the end of that year, was estimated to represent 15 percent of the 500,000 or more computers installed in the country's schools.
The appeal of the A3000 to education may also have motivated the return of Microvitec to the Acorn market with the Cub3000 monitor: a re-engineered version of the Cub monitor that was popular amongst institutional users of the original BBC Micro. (Having been "nowhere to be seen" when the Archimedes was released, Microvitec had sought to introduce its own Cubpack range of IBM PC-compatible personal computers for the education market offering some BBC BASIC compatibility, building on an estimated 80 percent market share for 14-inch colour monitors in the sector, and aspiring to launch an "interactive video workstation".)
The introduction of the A3000 also saw Acorn regaining a presence in mainstream retail channels, with a deal with high street retailer Dixons to sell the computer at "business centre" outlets, followed by agreements with the John Lewis and Alders chains. Acorn also sought to secure the interest of games publishers, hosting a conference in August 1989 for representatives of "the top 30 software houses, including Ocean, Domark, US Gold, Grand Slam and Electronic Arts".
Marketing efforts towards home users continued in 1990 with the introduction of The Learning Curve: a bundle of A3000 and application software priced at £699 plus VAT, requiring a SCART-capable television, or bundled with a colour monitor and Acorn's monitor stand for £949 plus VAT. The software, having a retail value of around £200, consisted of the second, RISC OS-compliant version of Acorn's First Word Plus, the hypermedia application Genesis, and the PC Emulator software, with an introductory video presented by Fred Harris. Aiming at the "pre-Christmas market" in 1990, another bundle called Jet Set offered a more entertainment-focused collection of software valued at £200, including Clares' Interdictor flight simulator, Domark's Trivial Pursuit, Superior Golf, and the Euclid 3D modelling package from Ace Computing. The price of this bundle was £747.50, which also included a television modulator developed by the bundle's distributor, ZCL, designed for use with "any TV set" and offering a "monitor quality" picture.
A540
The A540, introduced in late 1990, was an anticipated consequence of Acorn's Unix workstation development, offering the same general specification as Acorn's R260 Unix workstation (running RISC iX) but without built-in Ethernet support and running RISC OS 2 instead of Unix. It was Acorn's first machine to be fitted with the ARM3 processor as standard, supporting up to 16 MB of RAM, and included higher speed SCSI and provision for connecting genlock devices. The memory access frequency was raised to 12 MHz in the A540, compared to 8 MHz in earlier models, thus providing enhanced system performance over earlier models upgraded with ARM3 processors. The hardware design featured memory modules, each providing their own memory controller and 4 MB of RAM, and a processor module providing the ARM3 and a slot for a floating point accelerator (FPA) chip, with this modular approach offering the possibility (subsequently unrealised) of processor upgrades. The FPA, replacing Acorn's previous floating point podule, was scheduled to be available in 1991. Much delayed, the FPA finally became available in 1993.
A5000 and A4 laptop
In late 1991, the A5000 was launched to replace the A440/1 machine in the existing product range. With the existing A400/1 series regarded as "a little tired", being largely unchanged from the A400 models introduced four years previously, the A5000 was regarded (by one reviewer, at least) as "the biggest leap forward for Acorn since the introduction of the Archimedes in 1987", introducing a combination of the ARM3 processor and RISC OS 3 for the first time in a new Acorn product, being "the machine the A540 should have been - smaller, neater, with higher capacity drives and all the same speed for about half the cost". The A5000 initially ran RISC OS 3.0, although several bugs were identified, and most machines were subsequently shipped with RISC OS 3.10 or 3.11.
The A5000 featured the new 25 MHz ARM3 processor, 2 or 4 MB of RAM, either a 40 MB or an 80 MB hard drive and a more conventional pizza box-style two-part case (HxWxD: 100 mm × 430 mm × 340 mm). IBM-compatible PCs, with increasingly capable graphics hardware, had by this time not merely matched the capabilities of Acorn's machines but, in offering resolutions of 1024 × 768 in 16 or 256 colours with 24-bit palettes, had surpassed them. The A5000 (along with the earlier A540) supported the SVGA resolution of 800 × 600 in 16 colours, although the observation that "Archimedes machines have simply not kept pace" arguably remained. Earlier models could also attain comparable video performance via third-party upgrades such as the Computer Concepts ColourCard Gold. The A5000 was the first Acorn machine to adopt the 15-pin VGA connector.
It was the first Archimedes to feature a high density capable floppy disc drive as standard. This natively supported various formats including DOS and Atari discs with formatted capacities of 720 KB and 1.44 MB. The native ADFS floppy format had a slightly larger capacity of 800 KB for double density or 1.6 MB for high density. A later version of the A5000 featured a 33 MHz ARM3, 2 or 4 MB of RAM, and an 80 or 160 MB hard drive. Particularly useful in this revised A5000 was the use of a socket for the MEMC1a chip, meaning that memory expansions beyond 4 MB could more easily replace the single MEMC1a, plugging in a card providing the two MEMC1a devices required to support 8 MB. Earlier revisions of the A5000 required desoldering of the fitted MEMC1a to provide such a socket.
In 1992, Acorn introduced the A4 laptop computer featuring a slower 24 MHz version of the ARM3 processor (compared to the 25 MHz ARM3 in the A5000), supporting a 6 MHz power-saving mode, and providing between 2.5 and 4 hours of usage on battery power. The machine featured a 9-inch passive matrix LCD screen capable of displaying a maximum resolution of 640 × 480 in 15 levels of grey, and also provided a monitor port which offered the same display capabilities as an A5000. No colour version of the product was planned. A notable omission from the machine was a built-in pointing device, requiring users to navigate with the cursor keys or attach a conventional Acorn three-button mouse, such as the Logitech mouse bundled with the machine.
The other expansion ports available on the A4 were serial and parallel ports, a PS/2 connector for an external keyboard, a headphone connector, and support for an Econet expansion (as opposed to an Econet port itself). No other provision for expansion was made beyond the fitting of the Econet card and a hard drive. The A4 effectively fitted an A5000 into a portable case - the case itself being identical to Olivetti and Triumph-Adler models - having a motherboard "roughly half the size of a sheet of A4 paper", adding extra hardware for power management and driving the LCD, the latter employing an Acorn-designed controller chip using "time-domain dithering" to produce the different grey levels. Just as the processor could be slowed down to save power, so the 12 MHz RAM could be slowed to 3 MHz, with various subsystems also being switched off as appropriate, and with power saving being activated after "more than a second or so" of user inactivity.
The launch pricing of the A4 set the entry-level model with 2 MB of RAM at £1399 plus VAT, with the higher-level model with 4 MB of RAM and a 60 MB hard drive at £1699 plus VAT. Education pricing was £1099 and £1399 respectively. Acorn foresaw educational establishments taking to the machine where existing models needed to be moved around between classrooms or taken on field trips, although review commentary noted that "the A4 is too expensive for schools to afford in large numbers" and that contemporary Apple and IBM PC-compatible models offered strong competition for business users.
Peripherals for the A4 were eventually produced, with Acorn providing the previously announced Econet card, and with Atomwide providing Ethernet and SCSI adapters utilising the bidirectional parallel port present on the A4 (and also the A5000 and later machines). Atomwide also offered the "Hi-Point" trackball peripheral modified to work as an Acorn-compatible mouse which attached to the side of the unit.
A3010, A3020, A4000
In 1992, several new models were introduced to complement the A3000 and to replace the low-end A400 series models - the A3010, A3020 and A4000 - thus starting a transition from a range of machines of different vintages that still included the A3000 (at the low end) and the A540 (at the high end) to a range that purely featured more recently designed models including the A5000 as the high-end offering and the A4 portable. Launched alongside the Acorn Pocket Book, a distinct product based on the Psion Series 3, the machines supposedly heralded "a changed company, with new direction" and the availability of Acorn products in mainstream high street stores including Dixons, John Lewis and Argos as well as mail order catalogues.
These new models utilised the first ARM system-on-chip - the ARM250 microprocessor - a single-chip design including the functionality of an ARM2 (or ARM3 without cache), the IOC1, VIDC1a and MEMC1a chips all "integrated into a single giant chip" and fabricated using a 1 micron process. The ARM250, running at a higher 12 MHz clock frequency and used in conjunction with faster 80ns memory chips, compared to the 8 MHz of the ARM2 and the 125ns memory of the A3000, gave a potential 50% performance increase over such older systems, achieving a reported 7 MIPS.
Some early units of the A3010 did not actually utilise the ARM250, instead having a "mezzanine" board carrying the four separate devices comprising the complete chipset, with this board plugged into the motherboard in place of the ARM250. An Acorn representative indicated that this solution was pursued to meet retailing deadlines, whereas an ARM representative denied that any "serious delays" had occurred in the development of the ARM250, indicating that the mezzanine board had nevertheless been useful during the design process. Owners did not need to upgrade this board to a genuine ARM250 as it was "functionally identical" to the ARM250. One inadvertent advantage that the mezzanine board conferred was the ability to upgrade the ARM2 on the board to an ARM3, this being a popular upgrade for previous ARM2-based models that was incompatible with the ARM250. However, performing such an upgrade involved modifications to both the "Adelaide" mezzanine board and the ARM3 upgrade board employed in the upgrade. For machines fitted with an actual ARM250 processor, the closest alternative to an ARM3 upgrade in terms of performance enhancement was the Simtec "Turbo RAM" upgrade which provided 4 MB of faster RAM and gave a 40 percent improvement in overall system performance.
The machines were supplied with RISC OS 3.10 or 3.11. The A30x0 series had a one-piece design, similar to the A3000 but slightly shallower, while the A4000 looked like a slightly slimmer A5000. The A3010 model was intended to be a home computing machine, featuring a TV modulator (for use with traditional PAL-standard televisions, SCART televisions already being supported by all of these models) and standard 9-pin joystick ports, while the A3020 targeted the primary and middle school educational markets, featuring an optional built-in 2.5-inch hard drive and a dedicated network interface socket. Meanwhile, the A4000 was aimed at the secondary education and office markets, offering a separate adjustable keyboard to comply with ergonomics regulations deemed applicable in these markets. Technically, the A4000 was almost functionally identical to the A3020, only differing in the supported hard disk size (3.5-inch in the A4000), this being due to the machine's different casing. Despite the resemblance to the A5000, the A4000 along with the other models only provided a single "mini-podule" expansion slot, just as the A3000 did. All three ARM250-based machines could be upgraded to 4 MB with plug-in chips; though the A3010 was designed for 2 MB, third-party upgrades overcame this.
Pricing started at just under £500 including VAT for the Family Solution bundle: an unexpanded A3010 with no monitor (to be used with a television), combined with the EasiWord word processor and one game (initially Quest for Gold). The existing Learning Curve bundle, updated to incorporate the A3010 upgraded to 2 MB of RAM in place of the A3000, included an Acorn colour monitor, the PC Emulator and a suite of Genesis hypermedia applications for a price of £799. The A4000 Home Office bundle combined the A4000 with Acorn colour monitor, Icon Technology's EasiWriter 2 "professional word processor" and Iota's Desktop Database application for a price of around £1175. The retail pricing of the A3010 was notable as making it the cheapest of any Archimedes machine sold. With games consoles gaining popularity, Acorn apparently attempted to target the "games machine plus" market with the A3010 by appealing to "the more knowledgeable, sophisticated and educationally concerned parents", this against a backdrop of established competing products having been heavily discounted: the Amiga A500 having been reduced to £299, for instance. In 1993, Commodore would subsequently offer the entry-level Amiga A600 at a price of only £199, although with Commodore "losing money on a big scale" while Acorn remained profitable, such discounting was not regarded as a threat to the A3010.
The pricing and bundles involving these machines were updated in late 1993, introducing a new Action Pack in place of the Family Solution, featuring the game Zool plus Icon Technology's StartWrite word processor. This bundle effectively reduced the price of the A3010 to £399 including VAT, reportedly making it "the cheapest Risc machine yet". The Learning Curve was revised to feature Acorn's own Advance integrated suite, together with the PC Emulator and DR DOS 6, and the bundle was also made available in conjunction with the A4000. The Home Office bundle was updated with Iota's DataPower replacing Desktop Database, and with Colton Software's PipeDream 4 and Acorn's PC Emulator being added to augment EasiWriter.
A variety of demonstration programs and an audio training tape were also provided with the bundles. At the time of these product revisions, the A3020 had become absent from related promotional material, even material aimed at the educational purchaser, although it remained in Acorn's price list presumably for the interest of institutional purchasers.
Acorn's marketing relationships with high street retailers were somewhat problematic. While outlets such as the John Lewis Partnership proved to be successful marketing partners, electrical retailer Dixons seemingly made relatively little effort to sell Acorn machines despite promising "greater opportunities" in 1993 after earlier criticism. In late 1994, Acorn appointed a sole distributor for the A3010 Action Pack and Learning Curve bundles, with the pricing of the former reduced to only £299. Persisting with the strategy that some purchasers might choose a product positioned between games consoles and traditional PC-compatibles, the distributor, ZCL, aimed to take advantage of the absence of Commodore during the Christmas 1994 season. As the Christmas 1995 season approached, Beebug purchased Acorn's "entire remaining inventory", offering the machine for £135 including VAT together with various "value-added packs".
Production of the A3020 and A4000 ceased in 1995, with remaining stocks to be sold during 1996, due to their lack of conformance with newly introduced European Union electrical and electronics regulations. This left the A7000 as Acorn's entry-level desktop system, and appropriate pricing adjustments were expected, particularly as faster versions of the A7000 were anticipated (and eventually delivered in the form of the A7000+).
Later A-series models
The A7000, despite its name being reminiscent of the Archimedes naming conventions, was actually more similar to the Risc PC, the line of RISC OS computers that succeeded the Archimedes in 1994. It lacked, however, the DEBI expansion slots and multi-slice case that characterised the Risc PC (though, by removing the CD-ROM drive, a backplane with one slot could be fitted).
Software
Arthur operating system
Reminiscent of the BBC Micro upon its release, the earliest Archimedes models were delivered with provisional versions of the Arthur operating system, for which upgrades were apparently issued free of charge, thus avoiding the controversy around early ROM upgrades for the BBC Micro. In early 1988, Arthur 1.2 was delivered in an attempt to fix the deficiencies and problems in the earlier versions of the software. However, even after Arthur 1.2 had been released, a reported 100 documented bugs regarded as "mostly quite obscure" persisted, with Acorn indicating that a "new, enhanced version" of the operating system was under development.
Early applications
Following on from the release of Arthur 1.2, Acorn itself offered a "basic word processor", ArcWriter, intended for "personal correspondence, notices and short articles" and to demonstrate the window, menu and pointer features of the system, employing built-in printer fonts for rapid printed output. The software was issued free of charge for registered users, although Acorn indicated that it would not produce a "definitive" word processor for the platform, in contrast to the BBC Micro where the View word processor had been central to Acorn's office software range. However, Acorn did also announce a port of the 1st Word package, First Word Plus, for the platform. ArcWriter was poorly received, with window repainting issues demonstrated as a particular problem, and with users complaining of "serious bugs". Although the software took advantage of the Arthur desktop environment and used anti-aliased fonts, complaints were made about "blurred and smudged" characters and slow display updates when changing fonts or styles on low-memory machines like the A305. An early competitor, Graphic Writer, was received more favourably but provided its own full-screen user interface. Neither was regarded as competitive with established products on other platforms.
Several software companies immediately promised software for the Archimedes, most notably Computer Concepts, Clares and Minerva, with Advanced Memory Systems, BBC Soft and Logotron being other familiar software publishers. Autodesk, Grafox and GST were newcomers to the Acorn market. However, in early 1988, many software developers were reportedly holding off on releasing software for the Archimedes until the release of a stable operating system, with Acorn offering to lend Arthur 1.2 to developers. Claims had been made of confusion amongst potential purchasers of the machine caused by the lack of available software, with Acorn having pursued a strategy of launching the machine first so that independent software developers might have hardware to work with. In order to make the Archimedes more attractive to certain sectors, Acorn announced a £250,000 investment in educational software and indicated a commitment to business software development. Alongside First Word Plus, the Logistix spreadsheet-based business planning package was also commissioned by Acorn from Grafox Limited as a port to the platform. Autodesk released AutoSketch for the Archimedes in 1988, having launched the product in March the same year. Priced at £79 plus VAT, it offered the precision drawing functionality familiar from AutoCAD but with "none of the frills" that made the latter product professionally suitable for various markets at pricing that could exceed £2500. On the Archimedes, AutoSketch was reported to run at about five times the speed of a "standard PC-compatible machine".
Although Acorn had restricted itself to supporting the use of its View word processor under BBC emulation on the Archimedes, View Professional - the final iteration of the View suite on Acorn's 8-bit computers - had been advertised as a future product in June 1987 for November availability. View Professional, like the View series, had been developed for Acorn by Mark Colton, and a company - Colton Software - delivered the successor to this product as PipeDream for the Cambridge Computer Z88. In mid-1988, Colton Software announced PipeDream for the Archimedes, priced at £114, following on from the announcement of a version for MS-DOS, establishing a long history of product development for the platform, leading to PipeDream 4 in 1992, followed by PipeDream's eventual successor, Fireworkz, in 1994.
Much early software had consisted of titles converted from the BBC Micro, taking advantage of a degree of compatibility between the different series of machines, with Computer Concepts even going as far as to produce a ROM/RAM hardware expansion for use with the company's existing BBC Micro series products, and Acorn also offering such an expansion alongside a BBC-compatible interfacing expansion. Another element of Acorn's early marketing strategy for the Archimedes was to emphasise the PC Emulator product which was a software-based emulator for IBM PC-compatible systems based on the 8088 processor running "legal MS-DOS programs". Alongside this, plans were also made for the launch of a podule (peripheral module) hardware expansion providing its own 80186 processor, a disk controller and connector for a disk drive.
The PC Emulator in its initial form shipped with MS-DOS 3.21 and required a system with 4 MB of RAM to be able to provide the "full" 640 KB of RAM for DOS programs, with early versions of Arthur only providing 384 KB to DOS on 1 MB systems, but with Arthur 1.2 aiming to provide the more usable 512 KB to DOS on such systems. The emulator was described as having "very few compatibility problems" and was reported by diagnostic utilities as providing an 80188-based system, but the performance of the emulated system was regarded as slow. Acorn reportedly acknowledged this by indicating the imminent availability of "an 80186 co-processor". The podule expansion (or "co-processor") was subsequently postponed in early 1988 (and ultimately cancelled), with Acorn indicating that its price of £300 would have been uncompetitive against complete PC systems costing as little as £500, and that the hardware capabilities to be offered, such as the provision of CGA graphics, would be likely to become outdated as the industry moved to support EGA and VGA graphical standards.
Commentators were disappointed with the incoherent user interface provided by the software platform, with "Logistix looking like a PC, First Word slavishly copying GEM" and "101 other 'user interfaces'" amongst the early offerings. The result was the lack of a "personality" for the machine which risked becoming a system that would "never look as easy or as slick as the Mac". Alongside the introduction of visual and behavioural consistency between applications, personal computer user environments had also evolved from running a single application at a time, moved beyond "desk accessories" (or pop-up programs), normalised the practice of switching between applications, and had begun to provide the ability to run different applications at the same time, with the Macintosh having already done so with its MultiFinder enhancement. Computer Concepts, having begun development of various new applications for the Archimedes, was sufficiently frustrated with Arthur and its lack of "true multi-tasking" that it announced a rival operating system, Impulse, intended to host those applications on the machine.
RISC OS
Remedying various criticisms of the early operating environment, Acorn previewed RISC OS (or, more formally, RISC OS 2) in late 1988 and announced availability for April 1989. Internally at Acorn, the realisation had dawned that multitasking had become essential in any mainstream computing environment where "the user is likely to use lots of small applications at once, rather than one large application alone", with other graphical environments such as Hewlett Packard's NewWave and IBM's Presentation Manager being considered as the contemporary competition.
Reactions to the upgraded operating system were positive and even enthusiastic, describing RISC OS as giving software developers "the stable platform they have been waiting for" and "a viable alternative to the PC or Mac", also crediting Acorn for having improved on the original nine-month effort in developing Arthur in the following twelve months leading up to the unveiling of RISC OS. For a modest upgrade cost of £29, users received four ROM chips, three discs including several applications, and documentation.
New facilities in RISC OS included co-operative multitasking, a task manager to monitor tasks and memory, versatile file management, "solid" window manipulation ("the whole window moves - not just the outline"), and adaptive rendering of bitmaps and colours, using dithering where necessary, depending on the nature of the selected screen mode. A common printing framework was introduced, with dot-matrix and PostScript printer drivers supplied, with such drivers available for use by all desktop applications. Amongst the selection of applications and tools included with RISC OS were the Draw graphics editor, featuring vector graphics editing and rudimentary manipulation of text (using the anti-aliased fonts familiar from Arthur) and bitmaps, the Edit text editor, the Paint bitmap editor, and the Maestro music editor.
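The dithering mentioned above trades spatial resolution for apparent colour depth when a screen mode offers fewer colours than an image uses. The sketch below is purely illustrative: it applies a generic ordered (Bayer) dither to reduce a greyscale image to black and white, and does not reflect Acorn's actual colour-mapping code; all names are hypothetical.

```python
# Illustrative ordered (Bayer) dithering: approximate a greyscale image
# using only black and white pixels. A generic technique sketch, not
# RISC OS's actual rendering code.

BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_to_black_and_white(pixels):
    """pixels: rows of 0-255 grey values; returns rows of 0 or 255."""
    output = []
    for y, row in enumerate(pixels):
        out_row = []
        for x, value in enumerate(row):
            # Compare each pixel against a position-dependent threshold
            # taken from the Bayer matrix, scaled to the 0-255 range.
            threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) * (255 / 16)
            out_row.append(255 if value > threshold else 0)
        output.append(out_row)
    return output

if __name__ == "__main__":
    # A small gradient: mid-greys come out as a patterned mix of dots.
    gradient = [[x * 16 for x in range(16)] for _ in range(4)]
    for row in dither_to_black_and_white(gradient):
        print("".join("#" if v else "." for v in row))
```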
With RISC OS available, Acorn launched new and updated applications to take advantage of the improved desktop environment. One of these, deferred until after the launch of RISC OS, was Acorn Desktop Publisher, a port of Timeworks Publisher, which introduced a significant improvement to the anti-aliased font capabilities through a new outline font manager, offering scalable fonts that were anti-aliased on screen but rendered at the appropriate resolution when printed, even on dot-matrix printers. First Word Plus was also updated to support the new RISC OS desktop environment, albeit retaining its own printer drivers, being positioned as complementing Acorn Desktop Publisher whose emphasis was on page layout as opposed to textual document creation.
As part of an effort to grow the company's share of the home market, Acorn introduced a bundle called The Learning Curve, initially featuring the A3000, optional monitor and a set of applications (First Word Plus, the PC Emulator, and Genesis). This bundle was enhanced later in 1990 to attract buyers to the A420/1, adding Acorn Desktop Publisher and some additional Genesis applications. Acorn's document processing applications also began to see broader competition around this time, with Impression from Computer Concepts and Ovation from Beebug also providing competitive solutions for desktop publishing. Also in 1990, PipeDream 3 became the first version of the PipeDream integrated suite, descended from Acorn's View Professional but developed and marketed by Colton Software, to be made available for the RISC OS desktop.
The launch of the A5000 in late 1991 brought a new version of RISC OS to the market: RISC OS 3. This delivered a range of enhancements to the operating system including multitasking filer operations (meaning that file copying, moving and deletion no longer took over the computer), support for reading and writing DOS format discs, the provision of various bundled applications (Alarm, Calc, Chars, Configure, Draw, Edit, Help and Paint), commonly used outline fonts, and software modules in ROM (instead of needing to be loaded from accompanying floppy discs into RAM), the removal or raising of limits on windows and tasks, the ability to "iconise" windows and pin them to the desktop background (or pinboard), desktop session saving and restoring, screen blanking support, and other printing and networking improvements. Providing the bundled applications and other resources in ROM saved an estimated 150 KB of workspace, thus being beneficial to users of 1 MB machines.
The bundled RISC OS 3 applications were enhanced from their RISC OS 2 versions in various general ways, such as the introduction of keyboard shortcuts, but also with new, specific features. The printing system was also updated to support multiple printers at once, but in this first version of RISC OS 3 background printing was still not supported. The "most obviously improved" application was Draw, acquiring new features including multiple levels of "undo" and "redo" operations, rotated text (benefiting from an updated outline font manager), graduated fills, shape interpolation (or in-betweening) support, and built-in support for converting text into paths. Edit gained improved formatting and searching support plus transparent BASIC program editing facilities. One somewhat visually obvious improvement delivered in RISC OS 3 was the use of "3D window borders" or, more accurately, dedicated bitmaps for window furniture, allowing different desktop styling effects. The appearance of the desktop would eventually shift towards Acorn's "NewLook" desktop theme, previewed in late 1993.
In late 1992, RISC OS 3 was itself updated, becoming RISC OS 3.1 (as opposed to the initial RISC OS 3.0 provided with the A5000) and being made available for all existing Archimedes machines, although A300 series and the original A400 series machines needed a hardware modification to be able to accept the larger 2 MB ROMs, employing a special daughterboard. Various bugs in RISC OS 3.0 were fixed and various other improvements made, making it a worthwhile upgrade for A5000 users. Notably, support for background printing was introduced. VAT-inclusive introductory pricing for the upgrade was £19 for RISC OS 3.0 users and £49 for RISC OS 2.0 users, with the upgrade package including ROMs, support discs and manuals. The non-introductory price of the upgrade was stated as being £89.
The limitations of RISC OS became steadily more apparent, particularly with the appearance of the Risc PC and the demands made on applications taking advantage of its improved hardware capabilities (although merely highlighting issues that were always present), and when contrasted with the gradually evolving Windows and Macintosh System software, these competitors offering or promising new features and usability improvements over their predecessors. Two fundamental deficiencies perceived with RISC OS were a lack of virtual memory support, this permitting larger volumes of data to be handled by using hard disc storage as "slow, auxiliary RAM" (attempted by application-level solutions in certain cases), and the use of cooperative multitasking as opposed to preemptive multitasking to allow multiple applications to run at the same time, with the former relying on applications functioning correctly and considerately, and with the latter putting the system in control of allocating time to applications and thus preventing faulty or inconsiderate applications from hanging or dominating the system. Problems with the storage management and filing systems were also identified. In 1994, the FileCore functionality in RISC OS was still limited to accessing 512 MB of any hard drive, with this being barely larger than the largest supplied Risc PC hard drive at the time. Filing system limitations were also increasingly archaic: 77 files per directory and 10-character filenames, in contrast to more generous constraints imposed by the then-"imminent" Windows 95 and then-current Macintosh System 7 release.
Although an update to the FileCore functionality was delivered in 1995, initially to members of Acorn's enthusiast community, providing support for larger storage partitions (raising the limit to 128 GB), other improvements, such as those providing support for the use of longer filenames, were still only provided by third parties. With adverse financial results and a restructuring of the company in late 1995, Acorn appeared to be considering a more responsive strategy towards customer demands, potentially offering rebadged PC and Mac products alongside Acorn's existing computers, while cultivating a relationship with IBM whose PowerPC-based server hardware had already been featured in Acorn's SchoolServer product running Windows NT. In the context of such a relationship, the possibility was raised of "bolting a RISC OS 'personality' on top of a low level IBM-developed operating system" to address RISC OS's deficiencies and to support virtual memory and long filenames. At this time, IBM was pursuing its Workplace OS strategy which emphasised a common operating system foundation supporting different system personalities.
PC emulation
In mid-1991, the PC Emulator was eventually updated to work as a multitasking application on the RISC OS desktop, requiring 2 MB of RAM to do so, and supporting access to DOS files from the RISC OS desktop filer interface. The emulator itself permitted access to CD-ROM devices and ran MS-DOS 3.3 with a special mouse driver to permit the host machine's mouse to behave like a Microsoft bus mouse. CGA, EGA, MDA and partial VGA graphics support was implemented, and the emulated system could run Windows 3. The product cost £99, with an upgrade costing £29 for users of previous versions. Although technically compatible with 1 MB systems, 2 MB of RAM was considered necessary for multitasking operation, which offered facilities to capture the emulated display as a bitmap or as text, and 4 MB was recommended to take full advantage of such features, along with a high-resolution multiscan monitor and VIDC enhancer to be able to display most of the emulated screen without needing to scroll its contents. An ARM3 processor was considered essential for "a workable turn of speed", this giving performance comparable with a 4.77 MHz 8086 PC-XT system.
Regarded as a "programming wonder", the PC Emulator was nevertheless regarded as being "too slow for intensive PC use". Shortly after the introduction of the updated PC Emulator, a hardware PC compatibility solution was announced by Aleph One, offering a 20 MHz 80386SX processor and VGA display capability, effectively delivering Acorn's envisaged PC podule in updated form. A low-cost alternative to Acorn's PC Emulator called FasterPC became available in 1993, priced at around £20 but with DOS not included (to be provided by the user at an estimated additional cost of £50). The software provided PC emulation outside the desktop environment, with considerable performance benefits claimed relative to Acorn's product. Regarded as being "considerably faster than the Acorn emulator when displaying graphics", with a two-times speed improvement observed for various tested programs, the product was considered appropriate for gaming, albeit at lower than VGA resolution. It was also unable to run Windows 3.1: the Aleph One PC expansion cards being the only solutions able to do so at that time.
Bitmap image editing
With considerably improved graphical capabilities compared to those of Acorn's 8-bit machines, the Archimedes saw a number of art packages released to exploit this particular area of opportunity, albeit rather cautiously at first. One of the first available packages, Clares' Artisan, supported image editing at the high resolution of 640 × 256 but only in the 16-colour mode 12, despite the availability of the 256-colour mode 15 as standard. Favourably received as being "streets ahead" of art software on the BBC Micro, it was considered as barely the start of any real exploitation of the machine's potential. Typical of software of the era, only months after the launch of the machine, Artisan provided its own graphical interface and, continuing the tradition of BBC Micro software, took over the machine entirely even to the point of editing the machine configuration and restoring it upon exiting. Clares released a successor, Artisan 2, two years later to provide compatibility with RISC OS, replacing special-purpose printer support with use of the system's printer drivers, but not making the software a desktop application. The program's user interface deficiencies were regarded as less forgivable with the availability of a common desktop interface that would have addressed such problems and made the program "easier to use and a more powerful program as a result".
Clares also produced a 256-colour package called ProArtisan, also with its own special user interface (despite the impending arrival of RISC OS), costing considerably more than its predecessor (£170, compared to £40 for Artisan), offering a wider range of tools than Artisan including sprays, washes and path editing (using Bézier curves) to define areas of the canvas. Although regarded as powerful, the pricing was considered rather high from the perspective of those more familiar with the 8-bit software market, and the user interface was regarded as "only just bearable". Competitors to ProArtisan during 1989 included Art Nouveau from Computer Assisted Learning and Atelier from Minerva. Both of these programs, like ProArtisan, ran in full-screen mode outside the desktop, used the 256-colour mode 15, and offered their own interfaces. Atelier, however, was able to multi-task, providing the ability to switch back to the desktop and find applications still running and accessible. Unlike other contemporary art programs, it also took advantage of the system's own anti-aliased fonts. One unusual feature was the ability to wrap areas of the canvas around solid objects. Both programs also offered similar path editing facilities to ProArtisan, with it being noted that Art Nouveau's limitations in this regard might be remedied by using the support already present in RISC OS and provided by the Draw application functionality, as ProArtisan 2 eventually demonstrated.
In 1989, RISC OS was provided with the Paint application on one of the accompanying application discs. It featured a multi-document, desktop-based interface with a range of elementary painting and drawing tools, also allowing images to be created in arbitrary sizes for any of the display modes, even permitting editing of images in display modes with different numbers of colours, albeit with limitations in the representation of image colours when the desktop mode had fewer colours available. Along with its companion applications, Paint supported the system's anti-aliased fonts and printer driver framework, and by embracing the system's user interface conventions, images could be exported directly to applications such as Draw by dragging an image's file icon from the save dialogue directly to the target application.
Despite a trend of gradual adoption of desktop functionality, in 1990, Arcol from ExpLAN offered a single-tasking, full-screen, 256-colour editing experience using the lower resolution mode 13, supporting only bitmap fonts. Aimed at educational users, its strengths apparently included real-time transformation of canvas areas, rapid zooming, and the absence of limitations on tools when zooming: arguably demonstrating more a limitation of contemporary packages with their own peculiar interfaces. ExpLAN subsequently released Arcol Desktop, although the "desktop" label only indicated that the program would multi-task with desktop applications and offer some desktop functionality, particularly for the loading and saving of images: the program still employed a special full-screen user interface, albeit allowing other 256-colour modes to be used, with the mode of the original being the default. With expectations having evolved with regard to user interfaces and desktop compatibility, this updated product was judged less favourably, with the partitioning of functionality between the desktop and painting interface being "awkward" and the behavioural differences "confusing", leaving the product looking "rather dated" when compared to its modern contemporaries.
In early 1991, in the context of remarks that, at that point in time, the Paint application bundled with RISC OS was "the only true Risc OS art program" operating in the desktop and not restricting users to specific display modes, Longman Logotron released Revelation, an application running in the desktop environment, providing interoperability with other applications through support for the platform's standard Sprite and Drawfile formats, with vector graphics import being provided by a companion tool, and utilising the system's printing framework. Apart from observations of limited functionality in some areas, one significant limitation reminiscent of earlier products was the inability to change display mode without affecting the picture being edited. This limitation was not convincingly removed in the second version, sold as Revelation 2 around a year later, with colours being redefined when selecting a 16-colour display mode while editing a 256-colour image, preserving the inability to edit 256-colour images in 16-colour modes. A further version update was delivered as the Revelation ImagePro product, being considered "the best art package that I have used on the Archimedes" by Acorn User's graphics columnist in late 1992.
In response to the evolving competitive situation and market expectations, Clares released ProArtisan 2, a successor to its earlier product, in late 1993 as "a completely new program" with some familiar features from the company's earlier products but offering display mode independence, 24-bit colour support (including support for ColourCard and G8/G16 graphics cards), multi-document editing, and desktop compliance. The path editing tools familiar from its predecessor were supported using functionality from Acorn's Draw application, and the image enhancement capabilities had also "undergone a major revamp". At a reduced price of £135, and with use of the RISC OS desktop contributing to overall ease of use, the package was considered by one reviewer as "the best art package around at the moment for the Archimedes".
Late in the Archimedes era, this being prior to the release of the Risc PC, a consensus amongst some reviewers formed in recommending Revelation ImagePro and ProArtisan 2 as the most capable bitmap-based art packages on the platform, with Arcol Desktop and First Paint also being reviewer favourites. With the release of the Risc PC and A7000, offering improved hardware capabilities and built-in support for 24-bit colour, the art package market changed significantly. New packages supplanted older ones as recommendations, some from new entrants within the broader Acorn market (Spacetech's Photodesk, Pineapple Software's Studio24), others coming from established vendors (Clares' ProArt24 and Longman Logotron's The Big Picture), and still others from beyond the Acorn market (Digital Arts' Picture). However, the platform and hardware requirements of such packages were generally beyond Archimedes era machines, demanding 8 MB of RAM or 24-bit colour display modes (using 2 MB of dedicated video RAM) in some cases. A notable exception was Studio24 which, having been significantly updated in its second version, was reportedly "completely compatible" with the earlier machines.
Vector image editing
RISC OS was supplied with the Draw application, offering a range of tools for creating diagrams and pictures using vector graphics primitives, also permitting the incorporation of bitmap images and text into documents, and managing the different elements of documents as a hierarchy of objects. A significant capability provided by the application (and exploited by art packages) was that of Bézier curve editing, allowing shapes with smooth curves to be created, rendered and printed.
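A cubic Bézier curve of the kind edited in Draw is defined by two end points and two control points. The sketch below evaluates and flattens such a curve using the standard polynomial form; it is a generic illustration of the mathematics rather than Acorn's rendering code, and all names are hypothetical.

```python
# Illustrative evaluation of a cubic Bezier curve, the primitive used for
# smooth shapes in vector drawing programs. Generic mathematics only.

def cubic_bezier(p0, p1, p2, p3, t):
    """Return the point at parameter t (0..1) on the cubic Bezier curve
    defined by end points p0, p3 and control points p1, p2."""
    u = 1.0 - t
    x = (u**3 * p0[0] + 3 * u**2 * t * p1[0]
         + 3 * u * t**2 * p2[0] + t**3 * p3[0])
    y = (u**3 * p0[1] + 3 * u**2 * t * p1[1]
         + 3 * u * t**2 * p2[1] + t**3 * p3[1])
    return (x, y)

def flatten(p0, p1, p2, p3, steps=16):
    """Approximate the curve as a polyline, much as a renderer might
    before drawing or printing it at a given resolution."""
    return [cubic_bezier(p0, p1, p2, p3, i / steps) for i in range(steps + 1)]

if __name__ == "__main__":
    for point in flatten((0, 0), (0, 100), (100, 100), (100, 0), steps=8):
        print("%.1f, %.1f" % point)
```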
The file format used by Draw was documented and extensible, and a range of tools emerged to manipulate Draw files for such purposes as distorting or transforming images or objects within images. Amongst them was the Draw+ (or DrawPlus) application which defined other object types and also added other editing features, such as support for multiple levels or layers in documents. DrawPlus became available in 1991 and was released "at nominal cost" via public domain and shareware channels. The author of DrawPlus, Jonathan Marten, subsequently developed an application called Vector, released by an educational software publisher, 4Mation, in early 1992. Described as "effectively an enhanced Draw", the program improved on Draw's text handling by allowing editing of imported text, continued DrawPlus's support for layers and object libraries, provided efficient handling of replicated or repeated objects, and introduced masks that acted as "windows" onto other objects. Priced at £100, even for site-wide usage, the software was considered "ideal... for technical drawing, to graphic design and even limited desktop publishing". A version of Draw was also developed for Microsoft Windows by Oak Solutions.
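As an indication of how third-party tools could work with the format, the sketch below walks the top-level structure of a Drawfile as commonly documented: a "Draw" signature, two version words, a creator field, a file bounding box, and then a sequence of objects each introduced by a type word and a length word. The field offsets and object names follow published descriptions but should be treated as illustrative rather than authoritative, and the file name is hypothetical.

```python
# Illustrative walk over the top-level structure of an Acorn Drawfile.
# Sketch only; real Drawfiles contain further per-object data that is
# not interpreted here.

import struct

# A few well-known object type numbers, listed for illustration.
OBJECT_NAMES = {0: "font table", 1: "text", 2: "path",
                5: "sprite", 6: "group"}

def list_objects(filename):
    with open(filename, "rb") as f:
        data = f.read()
    if data[:4] != b"Draw":
        raise ValueError("not a Drawfile")
    major, minor = struct.unpack_from("<ii", data, 4)
    creator = data[12:24].decode("latin-1").rstrip()
    print("Drawfile version %d.%d created by %r" % (major, minor, creator))
    offset = 40                      # header: signature, versions, creator, bounding box
    while offset + 8 <= len(data):
        obj_type, obj_size = struct.unpack_from("<ii", data, offset)
        if obj_size <= 0:
            break                    # malformed; stop rather than loop forever
        name = OBJECT_NAMES.get(obj_type, "type %d" % obj_type)
        print("object at %6d: %-10s (%d bytes)" % (offset, name, obj_size))
        offset += obj_size           # object length includes its own header

if __name__ == "__main__":
    list_objects("example.drawfile")  # hypothetical file name
```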
A significant introduction to the Archimedes' software portfolio came with the release of ArtWorks by Computer Concepts in late 1992. Described in one preview as "perhaps the easiest to use, but most advanced graphic illustration package, on any personal computer today", ArtWorks provided an object-based editing paradigm reminiscent of Draw, refining the user interface, and augmenting the basic functionality with additional tools. A notable improvement over Draw was the introduction of graduated fills, permitting smooth gradients of colour within shapes, employing dithering to simulate a larger colour palette. The image rendering engine was also a distinguishing feature, offering different levels of rendering detail, with the highest level introducing anti-aliasing for individual lines. Aimed at professional use, and complementing its sibling product, the Impression desktop publishing application, 24-bit colour depths and different colour models were supported. A key selling point of the package was its rendering speed, with it being reported that redraw speeds were up to five times faster in ArtWorks on an ARM3-based machine than those experienced with CorelDRAW running on a 486-based IBM PC-compatible system. ArtWorks would have broader significance as predecessor to the Xara Studio application and subsequent Windows-based products.
Document processing and productivity
Although document processing and productivity or office software applications were addressed by a few packages released in the Arthur era of the Archimedes, bringing titles such as First Word Plus, Logistix, and PipeDream, it was not until the availability of RISC OS that more compelling software was delivered for the platform, with Acorn even delaying its own Desktop Publisher to take advantage of this substantial upgrade to the operating system.
Alongside Acorn Desktop Publisher, Computer Concepts' "document processor" Impression and Beebug's Ovation provided a small selection of solutions in the realm of desktop publishing. Acorn pursued the publishing industry with software and hardware system bundles, with Impression typically featuring prominently, even in the era of the Archimedes' successor, the Risc PC. Ovation was eventually succeeded by Ovation Pro in 1996, offering stronger competition to Impression Publisher - itself the professional package in the range that had developed from Impression - and to industry-favoured applications such as QuarkXPress.
Amongst a variety of word processor applications, one enduring product family for the platform was developed by Icon Technology who had already released a word processor, MacAuthor, for the Apple Macintosh. This existing product was ported to RISC OS and released as EasiWriter in 1991, fully supporting the outline fonts and printing architecture of the host system. Icon followed EasiWriter with an enhanced version ("EasiWriter's big brother") in 1992, TechWriter, featuring mathematical formula editing. Both products were upgraded to provide mail-merge capabilities - a noted deficiency of the first release of EasiWriter - and both provided convenient table editing, with TechWriter also offering automatic footnote handling, being promoted as "a complete package for producing academic and technical documents". Upgraded "Professional" editions of EasiWriter and TechWriter were released in 1995, with the latter adding the notable feature of being able to save documents in TeX format.
Given the platform's presence in education, various educational word processing and publishing applications were available. Longman Logotron supplied a "cost-effective introduction to DTP" in the form of FirstPage, retailing at £49 plus VAT with "unlimited" educational site licences costing up to £190. Targeting machines with only 1 MB of RAM, various traditional word processing features such as a spelling checker and integrated help were omitted, but as a frame-based document processor it was considered "excellent value for money" when compared to the pricing and capabilities of some of its competitors, even appealing to the home market. Similarly, Softease's "object-based" document processor, Textease, also had potential appeal beyond the educational market, freeing the user from having to design page layouts using frames, instead permitting them to click and type at the desired position or to drag and drop graphical objects directly into the page, providing a user interface paradigm reminiscent of the Draw application provided with RISC OS. Document layout capabilities were nevertheless available, supporting multiple column layouts, as were the traditional features such as spellchecking and integrated help absent from FirstPage. Pricing was even more competitive at around £30, or £40 with spellchecking support.
Aside from PipeDream, its hybrid word processor and spreadsheet application released in versions 3 and 4 for the RISC OS desktop environment, Colton Software released a standalone word processor, Wordz, in 1993, with plans for companion applications and a degree of integration between them. The first of these companion applications was Resultz, and the two applications were combined to make Fireworkz, itself incorporating the editing capabilities of both applications within a single interface, offering the ability to combine textual and spreadsheet data on the same page within documents. Colton subsequently expanded the family in 1995 with the Recordz database product, combining it with the existing Fireworkz functionality to make the Fireworkz Pro product, this bringing it into direct competition with Acorn's Advance and Minerva's Desktop Office suites, but ostensibly offering a much deeper level of integration than those competitors. PipeDream itself was later updated to version 4.5, conforming more closely to the RISC OS look and feel, being initially offered as an upgrade for users of version 4.0.
Acorn's own interest in developing applications led it to initiate work on the Schema spreadsheet application, only to disengage from application development and to transfer the product to Clares who, with assistance from the originally commissioned developers, brought the product to market. Despite its origins as one component in an application suite that was never delivered as envisaged, a cut-down version of Schema 2 was later incorporated into Acorn's Advance application suite alongside variants of Computer Concepts' Impression Junior and Iota Software's DataPower. Schema 2 itself was enhanced with a "powerful macro language" and released in 1994.
In the spreadsheet category, Longman Logotron's Eureka, released in 1992, provided robust competition to Schema and PipeDream, seeking to emulate Microsoft Excel in terms of functionality and user interface conventions. The interoperability benefits of the updated product, Eureka 2, were later given as a reason for Acorn to adopt the software internally, acquiring a 300-user site licence and thus allowing its employees to convert "substantial spreadsheet data which needed converting from Lotus 1-2-3". Updated again as Eureka 3, with new features remedying "what was badly missing in the earlier version", but with the manual regarded as inadequate and with online help still absent from the application, the application was nevertheless regarded as the most powerful of the platform's principal spreadsheet offerings, attempting to be "the Excel of the Acorn world".
A number of database applications were made available for the Archimedes, with Minerva Software following up from early applications on the system in early 1990 with the RISC OS desktop-compliant Multistore: a relational database with a graphical "record card" interface and report generation functionality. A broadly similar approach, albeit without any claimed "relational" capabilities, was offered by Digital Services' Squirrel database manager software, emphasising customisation of the presentation of data and reporting, but also introducing a flowchart-based method of querying, this feature causing one reviewer to regard the product as "the most innovative database manager on the Archimedes" with its usability being comparable to FileMaker on the Apple Macintosh.
Aimed at the education market, with a focus more on "computerised data handling" than data management, Longman Logotron's PinPoint framed the structuring and retention of data around a questionnaire format, with a form editor offering "DTP-style facilities", and with data entry performed interactively via the on-screen questionnaire. Some analysis and graphing capabilities were also provided. A version of PinPoint would eventually be made available for Windows, ostensibly aimed at market research as opposed to education, as its producer attempted to broaden its audience and availability for different platforms. Also emphasising a desktop publishing style of presentation was Iota Software's DataPower, employing these facilities to customise record entry to "make data collection as much like form-filling as possible" and in the reporting functionality of the software.
In 1993, Longman Logotron introduced S-Base, a programmable database offering the possibility of customised database application development. Described as "a more disciplined, less graphical approach to database design", the software enforced a degree of discipline around data type and table definition, but it also retained various graphical techniques to design forms for interaction with the database. Building on such foundations, programs could be written in a language called S to handle user interaction, graphical user interface events, and to interact with data in the database. Being compared to the contemporary DOS-based Paradox software, it was regarded as having more of an emphasis on "database applications" than actual databases, also being considered as similar to the contemporary RISC OS application, Archway, as a kind of "application generator" tool.
DataPower, S-Base and Squirrel were all subsequently upgraded, S-Base 2 being enhanced with features to simplify the setting up of applications and consequently being regarded as "without doubt the most powerful database management system available for the Archimedes" due to its programmable nature, Squirrel 2 gaining relational capabilities and being recommended for its "amazing flexibility" and for its searching and sorting functionality, with DataPower being recommended more for "the majority of users" for its usability and "attractive graphs and reports".
Despite spreadsheet and database applications offering graphing capabilities, dedicated applications were also available to produce a wider range of graphs and charts. Amongst these were Chartwell from Risc Developments and the Graphbox and Graphbox Professional packages from Minerva Software. Arriving somewhat later than these packages, being released by Clares in January 1994, Plot also sought to cater for mathematical and educational users by offering support for function plotting, this having been largely ignored by the existing packages which tended "to be based on producing bar and pie charts from tables of figures".
Full-motion video
With the introduction of CD-ROM and the broader adoption of multimedia, Acorn announced a full-motion video system called Acorn Replay in early 1992, supporting simultaneous audio and video at up to 25 frames per second in the RISC OS desktop or in "a low resolution full screen mode". Unlike certain other full-motion video technologies, Replay offered the ability to read compressed video data from mass storage in real time and to maintain a constant frame rate, all on standard computing hardware without the need for dedicated video decoding hardware. The compression techniques employed by Replay reportedly offered "compression factors of between 25 and 40" on the source video data, with the software decompression requiring a computer with 2 MB of RAM or more.
Given a slower access medium such as CD-ROM or floppy disk, video could be played back at up to 12.5 frames per second, with up to 25 frames per second from a hard disk. One 800 KB floppy disk could reportedly hold 12 seconds of video. In the introductory phase of the technology, support for Replay files was quickly introduced into hypermedia applications such as Genesis and Magpie, with software developers being the primary audience for the creation of content, largely due to the expense of the equipment required to capture and store large volumes of video data. Software developers would engage the services of a suitably equipped company to convert source material to digital form, with the Replay software then used to process the video frame by frame, employing image compression techniques and "a form of Delta compression", ultimately producing a movie file.
Acorn's introduction of Replay prompted comparisons with Apple's QuickTime system which was already broadly available to users of Macintosh systems. Replay's advantages included the efficiency of the solution on existing hardware, with even an entry-level A3000 upgraded to 2 MB of RAM being able to handle 2 MB of data per second to achieve the advertised 12.5 frames per second playback. In contrast, a Macintosh system with 2 MB of RAM was reportedly unable to sustain smooth video playback, although audio playback was unaffected by the dropped video frames, whereas a 4 MB system could achieve 15 frames per second from a CD-ROM drive, although such a system was more expensive than Acorn's ARM3-based systems that could more readily achieve higher frame rates. QuickTime was also reported as only able to play video smoothly at 1/16th of the size of the screen, also favouring 32,000 colour display modes that were available on Macintosh systems with 68020 or faster processors. One disadvantage of Replay on the Acorn systems was the limitation of playback to 256 colours imposed by the built-in video system.
Educational software and resources providers saw the potential of Replay to deliver interactive video at a more affordable price than existing Laservision content, although it was noted that, at that time, Laservision still provided "the best quality, full-screen, moving image to date". Opportunities were perceived for making compilations of video clips available on CD-ROM for multimedia authoring purposes, although educational developers felt that the true value of the technology would be realised by making video like other forms of information, permitting its use in different contexts and works and thus offering children "control over the media". Educators also looked forward to more accessible authoring possibilities, with children being able to record, edit and incorporate their own video into their projects. However, the expense associated with handling video data, with the storage of one minute of video estimated at 60 MB, combined with the expense of commercial video digitisation, estimated at £100 per minute of video, meant that such possibilities would remain inaccessible for most users at that time. Indications that this situation would change were present in the QuickTime market, with it already supporting the creation of short movies in conjunction with video digitiser cards and editing tools such as Adobe Premiere.
Support for video authoring on the desktop emerged in 1993 with the Replay DIY product from Irlam Instruments: a single-width podule suitable for A540 and A5000 computers with 2 MB of RAM or more, these being the only models available at the time with the necessary performance. The podule accepted analogue video input from video cameras, recorders and laserdisc players, allowing the video to be previewed in a window on the desktop. While recording, no preview would be shown, and the hardware would digitise the audio and video input, transfer the data to the computer's memory, and this would then be sent straight to a hard disk. At its introduction, the video quality was limited to "normal Arm2 Replay, that is 256-colour, 160x128 pixels at 12.5 frames per second", although an upgrade to capture 25 frames per second was anticipated. Uncompressed video occupied around 21 MB per minute, but processing of such video using the provided Acorn Replay compression software would bring the size of the resulting video down to around 4 or 5 MB per minute. Compression was, however, relatively slow, since the compression scheme was asymmetric, meaning that decompression was fast enough to facilitate playback in real time, but compression could take "a few minutes for every few seconds of video". Nevertheless, the possibilities of video capture were predicted to "generate and maintain immense interest in the classroom, or even at home", and the digitiser's low cost (at £250 plus VAT) together with low-cost editing software such as Uniqueway's Empire (at £50 plus VAT) was regarded as surprising, possibly in light of the high cost of services previously needed to achieve similar results.
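As a rough, purely illustrative sketch of the storage arithmetic behind these figures (written in Python and not derived from any Acorn software), the fragment below estimates the raw data rate for the capture parameters quoted above (160 by 128 pixels, 256 colours at one byte per pixel, 12.5 frames per second), ignoring audio and file-format overheads, and shows the overall reduction implied by compressing roughly 21 MB per minute of captured data down to 4 or 5 MB per minute.

```python
# Illustrative storage arithmetic for desktop video capture; parameter values
# are taken from the Replay DIY figures quoted in the text, and audio and
# file-format overheads are deliberately ignored.

WIDTH, HEIGHT = 160, 128      # capture resolution
BYTES_PER_PIXEL = 1           # 256-colour modes store one byte per pixel
FPS = 12.5                    # frames per second

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
video_mb_per_minute = frame_bytes * FPS * 60 / 1e6

print(f"one frame: {frame_bytes} bytes (~{frame_bytes / 1024:.0f} KB)")
print(f"one minute of video data alone: ~{video_mb_per_minute:.1f} MB")

# Compressing the quoted ~21 MB per minute of captured data down to
# 4-5 MB per minute corresponds to an overall reduction of roughly 4:1 to 5:1.
for target in (4, 5):
    print(f"21 MB/min -> {target} MB/min is a ~{21 / target:.1f}:1 reduction")
```

At these parameters the video data alone comes to roughly 15 MB per minute, with audio samples and capture overheads presumably accounting for the remainder of the roughly 21 MB per minute quoted above.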
Further developments in the video authoring domain were brought to the platform by Eidos, who had developed an "offline non-linear editing system" around the Archimedes in 1989, involving the digitisation of source video and its storage on hard disks or magneto-optical media for use with editing software. Such software would be used to produce an "edit schedule list" based on editing operations performed on the digitised, "offline" video, and these editing details would subsequently be applied in an "online" editing session involving the source video, this typically residing on "linear" media such as tape. To support the more convenient offline editing environment, a highly efficient symmetric compression scheme known as ESCaPE (Eidos Software Compression and Playback Engine) had been devised, offering movie sizes of around 1.5 MB per minute. To remedy the time-consuming process of using Acorn's Replay compression software with the Replay DIY product, this being a consequence of the "Moving Lines" compression scheme emphasised by Replay at that time, Eidos introduced its own compression software for Replay DIY based on ESCaPE and given the same name. Together with the Eidoscope software, based on Eidos' professional Optima software, it was claimed that "no other computer platform has anything to match in terms of convenience and sheer usability" and that these developments would "encourage a lot more Archimedes users to have a go at making movies". In 1995, Computer Concepts offered a bundle featuring Eidoscope and the company's Eagle M2 "multimedia card" which featured audio and video capture, improved audio playback, and MIDI ports. Aimed at non-professional applications, Eidoscope was limited to editing movies up to a resolution of and did not support time codes.
Hardware
Graphical capabilities
The Archimedes machines (and their equivalents running RISC iX) used the VIDC1a video chip to provide a wide variety of screen resolutions, expanding on those available on the BBC Micro, including the following:
Since the video controller would not support display modes smaller than 20 KB, the lowest resolution modes were supported in the operating system by employing modes with twice the horizontal resolution and duplicating horizontally adjacent pixels.
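A minimal sketch of the pixel-doubling approach just described, written in Python purely for illustration rather than taken from RISC OS itself: each row of a logical low-resolution screen is written into a mode of twice the horizontal resolution by storing every pixel twice, keeping the framebuffer above the controller's minimum size.

```python
def double_horizontally(row):
    """Expand one row of logical pixels to twice the width."""
    doubled = []
    for pixel in row:
        doubled.extend([pixel, pixel])        # write each logical pixel twice
    return doubled

logical_row = [1, 2, 3, 4]                    # hypothetical palette indices
print(double_horizontally(logical_row))       # [1, 1, 2, 2, 3, 3, 4, 4]
```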
The introduction of RISC OS brought support for a number of new display modes including the following:
The A540 and A5000 supported additional display modes:
High-resolution monochrome display modes were offered by the A440, A400/1 series and A540:
Apparent confusion about monochrome monitor support upon the launch of the Archimedes models led Acorn to clarify that the A400 series had "extra circuitry" offering two additional display modes "of up to 1280 by 976 in monochrome, and 160 columns by 122 lines of text, but only using a special monitor", this being connected using two BNC sockets (one for signal and one for sync).
The A540 (and corresponding R-series workstations) offered three BNC sockets, adding one for a separate horizontal sync connection for certain monitors. Acorn suggested the 19-inch Taxan Viking and Philips M19P114 monitors, with the former being offered in a bundle with the R140 workstation. The Taxan Viking R140 product bundled the existing Viking product with appropriate cabling and produced a "rock steady" 66 Hz mode 23 display, albeit with mouse pointer corruption at the extreme right of the screen due to "a bug in the VIDC chip".
The A5000, unlike its predecessor the A540, did not support high-resolution monochrome modes.
Graphics expansions
An expansion to speed up the VIDC chip in the Archimedes from 24 MHz to 36 MHz was announced by Atomwide in 1990, offering higher resolution display modes for machines connected to multisync monitors. Although resolutions up to and were supported, flicker due to a decreased refresh rate was reported as a problem, with appearing to be more comfortable in this regard. The SVGA resolution of was also supported in up to 16 colours. One side-effect of increasing the frequency of the VIDC was to also increase the frequency of generated sounds, since the VIDC was also responsible for sound generation. VIDC enhancers were supplied by some monitor vendors together with the appropriate cable for Archimedes machines, although fitting the device still required approved service work to be performed. Monitors such as the Taxan 795 Multivision were only usable in multisync modes without the VIDC enhancer whose accompanying software sought to "redefine all modes" to be compatible with the display as well as providing new modes.
One drawback of VIDC enhancer solutions was the increased memory bandwidth used by the VIDC at its newly elevated frequency, slowing down machines when using higher resolution modes, particularly machines with ARM2 processors and slower memory busses. Consequently, other solutions were adopted to work around the limitations of the built-in display hardware, notably "graphics enhancers" such as the PCATS graphics enhancer from The Serial Port, and "colour cards" such as Computer Concepts' ColourCard and State Machine's G8 which provided a separate framebuffer, holding a copy of the normal screen memory, for use in generating a video signal independently of the system's main memory. This permitted higher refresh rates (up to 70 Hz) even for higher resolution modes, although the maximum size of the screen memory imposed by the VIDC () also imposed a limit on available resolutions and colour depths, with being the highest resolution 256 colour mode that could be supported. However, such cards were also able to support more flexible palettes in 256 colour modes than the VIDC, and for lower resolutions, greater colour depths offering over 32,000 colours could be supported. The ColourCard was reported to allow an ARM2 system to use a display mode with 16 colours (occupying 480 KB) with an operating speed of "160% of the speed of the considerably lower resolution Acorn mode 28", this being with 256 colours (occupying 300 KB).
State Machine, founded by former hardware designers from Computer Concepts and Watford Electronics, announced a range of colour card peripherals, starting with the G8 and G8+ in late 1992, followed by the G8 Professional, these cards being demonstrated at the BBC Acorn User show in 1992, as was the Computer Concepts ColourCard. One potentially significant difference between the different product ranges was the role of the VIDC, with the ColourCard employing a "video switch" that permitted the VIDC to generate an output signal independent of the card for traditional display modes, with the card only generating output for enhanced modes, whereas the State Machine cards were entirely responsible for output and thereby provided emulations of the traditional modes, this leading to a "letter-box" effect for some modes in early versions of the State Machine software and also causing compatibility issues with software, particularly games, that accessed VIDC registers directly to configure the display. Subsequent developments from State Machine brought the G16 card, offering application-specific support for 15 and 16 bits per pixel modes.
Alongside bandwidth constraints, a fundamental limitation to the size of VIDC framebuffers was imposed by the memory controller, limiting the size of framebuffers transferred to the VIDC through DMA to a specific 512 KB physical memory region. State Machine's ColourBurst card, announced together with its G16 card, employed memory mapping techniques to provide 1 MB of video RAM instead of the 512 KB of earlier cards and thus supporting larger screen modes. The ColourBurst was, when reviewed in late 1993, the first 24-bit colour card available for the Archimedes, also supporting various upgrades including the "video switch" capability absent from earlier cards, PAL encoding, and other professional capabilities.
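The resolution and colour depth limits mentioned above follow directly from screen-memory arithmetic: a framebuffer occupies width times height times bits per pixel, divided by eight. The short Python sketch below, purely illustrative, applies this to Acorn's mode 28 (640 by 480 in 256 colours, matching the 300 KB quoted earlier) and shows how many pixels fit within a 512 KB ceiling at various colour depths.

```python
def framebuffer_kb(width, height, bits_per_pixel):
    """Screen memory in kilobytes for a given resolution and colour depth."""
    return width * height * bits_per_pixel / 8 / 1024

# Acorn mode 28 (640x480 in 256 colours), as quoted earlier:
print(framebuffer_kb(640, 480, 8))        # 300.0 KB

# Maximum number of on-screen pixels allowed by a 512 KB screen memory limit:
for bpp in (1, 4, 8, 16):
    max_pixels = 512 * 1024 * 8 // bpp
    print(f"{bpp} bpp: up to {max_pixels:,} pixels")
```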
Coincidentally, ARM Limited announced the VIDC20 - the successor to the VIDC10 in the Archimedes - at around the same time as the introduction of the Computer Concepts and State Machine product ranges in late 1992. By late 1993, rumours about Acorn's next-generation system (eventually released as the Risc PC), particularly 24-bit colour support, led to suggestions of improved support for higher colour depths in RISC OS, accompanied by the observation in the context of State Machine's ColourBurst card that "it seems unlikely that another manufacturer will release such a powerful device before the launch of Acorn's new baby". In late 1993, Computer Concepts announced the ColourCard Gold, developed in conjunction with Acorn to offer 15 bits per pixel support in the desktop environment. Meanwhile, State Machine announced the ClusterCard for 33 MHz A5000 models, plugging into the memory controller socket and supporting upgrades to 8 MB of RAM alongside graphics enhancements offering 1 MB or 2 MB of video RAM. The ClusterCard, employing the G335 Cluster Module was reported to be the first graphics card for the Archimedes series not requiring the use of the VIDC.
With IBM PC compatible systems leaving the Archimedes "well behind the competition in the display stakes", the ClusterCard was seen as attempting a solution similar to a local bus architecture on the A5000, with the potential to "transform the A5000 into a serious graphics machine, with possibly as good a display potential as the next Acorn series equipped with VIDC20s". The launch of the Risc PC in 1994 demonstrated Acorn's successor to the Archimedes, to which State Machine responded with a product called ColourView, "an all-new replacement for the original G8 and G16 State Machine graphics cards", offering 16 bits per pixel desktop-compatible screen modes, with a modular version also available for the ClusterCard without the 1 MB framebuffer. The full version of the card was reportedly available for A300 series, A400 series, A5000 and A540 machines.
Somewhat distinct from general graphics enhancements, various products were also introduced to support the broadcasting industry and other professional imaging applications. In late 1990, Millipede Electronic Graphics announced an imaging product called APEX (Archimedes P3 Expansion) featuring "four P3 (pixel pipeline processor) chips, together with an Arm3 processor running at 27 MHz". With support for "broadcast quality graphics at 32 bits per pixel", hardware support for windows and sprites, emphasising real-time image combination and manipulation, the product was aimed at professional users and priced accordingly, with the version providing 4 MB of RAM projected to cost £2750. Nevertheless, a licensing agreement had been reached with Acorn to "enable Risc OS graphics functions to be fully emulated". Following up from this earlier product, Millipede offered an "all new Apex Imager" video card in early 1994 featuring the four custom chips, ARM3, FPA, and 16 MB of video RAM on a double-width podule costing £3975, this being virtually unchanged from the pricing of the original product from 1990. This product appears to make extensive use of FPGA devices and offers numerous video input and output facilities. Apex hardware was used by the Eidos video capture and compression solution, Thumper, which ran on a Risc PC and was able to process "MPEG 1 resolution video at full PAL frame rate in real time", being regarded in early 1995 as "the best digitiser for our needs on any platform" by Eidos' managing director. Previous Eidos capture solutions used A540 machines with 8 MB of RAM.
Sound and audio
The Archimedes was capable of producing eight-channel, 8-bit, stereo sound, with the video controller chip being responsible for sound generation, it having direct memory access capabilities to independently stream audio data to the output circuitry. Some users sought to bypass the audio filtering circuitry to improve sound from the external audio connector.
Floating-point arithmetic
The Archimedes did not provide hardware support for floating-point arithmetic as standard, but the system was designed so that a floating-point unit might be added, with a floating-point co-processor instruction set architecture having been defined by Acorn for programs to use. Accompanying this, a software module provided an emulation of such a co-processor, handling these additional instructions in software written using conventional ARM instructions. The co-processor was described as a "cut-down" ARM with only eight registers available instead of sixteen, offering instructions to transfer values to and from memory (supporting single, double, extended double and packed binary-coded decimal representations), to transfer values between the main CPU and co-processor, to transfer status information from the co-processor, to perform unary and binary operations on values, and to perform comparisons.
In the first generation of Archimedes 300 and 400 series machines, only the 400 series had the appropriate expansion capability to add a floating-point unit (FPU) or co-processor, although the emulator was supported on all models. The expansion capability was retained in the 400/1 series. The FPU expansion card was delivered for the R140 workstation and 400 series in 1989, priced at £599 plus VAT, and was based on the WE32206, with a "protocol converter chip" being used to translate between the ARM and the WE32206. The WE32206 card was also offered for Acorn's Springboard expansion card for IBM PC compatibles.
The Archimedes models based on the ARM3 processor supported a completely new "arithmetic co-processor" or "floating-point accelerator" known as the FPA. Released in 1993 for the R260 workstation and the A540 and A5000 machines, priced at £99 plus VAT, the FPA device—known specifically as the FPA10—was fitted in a dedicated socket on the processor card for the R260 and A540, or in a motherboard socket in the A5000. It offered a peak throughput of 5 MFLOPS at 26 MHz. The models officially supporting the FPA had been introduced some time prior to availability of the device, and various ARM3 upgrade cards for earlier models had also been made available with an FPA socket in anticipation of eventual availability. Fabrication of the device was performed by GEC Plessey Semiconductors and was reported to be in "an advanced stage of production" in early 1993. Availability remained unclear, with ARM releasing technical details indicating that the chip, at 134,000 transistors was reportedly "Arm's most complex IC to date" and comparing its performance to the MIPS R3010 floating-point co-processor, claiming a substantial power consumption advantage. Further details were given upon the eventual release of the FPA10, stating a 26 MHz operating frequency and a power consumption of 250 mW. Reception from major software producers such as Computer Concepts and Colton Software was cautious, with the former's products not making any use of floating-point instructions and thus not standing to benefit, and with the latter's using such instructions but indicating skepticism about any significant benefits in performance.
Observations from testing the FPA10 confirmed that applications such as Resultz and PipeDream 4—both Colton Software products—and other spreadsheets, whilst ostensibly standing to benefit as number processing applications, exhibited "no noticeable speed improvements", this being attributed to these applications' avoidance of unnecessary calculation and the more significant overhead of servicing a graphical user interface. Other programs such as Draw and ArtWorks—a Computer Concepts product—used their own arithmetic routines instead of the floating-point emulator (FPE) and, as anticipated, were therefore unable to take advantage of the accelerated floating-point instructions. However, various free of charge or low-cost programs ported from other systems, such as POV-Ray, plus selected native applications such as Clares' Illusionist and Oak Solutions' WorraCAD, did exhibit substantial performance gains from the FPA with speed-ups of between five and ten times. The Basic64 interpreter bundled with RISC OS, which was "much slower than Basic V normally" because it used the FPE whereas Basic V provided its own floating-point arithmetic routines, ended up "slightly faster" than Basic V due to observed speed-ups of around four to eleven times, with non-trigonometric operations benefiting the most. Programs compiled by Intelligent Interfaces' Fortran compiler were reported as running "some routines up to 20 times faster with the FPA10". The product was perceived as "good value" but having restricted usefulness given the general lack of support in many applications, these employing their own routines and techniques to attempt to provide performant arithmetic on the base hardware platform, and a lack of incentive amongst software producers to offer support without a large enough market of users having the FPA fitted.
With the FPA10 having finally become available but only rated to run at 25 MHz, and with ARM3 upgrades being delivered at frequencies as high as 35 MHz, a higher-rated part, the FPA11, supporting 33 MHz operation was developed and apparently delivered in products such as a processor card upgrade for the A540. ARM3 upgrades were also produced with 33 MHz ARM3 processors, but unlike their 25 MHz counterparts which were available with FPA10 co-processors already fitted, these faster cards were not supplied with FPA11 co-processors, perhaps due to availability issues with the faster part.
ARM3 upgrades
In early 1990, Aleph One introduced an upgrade board for Archimedes A300 and A400 series models featuring the ARM3 processor which had been designed by Acorn but was sold independently by VLSI Technology. Although the ARM2 employed by current models could reportedly be run at 20 MHz, it was only ever run at 8 MHz due to external limitations, these being the speed of the data bus and of the "relatively slow", but correspondingly relatively inexpensive, RAM devices in use. The ARM3 incorporated a 4 KB on-chip combined instruction and data cache, loosening such external constraints and thus permitting the processor to be run productively at the elevated 20 MHz frequency. With a processor running at this higher speed, the overall performance of a computer with the ARM3 upgrade was reported as double that of the machine without the upgrade ("on average, execution times were halved"), with programs performing input/output benefiting rather less ("a worst case of 30 percent improvement"). Original A300 and A400 series models, as opposed to the A400/1 series, required an upgrade to MEMC1a. One hundred percent compatibility with the ARM2 was claimed, and a facility was provided to disable the on-chip cache and to slow the clock to 8 MHz in order to handle software that ran too fast with the ARM3 running at full speed, but as originally provided, the ARM3 was not compatible with the existing hardware floating point co-processor solution due to the introduction of a different co-processor interface in the device, this interface eventually being used by the FPA device. The upgrade was introduced at a price of £684.24, with the MEMC1a costing £57.50 for those users who needed it.
By the end of 1991, an ARM3 upgrade had been offered for the A3000 by Aleph One in association with Atomwide and by Watford Electronics. Since the ARM2 was soldered directly to the motherboard in the A3000 using surface mounting techniques, the upgrade had to be performed by a fitting service, and prices included courier collection, fitting, testing and return within five working days. With the A5000 having been launched with a 25 MHz ARM3 fitted, these A3000 upgrade boards carried a processor running at this higher frequency relative to earlier upgrades. Originally, the Aleph One product had been priced at £468.83, but the announcement of a board by Watford Electronics led to a reduced price of £392.45. The Watford product had an introductory price of £274.95.
Other vendors produced ARM3 upgrades. In late 1992, Simtec Electronics announced a board with an additional socket for the FPA device, thus allowing older machines to join the A540 and A5000 in potentially taking advantage of it. By this time, prices for ARM3 upgrades had been reduced to the point that this Simtec upgrade cost only £175 plus VAT. Competitors including Ifel and CJE Micros followed Simtec's lead and announced similar combined ARM3/FPA upgrades. In contrast, Aleph One stated that the FPA would "not be available for a long time yet", indicating the pursuit of "a better solution based on the newer Arm600 chip plus an FPA". Other vendors had apparently ruled out similar ARM600-based products on the basis of cost. In 1993, Ifel later announced a 35 MHz ARM3 upgrade based on a limited quantity - approximately 1500 - of available suitably rated parts, these having a ceramic package whose volume ruled out its use in machines with limited internal space, making the upgrade suitable for A300, A400 or R140 machines. A combined ARM3/FPA upgrade with the faster ARM3 was under consideration, although the lack of suitably rated FPA chips meant that a switch would be provided to manually change the clock frequency between 25 MHz and 35 MHz. A target price of £199 including VAT was estimated.
Prior to the availability of the FPA, Simtec reduced the price of its combined ARM3/FPA board to £165 plus VAT. The company also released a "turbo RAM" upgrade for ARM250-based machines to provide similar performance benefits to an ARM3 upgrade, replacing the RAM with a faster type that then permitted the processor to be run at a higher frequency, thus pursuing the alternative approach to enhancing system performance (increasing both the processor and memory speed) to that pursued by ARM3 upgrades (introducing a faster processor with a cache). With the upgrade, performance of these machines was reported as increasing from 7 MIPS to 10 MIPS, this compared to almost 13 MIPS for a 25 MHz ARM3. By employing a 16 MHz clock signal, as envisaged by Acorn in the design of the A3010, in conjunction with dynamic RAM devices with a 70 ns access time, the upgrade provided a total of 4 MB of RAM and a 40 percent performance improvement. Unlike standard RAM upgrades, the turbo upgrade needed to be fitted at a suitable facility, and the board was priced slightly higher than a standard RAM upgrade at £129 plus VAT. A "super turbo" version of the board with 20 MHz crystal and 45 ns dynamic RAM devices was reviewed and apparently available subject to component availability, reportedly achieving 12.25 MIPS.
Aleph One, having founded the ARM3 upgrade industry, found that increased competition from "six or eight companies making Arm3 upgrades" drove down prices to the point that "margins fell, and the bottom fell out of the Arm3 market". However, revenues from ARM3 upgrades allowed Aleph One to pursue the development of IBM PC-compatible podule expansions and eventually the PC processor card for the Risc PC, these having "a higher intellectual content than Arm3 upgrades" and being more difficult for potential competitors to make. Plans were indicated to develop a PowerPC processor card for the Risc PC. Neither the PowerPC upgrade for the Risc PC nor the earlier ARM600-based upgrade for the Archimedes series appeared, with Acorn itself abandoning plans to combine newer ARM600 or ARM700 parts with FPA devices to provide improved floating point performance.
ARM3 upgrades were produced for several years, but with the ARM3 part being "officially discontinued" by its manufacturer VLSI in 1996, upgrade vendors such as IFEL were predicting scarcity and unable to guarantee further supplies of such products. Demand for such upgrades, even in 1996, was reported as "steady" with schools still upgrading "batches of old A300 and A400 machines". Later still, in 1997, Simtec announced a "special batch" of ARM3 upgrades for A300 and A400 series machines and the A3000, featuring a socket for the 25 MHz FPA10 or 33 MHz FPA11, with the former being supplied already fitted for a total product cost of £199 plus VAT.
IBM PC-compatible podules
Acorn initially planned to produce an IBM PC-compatible system on a podule (peripheral module), complete with 80186 processor (running at 10 MHz) and disk drive support. Subsequent pricing and competitiveness considerations led to the product being shelved. However, in late 1991, hardware supplier Aleph One announced a PC podule based on a 20 MHz Intel 80386SX processor with VGA display capability. Launched in early 1992, the podule fitted with 1 MB of RAM cost £595, whereas a 4 MB version cost £725. Known as the 386PC, the expansion was "in effect, a PC within your Archimedes" whose RAM could be upgraded from the minimum of 1 MB, the price of this configuration having fallen to £495 at the time of its review, to the maximum of 4 MB, with this configuration also being offered at a reduced price of £625. A socket on the board permitted the 80387 maths co-processor to be fitted for hardware floating point arithmetic support, this costing an extra £120. Integration of the PC system involved the Archimedes providing display, keyboard and disk support. In the initial version, the supplied 386PC application would put the Archimedes into dedicated display mode and thus take over the display, but subsequent versions promised operation of the PC in a window, much like the updated PC Emulator from the era. Screen memory requirements were around 256 KB for MDA and CGA, with EGA and VGA requiring another 256 KB. Separate serial and parallel ports were fitted on the expansion board due to limitations with the ports on existing Archimedes machines, but integration with those ports was also planned for subsequent versions of the product.
In late 1992, Aleph One reduced the price of the 386-based card by £100, also upgrading the processor to a 25 MHz part, and introduced a card featuring a 25 MHz Cyrix 486SLC processor, with the new card retaining the maths co-processor option of the earlier product. The stated performance of this new card was approximately twice that of the 386-based card but only "40 percent of the performance of a standard 33 MHz 486DX PC clone". However, upgraded Windows drivers reportedly allowed even the 386-based card to exceed the graphical performance of such a 486-based clone, effectively employing the host Archimedes as a kind of "Windows accelerator". A subsequent review moderated such claims somewhat, indicating a Windows performance "not noticeably better than an average un-accelerated 386SX PC clone", although acceleration support was expected to improve, with device drivers for various direct drive laser printers also expected. The product was priced at £495 for the 1 MB version and £595 for the 4 MB version, with a future revision of the product anticipated that would support up to 16 MB of RAM.
In 1993, Aleph One collaborated with Acorn to produce Acorn-branded versions of the PC cards for use with the A3020 and A4000 which used a distinct "mini-podule expansion system". The 25 MHz 386SX and 486SLC cards were offered in this profile to provide DOS and Windows compatibility, branded as the PC386 and PC486, priced at £275 and £499 respectively. In late 1993, the supplied software was upgraded and discounts to the products announced, bringing the respective prices down to £225 and £425. Acorn also offered bundles of the A4000 with a hard drive and each of the cards. Coincidentally at this time, with speculation building about future Acorn computer products, Acorn's product marketing manager had been reported as suggesting that such products "would have an empty Intel socket for customers to add PC Dos and Windows compatibility". Such remarks were clarified by Acorn's technical director, indicating that an Intel "second processor" was merely an option in an architecture supporting multiple processors. Ultimately, Acorn would release the Risc PC with dual processor capabilities and support for using a "low cost (£99 upwards) plug-in 486 PC processor or other CPUs" alongside an ARM processor.
Redesigned PC cards were released in 1994, introducing the option of a faster 50 MHz 486SLC2 processor for a reported doubling of the performance over the fastest existing cards. Up to 16 MB of SIMM-profile RAM could be fitted, and a local hard drive controller was added. The supplied software was also upgraded to support Windows in a resolution of at up to 16 colours, and optional network driver support was available to use the card as a Novell NetWare client and for Windows for Workgroups 3.11. Pricing remained similar to earlier models. Reported performance was better than the previous generation of cards but "still slow compared to all but the most basic of modern PCs, but certainly usable". The Windows User benchmarks rated the performance as similar to a fast 386SX-based system or a "standard" 386DX-based system, with the faster processor yielding a more favourable rating, but with the hard drive and graphics tests bringing the overall rating down. Use of a hard drive fitted directly to the card, using its own dedicated IDE interface, was reported as providing up to ten times the level of hard drive performance relative to using the system's own drive, but use of the SmartDrive caching software made any resulting performance difference marginal.
Parallel and data processing
A range of podules providing access to parallel processing capabilities using Inmos Transputer processors were announced by Gnome Computing in late 1989. Aside from a "Link Adaptor" podule for interfacing to external Transputer hardware, the "TRAM Motherboard" podule combined the Link Adaptor's interfacing logic with the hosting of up to four "TRAMs" (Transputer plus RAM modules), providing a complete development system based on the Archimedes. Also offered was a "Transputer Baseboard" podule featuring a T425 or T800 with up to 8 MB of RAM. A single podule with four TRAMs, each employing a T800 processor, was stated as giving 40 MIPS of performance, with a hypothetical 160 MIPS available on an Archimedes with four podule slots.
Digital signal processing capabilities were provided by the Burden Neuroscience 56001 DSP Card, originally developed by the Burden Neurological Institute as in-house hardware for use in conjunction with Archimedes systems but marketed by The Serial Port. This card was fitted as a single-width podule but, unusually, needed manual configuration instead of identifying itself to the host computer. The podule itself offered a 32 MHz Motorola 56001 digital signal processor together with 192 KB of RAM, two 16-bit analogue-to-digital converters, two 16-bit digital-to-analogue converters, and serial communications capabilities. A 25-pin connector provided the means to interface the board to other hardware. An assembler was provided, although this reportedly required Acorn's Desktop Development Environment to function, and software was also provided to interact with the board, view memory and register contents, and to visualise memory ranges in real time. Described as appropriate for "high speed analogue data acquisition or output" supporting real time signal processing, the product was considered "a useful 56001 development test bed", requiring a certain level of expertise, but was also considered good value at a price of £449 plus VAT.
CD-ROM and related storage
CD-ROM technology was introduced to the Archimedes range in 1990 with the launch of Next Technology's CD-ROM solution for the A3000 and earlier Archimedes models. Combining an SCSI interface and CD-ROM drive and supplied with a sample disc for a total price of £995, the solution provided a filing system so that standard CD-ROM media could be browsed and read like any other kind of disc. An application was also provided to play audio tracks on CD Audio and mixed-format discs through the drive's headphone socket. The drive itself used a caddy to hold the discs inserted into the drive. One limitation experienced on RISC OS was with the content on various CD-ROM titles, this often being designed for MS-DOS and featuring DOS-only software to offer search and database-related functionality. Next Technology aimed to remedy this situation by offering a service to let users create their own CD-ROMs at around £300 per disc, leading to the initial conclusion that schools and institutional users would benefit from the format much more than home users.
Two years on from the introduction of CD-ROM products, adoption of the technology was still at a "tentative state", with £8 million having been spent on equipment and an estimated 3,000 drives deployed in UK schools. Drive prices had fallen significantly, from around £1,000 to £300 and with a further decline to £200 anticipated. As a significant technology in the delivery of multimedia content, the focus had shifted from merely using CD-ROM as a cheap storage medium for large amounts of graphics and text to aspirations of providing "high-quality, full-screen graphics coupled with hi-fi stereo sound" on CD media, with the principal challenge identified as being able to deliver compressed video that either a computer or a drive could decompress without compromising video quality or introducing incompatibilities between different manufacturers' products.
Acorn's video solution for its own computers was the Replay system, introducing compression formats and associated software for playback and authoring. However, laserdisc technology, which had been used several years earlier by Acorn for interactive video applications, notably in the BBC Domesday Project, was still seen as being a "promising rival" to CD-based video formats, having finally "become successful in multimedia training" and by then "being aimed at well-heeled home video enthusiasts". The read-only nature of CD-ROM discs was also seen as a "wounding flaw", leaving users to consider alternatives for convenient bulk storage, with magneto-optical drives emerging at this time. Nevertheless, CD-ROM adoption was seen as inevitable, particularly given the format's benefits for holding large amounts of text and making the searching of such text convenient, and with government initiatives having helped to make an estimated 100 titles available for both MS-DOS and RISC OS. The dual-function nature of the media and the ability to use drives to play audio also made such products generally attractive purchases, particularly for home users and with Photo CD also regarded as an attraction, although the introduction of Philips' CD-i and Commodore's CDTV risked a level of confusion in this market as well as presenting another challenge in terms of compatibility for Acorn's own products and technologies.
Acorn would go on to announce Photo CD support in its products in early 1993, with operating system and application enhancements being delivered by the end of that year. Although the video and memory capabilities of the Archimedes machines were generally unable to take advantage of the higher colour depths or the largest sizes of the scanned images on Photo CD media, the introduction of future hardware from Acorn, featuring the next generation of video controller from ARM and supporting 24-bit colour displays, was anticipated. Support for multi-session CD-ROMs entailed some upgrades to existing SCSI interfaces as well as the use of drives with the appropriate capabilities such as Acorn's own Multimedia Expansion Unit.
List of models
Also produced, but never sold commercially were:
A500: 4 MB RAM, ST506 interface, Archimedes development machine
A680 and M4: 8 MB RAM, SCSI on motherboard, RISC iX development machines
Impact
A mid-1987 Personal Computer World preview of the Archimedes based on the "A500 Development System" expressed enthusiasm about the computer's performance, remarking that it "felt like the fastest computer I have ever used, by a considerable margin", and indicating that the system deserved success in the education market and might have more success than Acorn's earlier models in the business market, comparing favourably to the Mac II or IBM PS/2 80. Similar enthusiasm was reflected by the same writer in a Byte magazine preview of the A310 the following month. However, dissatisfaction with the availability of essential applications, such as the lack of a word processor specifically written for the system at its launch, and the incoherent user experience presented by early applications, highlighted perceived deficiencies with the product from the perspective of users and potential users.
With the imminent arrival of RISC OS for the Archimedes, later coverage around the start of 1989 praised the desktop and supplied applications, noting that "RISC OS is everything the Archimedes' original Desktop should have been but wasn't", and looked forward to future applications from Acorn and third parties, only lamenting that it was "a shame that this impressive environment was not in place at the Archimedes' launch, but it's still not too late for it to turn some heads".
By early 1991, 100,000 Archimedes machines had been sold, with the A3000 being the largest selling computer in UK schools, with Acorn's Archimedes and Master 128 accounting for 53% of sales in an eight-month period during 1990, and with the 32-bit machines "outselling the Master 128 by a factor of two to one". By mid-1992, a reported 180,000 Archimedes machines had been sold, again due to strong A3000 sales. By 1994 and the launch of the Risc PC, over 300,000 Archimedes machines had been sold, and by the launch of the StrongARM J233 variant of the Risc PC in 1997, over 600,000 Archimedes, A-series and Risc PC systems had been sold.
Performance
The Archimedes was one of the most powerful home computers available during the late 1980s and early 1990s, with its CPU outperforming the Motorola 68000 found in both the cheaper Amiga 500 and Atari ST machines as well as the more expensive Macintosh and Amiga 2000. Although a 68000 has a performance rating of around , the 68000-based Amiga 1000 reportedly achieved around when benchmarked as a system. In comparison, systems based on the ARM2, such as the BBC A3000, produced Dhrystone benchmark results ranging from 4728 () up to 5972 (), depending on the operating system version and display configuration.
(A VAX 11/780 running VMS 4.2 produced the baseline Dhrystone result of 1757.)
An Amiga 2000 running AmigaOS would reportedly have benefited from an upgrade to a 68030 processor (effectively becoming an Amiga 2500), thus increasing its performance rating to as high as , but Acorn's low-end A3010 with an ARM250 processor was capable of Dhrystone benchmark results ranging from 5500 () up to 8871 (). An Archimedes system such as the A410/1 upgraded to use an ARM3 could achieve a Dhrystone benchmark result of 18367 (), with the ARM3-based A5000 achieving a reported , rising to in its variant. ARM3 upgrades were initially rather expensive but decreased significantly in price and were available for all ARM2 systems, even the relatively inexpensive A3000. Acorn's ARM3-based machines were generally priced for business or institutional users, however.
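Dhrystone scores of this era were conventionally normalised against the VAX 11/780 baseline of 1757 Dhrystones per second to give an approximate "VAX MIPS" rating. As a rough illustration, the Python sketch below applies that conversion to the scores quoted above; the configuration labels are informal summaries rather than official model descriptions.

```python
BASELINE = 1757   # VAX 11/780 running VMS 4.2, as noted below

# Dhrystone scores quoted in the text (labels are informal summaries).
scores = {
    "A3000 (ARM2), slower configuration": 4728,
    "A3000 (ARM2), faster configuration": 5972,
    "A3010 (ARM250), slower configuration": 5500,
    "A3010 (ARM250), faster configuration": 8871,
    "A410/1 with ARM3 upgrade": 18367,
}

for machine, dhrystones in scores.items():
    print(f"{machine}: about {dhrystones / BASELINE:.1f} VAX MIPS")
```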
Only the Amiga 4000 with 68040 CPU (or suitably upgraded Amiga 2000) would exceed such figures, with reported, thereby being comparable to Acorn's Risc PC 600 ( to .) With development of ARM technologies having been transferred to ARM Limited as a separate company, the performance advantages of Acorn's ARM-based computers, maintained by the transition from the ARM2 to ARM3, eroded somewhat in the early 1990s relative to competitors using processors from established vendors such as Intel and Motorola, as new ARM processors belatedly arrived offering more modest performance gains over their predecessors. With ARM Limited focusing on embedded applications, it was noted that "the large performance lead Arm2 and Arm3 once enjoyed" over contemporary Intel processors was over, at least for the time being.
Education
The range won significant market share in the education markets of the UK, Ireland, Australia and New Zealand. Acorn's considerable presence in primary and secondary education had been established through the Archimedes' predecessors – the BBC Micro and BBC Master – with the Archimedes supplementing these earlier models to see Acorn's products collectively representing over half of the installed computers in secondary schools at the start of the 1990s. The Archimedes range was available in the US and Canada via Olivetti Canada.
In 1992, the Tesco supermarket chain initiated its Computers for Schools scheme in association with Acorn, offering vouchers for every £25 spent in Tesco stores that were redeemable against software and hardware products including complete computer systems, with this promotional campaign taking place over a six week period. Over 15,000 schools registered to participate in the scheme and over 22 million vouchers were issued during the campaign period, placing the estimated value of the distributed products at over , although the actual value of distributed products was later reported as . Tesco and Acorn repeated the scheme in 1993 on the basis of the response to the previous year's campaign, distributing software and hardware at an estimated value of to over 11,000 schools including 7,000 computers, and even introducing Acorn computers to some schools for the first time.
Despite the benefit to Acorn of expanding its customer base, dissatisfaction was expressed by dealers and software companies about the effects of the scheme, with anecdotes emerging of a reluctance to buy equipment that could be obtained for free, thus harming dealer revenues, although Acorn's education marketing manager argued that the scheme's effect was generally positive and actually produced sales opportunities for dealers. The inclusion of software products in the scheme was regarded by one commentator as harmful to both the companies whose products were featured, these "not making enough profit from the transaction", and to those whose products were not, these seeing potential customers choose their competitors' "free" products. Noting that the scheme was "not purely philanthropic", concern was expressed about the effect on the Acorn market and that schools were needing to "resort to charities and publicity stunts to get the basic tools to do the job". In response to such criticism, independent software titles were dropped from the scheme in 1994, which ultimately distributed products to over 10,000 schools including 4,000 computers, with a total of 15,000 computers having been given away over the first three years of the scheme.
With Tesco having expanded its presence in Scotland through acquisitions, the Tesco scheme was extended to Scotland for the first time in 1995. Alongside updates to the featured product selection, the possibility was introduced of saving unredeemed vouchers for redemption in the 1996 campaign. By the end of the 1996 campaign, worth of products had been distributed, with the scheme having distributed products worth a total of , including 26,000 Acorn computers in its first five years.
By the mid to late 1990s, the UK educational market began to turn away from Acorn's products towards IBM PC compatibles, with Acorn and Apple establishing a joint venture, Xemplar, to market these companies' products in the education sector as part of a strategy to uphold their market share. Through Xemplar's involvement in the Computers for Schools scheme, Apple products were featured for the first time in the 1996 campaign. Xemplar's involvement continued in subsequent years, introducing information technology training for teachers in 1998, and seeking to offer Acorn products in the 1999 campaign despite the turmoil around Acorn as the company sought to move away from the desktop computing market, subsequently selling its stake in Xemplar to Apple. In 2000, Tesco changed its partner in the Computers for Schools scheme from Xemplar to RM plc.
Acorn conducted other promotional initiatives towards the education sector. The Acorn Advantage programme, launched in September 1994, offered a loyalty scheme whereby points were accrued through purchases and redeemed for "curriculum resources" that included non-computing items such as musical and scientific instruments as well as computer hardware. Several commercial partners were involved in the scheme such as Fina, which awarded vouchers with petrol purchases that could be exchanged for points, and the Midland Bank which would donate points to schools joining its Midbank school-based banking system. An Acorn-branded Visa credit card would also generate Advantage points for nominated schools.
Legacy
Between 1994 and 2008, a model superseding the Archimedes computer, the Risc PC, was used in television for broadcast automation, programmed by the UK company Omnibus Systems. Original desktop models and custom-made 19-inch rack models were used to control and automate multiple television broadcast devices from other manufacturers in a way that was unusual at the time. It was used at several large European television stations including the BBC, NRK and TMF (NL, UK).
Also, between 1994 and 2004, the Archimedes and Risc PC models were used for teleprompters in television studios. The hardware was easy to adapt for TV broadcast use and was cheaper than other hardware available at the time.
See also
The Fourth Dimension (company)
RISC OS character set
:Category:Acorn Archimedes games
References
Notes
External links
The Ultimate Acorn Archimedes talk by Matt Evans
Acorn systems page at Old-Computers.com
Acorn Archimedes at Flatbatteries
Chris's Acorns: Archimedes
Acorn Computers
ARM-based home computers
Home computers
Computers designed in the United Kingdom
Personal computers
ARM architecture
32-bit computers
|
311632
|
https://en.wikipedia.org/wiki/Video%20game%20programmer
|
Video game programmer
|
A game programmer is a software engineer, programmer, or computer scientist who primarily develops codebases for video games or related software, such as game development tools. Game programming has many specialized disciplines, all of which fall under the umbrella term of "game programmer". A game programmer should not be confused with a game designer, who works on game design.
History
In the early days of video games (from the early 1970s to mid-1980s), a game programmer also took on the job of a designer and artist. This was generally because the abilities of early computers were so limited that having specialized personnel for each function was unnecessary. Game concepts were generally light and games were only meant to be played for a few minutes at a time, but more importantly, art content and variations in gameplay were constrained by computers' limited power.
Later, as specialized arcade hardware and home systems became more powerful, game developers could develop deeper storylines and could include such features as high-resolution and full color graphics, physics, advanced artificial intelligence and digital sound. Technology has advanced to such a great degree that contemporary games usually boast 3D graphics and full motion video using assets developed by professional graphic artists. Nowadays, the derogatory term "programmer art" has come to imply the kind of bright colors and blocky design that were typical of early video games.
The desire for adding more depth and assets to games necessitated a division of labor. Initially, art production was relegated to full-time artists. Next game programming became a separate discipline from game design. Now, only some games, such as the puzzle game Bejeweled, are simple enough to require just one full-time programmer. Despite this division, however, most game developers (artists, programmers and even producers) have some say in the final design of contemporary games.
Disciplines
A contemporary video game may include advanced physics, artificial intelligence, 3D graphics, digitised sound, an original musical score, complex strategy and may use several input devices (such as mice, keyboards, gamepads and joysticks) and may be playable against other people via the Internet or over a LAN. Each aspect of the game can consume all of one programmer's time and, in many cases, several programmers. Some programmers may specialize in one area of game programming, but many are familiar with several aspects. The number of programmers needed for each feature depends somewhat on programmers' skills, but mostly are dictated by the type of game being developed.
Game engine programmer
Game engine programmers create the base engine of the game, including the simulated physics and graphics disciplines. Increasingly, video games use existing game engines, either commercial, open source or free. They are often customized for a particular game, and these programmers handle these modifications.
Physics engine programmer
A game's physics programmer is dedicated to developing the physics a game will employ. Typically, a game will only simulate a few aspects of real-world physics. For example, a space game may need simulated gravity, but would not have any need for simulating water viscosity.
Since processing cycles are always at a premium, physics programmers may employ "shortcuts" that are computationally inexpensive, but look and act "good enough" for the game in question. In other cases, unrealistic physics are employed to allow easier gameplay or for dramatic effect. Sometimes, a specific subset of situations is specified in advance and the physical outcomes of those situations are stored in a record of some sort, so that they never need to be computed at runtime at all.
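A minimal sketch of the precomputed-outcome shortcut just described, written in Python with entirely hypothetical names and values (it is not taken from any particular game): a lookup table stands in for runtime simulation.

```python
# Outcomes for a small, fixed set of situations are authored ahead of time
# and stored in a table; nothing is simulated while the game is running.
PRECOMPUTED_OUTCOMES = {
    # (object, surface) -> canned result
    ("crate", "concrete"):   {"animation": "thud",   "bounce_height": 0.0},
    ("crate", "trampoline"): {"animation": "bounce", "bounce_height": 2.5},
    ("barrel", "concrete"):  {"animation": "roll",   "bounce_height": 0.1},
}

def resolve_impact(obj, surface):
    # Fall back to a cheap default rather than running a real simulation.
    return PRECOMPUTED_OUTCOMES.get((obj, surface),
                                    {"animation": "thud", "bounce_height": 0.0})

print(resolve_impact("crate", "trampoline"))
```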
Some physics programmers may even delve into the difficult tasks of inverse kinematics and other motions attributed to game characters, but increasingly these motions are assigned via motion capture libraries so as not to overload the CPU with complex calculations.
Graphics engine programmer
Historically, this title usually belonged to a programmer who developed specialized blitter algorithms and clever optimizations for 2D graphics. Today, however, it is almost exclusively applied to programmers who specialize in developing and modifying complex 3D graphic renderers. Some 2D graphics skills have just recently become useful again, though, for developing games for the new generation of cell phones and handheld game consoles.
A 3D graphics programmer must have a firm grasp of advanced mathematical concepts such as vector and matrix math, quaternions and linear algebra.
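As a small illustration of the kind of mathematics involved (a hypothetical sketch, not a production math library), the following C++ fragment builds a unit quaternion from an axis and an angle and uses it to rotate a vector:

// Minimal quaternion/vector illustration; struct and function names are invented.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

Vec3 Cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

struct Quat { float w, x, y, z; };   // w + xi + yj + zk, assumed unit length

Quat FromAxisAngle(Vec3 axis, float radians) {
    float s = std::sin(radians * 0.5f);
    return { std::cos(radians * 0.5f), axis.x * s, axis.y * s, axis.z * s };
}

// v' = v + 2w(u x v) + 2(u x (u x v)) for a unit quaternion q = (w, u).
Vec3 Rotate(const Quat& q, const Vec3& v) {
    Vec3 u{ q.x, q.y, q.z };
    Vec3 uv  = Cross(u, v);
    Vec3 uuv = Cross(u, uv);
    return { v.x + 2.0f * (q.w * uv.x + uuv.x),
             v.y + 2.0f * (q.w * uv.y + uuv.y),
             v.z + 2.0f * (q.w * uv.z + uuv.z) };
}

int main() {
    // Rotate the x axis 90 degrees about the z axis; expect roughly (0, 1, 0).
    Quat q = FromAxisAngle({ 0.0f, 0.0f, 1.0f }, 3.14159265f * 0.5f);
    Vec3 r = Rotate(q, { 1.0f, 0.0f, 0.0f });
    std::printf("(%.2f, %.2f, %.2f)\n", r.x, r.y, r.z);
}

Graphics programmers use quaternions like this for camera and character orientation because they avoid gimbal lock and interpolate smoothly, before converting to matrices for the renderer.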
Skilled programmers specializing in this area of game development can demand high wages and are usually a scarce commodity. Their skills can be used for video games on any platform.
Artificial intelligence programmer
An AI programmer develops the logic used to simulate intelligence in enemies and opponents. It has recently evolved into a specialized discipline, as these tasks used to be implemented by programmers who specialized in other areas. An AI programmer may program pathfinding, strategy and enemy tactic systems. This is one of the most challenging aspects of game programming and its sophistication is developing rapidly. Contemporary games dedicate approximately 10 to 20 percent of their programming staff to AI.
Some games, such as strategy games like Civilization III or role-playing video games such as The Elder Scrolls IV: Oblivion, use AI heavily, while others, such as puzzle games, use it sparingly or not at all. Many game developers have created entire languages that can be used to program their own AI for games via scripts. These languages are typically less technical than the language used to implement the game, and will often be used by the game or level designers to implement the world of the game. Many studios also make their games' scripting available to players, and it is often used extensively by third party mod developers.
The AI technology used in games programming should not be confused with academic AI programming and research. Although both areas do borrow from each other, they are usually considered distinct disciplines, though there are exceptions. For example, the 2001 game by Lionhead Studios Black & White features a unique AI approach to a user controlled creature who uses learning to model behaviors during game-play. In recent years, more effort has been directed towards bridging promising fields of AI research and game AI programming.
Sound programmer
Not always a separate discipline, sound programming has been a mainstay of game programming since the days of Pong. Most games make use of audio, and many have a full musical score. Computer audio games eschew graphics altogether and use sound as their primary feedback mechanism.
Many games use advanced techniques such as 3D positional sound, making audio programming a non-trivial matter. With these games, one or two programmers may dedicate all their time to building and refining the game's sound engine, and sound programmers may be trained or have a formal background in digital signal processing.
Scripting tools are often created or maintained by sound programmers for use by sound designers. These tools allow designers to associate sounds with characters, actions, objects and events while also assigning music or atmospheric sounds for game environments (levels or areas) and setting environmental variables such as reverberation.
Gameplay programmer
Though all programmers add to the content and experience that a game provides, a gameplay programmer focuses more on a game's strategy, implementation of the game's mechanics and logic, and the "feel" of a game. This is usually not a separate discipline, as what this programmer does usually differs from game to game, and they will inevitably be involved with more specialized areas of the game's development such as graphics or sound.
This programmer may implement strategy tables, tweak input code, or adjust other factors that alter the game. Many of these aspects may be altered by programmers who specialize in these areas, however (for example, strategy tables may be implemented by AI programmers).
Scripter
In early video games, gameplay programmers would write code to create all the content in the game—if the player was supposed to shoot a particular enemy, and a red key was supposed to appear along with some text on the screen, then this functionality was all written as part of the core program in C or assembly language by a gameplay programmer.
Today the core game engine is usually separated from gameplay programming. This has several development advantages. The game engine deals with graphics rendering, sound, physics and so on while a scripting language deals with things like cinematic events, enemy behavior and game objectives. Large game projects can have a team of scripters to implement these sorts of game content.
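The split can be illustrated with a deliberately tiny, hypothetical C++ sketch (all action names are invented; real games typically expose similar hooks to a full scripting language such as Lua): the engine implements a few named actions once, and level content is authored as data that a scripter can change without recompiling the engine.

// Tiny illustration of the engine/script split: engine-side actions plus data-driven content.
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// One line of "script": an action name plus its argument.
struct Command { std::string action; std::string arg; };

int main() {
    // Engine-side: actions implemented once in C++ by gameplay programmers.
    std::map<std::string, std::function<void(const std::string&)>> actions = {
        { "show_text",   [](const std::string& a) { std::cout << "TEXT: "  << a << "\n"; } },
        { "spawn_enemy", [](const std::string& a) { std::cout << "SPAWN: " << a << "\n"; } },
        { "give_item",   [](const std::string& a) { std::cout << "ITEM: "  << a << "\n"; } },
    };

    // Content-side: what a scripter or designer would author as data
    // (in practice loaded from a file or written in a scripting language).
    std::vector<Command> level_script = {
        { "show_text",   "You hear something behind the door..." },
        { "spawn_enemy", "guard" },
        { "give_item",   "red_key" },
    };

    // The engine simply walks the script and dispatches each command.
    for (const auto& cmd : level_script) {
        auto it = actions.find(cmd.action);
        if (it != actions.end()) it->second(cmd.arg);
        else std::cerr << "unknown action: " << cmd.action << "\n";
    }
}

The design payoff is that the commands can be rearranged, extended or rewritten by designers without touching the compiled engine code.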
Scripters usually are also game designers. It is often easier to find a qualified game designer who can be taught a script language as opposed to finding a qualified game designer who has mastered C++.
UI programmer
This programmer specializes in programming user interfaces (UIs) for games. Though some games have custom user interfaces, this programmer is more likely to develop a library that can be used across multiple projects. Most UIs look 2D, though contemporary UIs usually use the same 3D technology as the rest of the game so some knowledge of 3D math and systems is helpful for this role. Advanced UI systems may allow scripting and special effects, such as transparency, animation or particle effects for the controls.
Input programmer
Input programming, while usually not a job title, or even a full-time position on a particular game project, is still an important task. This programmer writes the code specifying how input devices such as a keyboard, mouse or joystick affect the game. These routines are typically developed early in production and are continually tweaked during development. Normally, one programmer does not need to dedicate their entire time to developing these systems. A real-time motion-controlled game utilizing devices such as the Wii Remote or Kinect may need a very complex, low-latency input system, while the HID requirements of a mouse-driven turn-based strategy game such as Heroes of Might and Magic are significantly simpler to implement.
Network programmer
This programmer writes code that allows players to compete or cooperate, connected via a LAN or the Internet (or in rarer cases, directly connected via modem). Programmers implementing these game features can spend all their time in this one role, which is often considered one of the most technically challenging. Network latency, packet compression, and dropped or interrupted connections are just a few of the concerns one must consider. Although multi-player features can consume the entire production timeline and require the other engine systems to be designed with networking in mind, network systems are often put off until the last few months of development, adding additional difficulties to this role. Some titles have had their online features (often considered lower priority than the core gameplay) cut months away from release due to concerns such as lack of management, design forethought, or scalability. Virtua Fighter 5 for the PS3 is a notable example of this trend.
Game tools programmer
The tools programmer can assist the development of a game by writing custom tools for it. Game development tools often contain features such as script compilation, importing or converting art assets, and level editing. While some tools used may be COTS products such as an IDE or a graphics editor, tools programmers create tools with specific functions tailored to a specific game which are not available in commercial products. For example, an adventure game developer might need an editor for branching story dialogs, and a sports game developer could use a proprietary editor to manage players and team stats. These tools are usually not available to the consumers who buy the game.
Porting programmer
Porting a game from one platform to another has always been an important activity for game developers. Some programmers specialize in this activity, converting code from one operating system to work on another. Sometimes, the programmer is responsible for making the application work not for just one operating system, but on a variety of devices, such as mobile phones. Often, however, "porting" can involve re-writing the entire game from scratch as proprietary languages, tools or hardware make converting source code a fruitless endeavour.
This programmer must be familiar with both the original and target operating systems and languages (for example, converting a game originally written in C++ to Java), and may have to convert assets such as artwork and sounds, or rewrite code for low-memory phones. This programmer may also have to side-step buggy language implementations, some with little documentation, refactor code, oversee multiple branches of code, rewrite code to scale for a wide variety of screen sizes, and implement special operator guidelines. They may also have to fix bugs that were not discovered in the original release of a game.
Technology programmer
The technology programmer is more likely to be found in larger development studios with specific departments dedicated solely to R&D. Unlike other members of the programming team, the technology programmer usually isn't tied to a specific project or type of development for an extended length of time, and they will typically report directly to a CTO or department head rather than a game producer. As the job title implies, this position is extremely demanding from a technical perspective and requires intimate knowledge of the target platform hardware. Tasks cover a broad range of subjects including the practical implementation of algorithms described in research papers, very low-level assembly optimization and the ability to solve challenging issues pertaining to memory requirements and caching issues during the latter stages of a project. There is a considerable amount of cross-over between this position and some of the others, particularly the graphics programmer.
Generalist
In smaller teams, one or more programmers will often be described as 'Generalists' who will take on the various other roles as needed. Generalists are often engaged in the task of tracking down bugs and determining which subsystem expertise is required to fix them.
Lead game programmer
The lead programmer is ultimately in charge of all programming for the game. It is their job to make sure the various submodules of the game are being implemented properly and to keep track of development from a programming standpoint. A person in this role usually transitions from other aspects of game programming to this role after several years of experience. Despite the title, this person usually has less time for writing code than other programmers on the project as they are required to attend meetings and interface with the client or other leads on the game. However, the lead programmer is still expected to program at least some of the time and is also expected to be knowledgeable in most technical areas of the game. There is often considerable common ground between the roles of technical director and lead programmer, such that the jobs are often covered by one person.
Platforms
Game programmers can specialize in one platform or another, such as the Wii U or Windows. So, in addition to specializing in one game programming discipline, a programmer may also specialize in development on a certain platform. Therefore, one game programmer's title might be "PlayStation 3 3D Graphics Programmer." Some disciplines, such as AI, are transferable to various platforms and needn't be tailored to one system or another. Also, general game development principles such as 3D graphics programming concepts, sound engineering and user interface design are transferable between platforms.
Education
Notably, there are many game programmers with no formal education in the subject, having started out as hobbyists and doing a great deal of programming on their own, for fun, and eventually succeeding because of their aptitude and homegrown experience. However, most job solicitations for game programmers specify a bachelor's degree (in mathematics, physics, computer science, "or equivalent experience").
Increasingly, universities are starting to offer courses and degrees in game programming. Such degrees have considerable overlap with computer science and software engineering degrees.
Salary
Salaries for game programmers vary from company to company and country to country. In general, however, pay for game programming is about the same as for comparable jobs in the business sector. This is despite the fact that game programming is among the most difficult types of programming and usually requires longer hours than mainstream programming.
Results of a 2010 survey in the United States indicate that the average salary for a game programmer is US$95,300 annually. The least experienced programmers, with less than 3 years of experience, make an average annual salary of over $72,000. The most experienced programmers, with more than 6 years of experience, make an average annual salary of over $124,000.
Generally, lead programmers are the most well compensated, though some 3D graphics programmers may challenge or surpass their salaries. According to the same survey above, lead programmers on average earn $127,900 annually.
Job security
Though sales of video games rival other forms of entertainment such as movies, the video game industry is extremely volatile. Game programmers are not insulated from this instability as their employers experience financial difficulty.
Third-party developers, the most common type of video game developers, depend upon a steady influx of funds from the video game publisher. If a milestone or deadline is not met (or for a host of other reasons, like the game is cancelled), funds may become short and the developer may be forced to retrench employees or declare bankruptcy and go out of business. Game programmers who work for large publishers are somewhat insulated from these circumstances, but even the large game publishers can go out of business (as when Hasbro Interactive was sold to Infogrames and several projects were cancelled; or when The 3DO Company went bankrupt in 2003 and ceased all operations). Some game programmers' resumes consist of short stints lasting no more than a year as they are forced to leap from one doomed studio to another. This is why some prefer to consult and are therefore somewhat shielded from the effects of the fates of individual studios.
Languages and tools
Most commercial computer and video games are written primarily in C++, C, and some assembly language. Many games, especially those with complex interactive gameplay mechanics, tax hardware to its limit. As such, highly optimized code is required for these games to run at an acceptable frame rate. Because of this, compiled code is typically used for performance-critical components, such as visual rendering and physics calculations. Almost all PC games also use either the DirectX, OpenGL APIs or some wrapper library to interface with hardware devices.
Various script languages, like Ruby, Lua and Python, are also used for the generation of content such as gameplay and especially AI. Scripts are generally parsed at load time (when the game or level is loaded into main memory) and then executed at runtime (via logic branches or other such mechanisms). They are generally not executed by an interpreter, which would result in much slower execution. Scripts tend to be used selectively, often for AI and high-level game logic. Some games are designed with high dependency on scripts and some scripts are compiled to binary format before game execution. In the optimization phase of development, some script functions will often be rewritten in a compiled language.
Java is used for many web-browser-based games because it is cross-platform, does not usually require installation by the user, and poses fewer security risks compared to a downloaded executable program. Java is also a popular language for mobile-phone-based games. Adobe Flash, which uses the ActionScript language, and JavaScript are popular development tools for browser-based games.
As games have grown in size and complexity, middleware has become increasingly popular within the industry. Middleware provides higher-level functionality and larger feature sets than standard lower-level APIs such as DirectX and OpenGL, for example skeletal animation. In addition to providing more complex technologies, some middleware also makes reasonable attempts to be platform independent, making common conversions from, for example, Microsoft Windows to PS4 much easier. Essentially, middleware is aimed at cutting out as much of the redundancy in the development cycle as possible (for example, writing new animation systems for each game a studio produces), allowing programmers to focus on new content.
Other tools are also essential to game developers: 2D and 3D packages (for example Blender, GIMP, Photoshop, Maya or 3D Studio Max) enable programmers to view and modify assets generated by artists or other production personnel. Source control systems keep source code safe and secure, and make merging easier. IDEs with debuggers (such as Visual Studio) make writing code and tracking down bugs a less painful experience.
See also
List of video game industry people#Programming
Code Monkeys, an animated show about game programmers
Programmer
Game design
Game development tool
Game programming#Tools
Notes
References
External links
Game industry veteran Tom Sloper's advice on game programming
The Programmer at Eurocom
Computer occupations
Game Programmer
|
24432729
|
https://en.wikipedia.org/wiki/Faculty%20for%20Information%20Technology%2C%20Podgorica
|
Faculty for Information Technology, Podgorica
|
The Faculty of Information Technology is a modern academic institution educating professionals in the field of information technology.
Its objective is to provide personnel in the field of information technology for the economy, national services and financial institutions. The syllabus is based on the model of European faculties of information science. The curriculum and syllabus, as well as the complete teaching process, comply with the principles of the Bologna Declaration. Programs from major software companies are included in the syllabus, keeping the knowledge students acquire during their studies up to date.
The Faculty of Information Technology offers one study program with three courses:
Information systems
Software engineering
Computer networks and telecommunications.
The first four semesters are common to all three courses, and in the fifth semester students choose one of the offered courses. Upon successfully completing their studies, students are awarded the academic title of Bachelor of Information Technology; the completed course and the grades achieved are stated in the diploma.
Mediterranean University
Information Technology
|
5034470
|
https://en.wikipedia.org/wiki/Criticism%20of%20Wikipedia
|
Criticism of Wikipedia
|
Most criticism of Wikipedia has been directed towards its content, its community of established users, and its processes. Critics have questioned its factual reliability, the readability and organization of the articles, the lack of methodical fact-checking, and its political bias. Concerns have also been raised about systemic bias along gender, racial, political and national lines. In addition, conflicts of interest arising from corporate campaigns to influence content have also been highlighted. Further concerns include the vandalism and partisanship facilitated by anonymous editing, clique behavior from contributors as well as administrators and other top figures, social stratification between a guardian class and newer users, excessive rule-making, edit warring, and uneven application of policies.
Criticism of content
The reliability of Wikipedia is often questioned. In "Wikipedia: The Dumbing Down of World Knowledge" (2010), journalist Edwin Black characterized the content of articles as a mixture of "truth, half-truth, and some falsehoods". Oliver Kamm, in "Wisdom?: More like Dumbness of the Crowds" (2007), said that articles usually are dominated by the loudest and most persistent editorial voices or by an interest group with an ideological "axe to grind".
In his article "The 'Undue Weight' of Truth on Wikipedia" (2012), Timothy Messer–Kruse criticized the undue-weight policy that deals with the relative importance of sources, observing that it showed Wikipedia's goal was not to present correct and definitive information about a subject but to present the majority opinion of the sources cited. In their article "You Just Type in What You are Looking for: Undergraduates' Use of Library Resources vs. Wikipedia" (2012) in an academic librarianship journal, the authors noted another author's point that omissions within an article might give the reader false ideas about a topic, based upon the incomplete content of Wikipedia.
Wikipedia is sometimes characterized as having a hostile editing environment. In Common Knowledge?: An Ethnography of Wikipedia (2014), Dariusz Jemielniak, a steward for Wikimedia Foundation projects, stated that the complexity of the rules and laws governing editorial content and the behavior of the editors is a burden for new editors and a licence for the "office politics" of disruptive editors. In a follow-up article, Jemielniak said that abridging and rewriting the editorial rules and laws of Wikipedia for clarity of purpose and simplicity of application would resolve the bureaucratic bottleneck of too many rules. In The Rise and Decline of an Open Collaboration System: How Wikipedia's Reaction to Popularity is Causing its Decline (2013), Aaron Halfaker said the over-complicated rules and laws of Wikipedia unintentionally provoked the decline in editorial participation that began in 2009—frightening away new editors who otherwise would contribute to Wikipedia.
There have also been works that describe the possible misuse of Wikipedia. In "Wikipedia or Wickedpedia?" (2008), the Hoover Institution said Wikipedia is an unreliable resource for correct knowledge, information, and facts about a subject, because, as an open source website, the editorial content of the articles is readily subjected to manipulation and propaganda. The 2014 edition of the Massachusetts Institute of Technology's official student handbook, Academic Integrity at MIT, informs students that Wikipedia is not a reliable academic source, stating, "the bibliography published at the end of the Wikipedia entry may point you to potential sources. However, do not assume that these sources are reliable – use the same criteria to judge them as you would any other source. Do not consider the Wikipedia bibliography as a replacement for your own research."
Accuracy of information
Not authoritative
Wikipedia acknowledges that the encyclopedia should not be used as a primary source for research, either academic or informational. The British librarian Philip Bradley said, "the main problem is the lack of authority. With printed publications, the publishers have to ensure that their data are reliable, as their livelihood depends on it. But with something like this, all that goes out the window." Likewise, Robert McHenry, editor-in-chief of Encyclopædia Britannica from 1992 to 1997, said that readers of Wikipedia articles cannot know who wrote the article they are reading—it might have been written by an expert in the subject matter or by an amateur. In November 2015, Wikipedia co-founder Larry Sanger told Zach Schwartz in Vice: "I think Wikipedia never solved the problem of how to organize itself in a way that didn't lead to mob rule" and that since he left the project, "People that I would say are trolls sort of took over. The inmates started running the asylum."
Comparative study of science articles
In "Internet Encyclopaedias Go Head-to-head", a 2005 article published in the scientific journal Nature, the results of a blind experiment (single-blind study), which compared the factual and informational accuracy of entries from Wikipedia and the Encyclopædia Britannica, were reported. The 42-entry sample included science articles and biographies of scientists, which were compared for accuracy by anonymous academic reviewers; they found that the average Wikipedia entry contained four errors and omissions, while the average Encyclopædia Britannica entry contained three errors and omissions. The study concluded that Wikipedia and Britannica were comparable in terms of the accuracy of its science entries. Nevertheless, the reviewers had two principal criticisms of the Wikipedia science entries: (i) thematically confused content, without an intelligible structure (order, presentation, interpretation); and (ii) that undue weight is given to controversial, fringe theories about the subject matter.
The dissatisfaction of the Encyclopædia Britannica editors led to Nature publishing additional survey documentation that substantiated the results of the comparative study. Based upon the additional documents, Encyclopædia Britannica denied the validity of the study, stating it was flawed, because the Britannica extracts were compilations that sometimes included articles written for the youth version of the encyclopedia. In turn, Nature acknowledged that some Britannica articles were compilations, but denied that such editorial details invalidated the conclusions of the comparative study of the science articles.
The editors of Britannica also said that while the Nature study showed that the rate of error between the two encyclopedias was similar, the errors in a Wikipedia article usually were errors of fact, while the errors in a Britannica article were errors of omission. According to the editors of Britannica, Britannica was more accurate than Wikipedia in that respect. Subsequently, Nature magazine rejected the Britannica response with a rebuttal of the editors' specific objections about the research method of the study.
Lack of methodical fact-checking
Inaccurate information that is not obviously false may persist in Wikipedia for a long time before it is challenged. The most prominent cases reported by mainstream media involved biographies of living people.
The Wikipedia Seigenthaler biography incident demonstrated that the subject of a biographical article must sometimes fix blatant lies about his own life. In May 2005, an anonymous user edited the biographical article on American journalist and writer John Seigenthaler so that it contained several false and defamatory statements. The inaccurate claims went unnoticed from May until September 2005 when they were discovered by Victor S. Johnson Jr., a friend of Seigenthaler. Wikipedia content is often mirrored at sites such as Answers.com, which means that incorrect information can be replicated alongside correct information through a number of web sources. Such information can thereby develop false authority due to its presence at such sites.
In another example, on March 2, 2007, MSNBC.com reported that then-New York Senator Hillary Clinton had been incorrectly listed for 20 months in her Wikipedia biography as having been valedictorian of her class of 1969 at Wellesley College, when in fact she was not (though she did speak at commencement). The article included a link to the Wikipedia edit, where the incorrect information was added on July 9, 2005. The inaccurate information was removed within 24 hours after the MSNBC.com report appeared.
Attempts to perpetrate hoaxes may not be confined to editing existing Wikipedia articles, but can also include creating new articles. In October 2005, Alan Mcilwraith, a call center worker from Scotland, created a Wikipedia article in which he wrote that he was a highly decorated war hero. The article was quickly identified as a hoax by other users and deleted.
There have also been instances of users deliberately inserting false information into Wikipedia in order to test the system and demonstrate its alleged unreliability. Gene Weingarten, a journalist, ran such a test in 2007, in which he inserted false information into his own Wikipedia article; it was removed 27 hours later by a Wikipedia editor. Wikipedia considers the deliberate insertion of false and misleading information to be vandalism.
Neutral point of view and conflicts of interest
Wikipedia regards the concept of a neutral point of view as one of its non-negotiable principles; however, it acknowledges that such a concept has its limitations: its NPOV policy states that articles should be "as far as possible" written "without editorial bias". Mark Glaser, a journalist, also wrote that this may be an impossible ideal due to the inevitable biases of editors. Research has shown that articles can maintain bias in spite of the neutral point of view policy through word choice, the presentation of opinions and controversial claims as facts, and framing bias.
In August 2007, a tool called WikiScanner—developed by Virgil Griffith, a visiting researcher from the Santa Fe Institute in New Mexico—was released to match edits to the encyclopedia by non-registered users with an extensive database of IP addresses. News stories appeared about IP addresses from various organizations such as the Central Intelligence Agency, the National Republican Congressional Committee, the Democratic Congressional Campaign Committee, Diebold, Inc. and the Australian government being used to make edits to Wikipedia articles, sometimes of an opinionated or questionable nature. Another story stated that an IP address from the BBC itself had been used to vandalize the article on George W. Bush. The BBC quoted a Wikipedia spokesperson as praising the tool: "We really value transparency and the scanner really takes this to another level. Wikipedia Scanner may prevent an organisation or individuals from editing articles that they're really not supposed to." Not everyone hailed WikiScanner as a success for Wikipedia. Oliver Kamm, in a column for The Times, argued instead that:
The WikiScanner is thus an important development in bringing down a pernicious influence on our intellectual life. Critics of the web decry the medium as the cult of the amateur. Wikipedia is worse than that; it is the province of the covert lobby. The most constructive course is to stand on the sidelines and jeer at its pretensions.
WikiScanner reveals conflicts of interest only when the editor does not have a Wikipedia account and their IP address is used instead. Conflict-of-interest editing done by editors with accounts is not detected, since those edits are anonymous to everyone except some Wikipedia administrators.
Scientific disputes
The 2005 Nature study also gave two brief examples of challenges that Wikipedian science writers purportedly faced on Wikipedia. The first concerned the addition of a section on violence to the schizophrenia article, which was little more than a "rant" about the need to lock people up, in the view of one of the article's regular editors, neuropsychologist Vaughan Bell. He said that editing it stimulated him to look up the literature on the topic.
Another dispute involved the climate researcher William Connolley, a Wikipedia editor who was opposed by others. The topic in this second dispute was "language pertaining to the greenhouse effect", and The New Yorker reported that this dispute, which was far more protracted, had led to arbitration, which took three months to produce a decision. The outcome of arbitration was for Connolley to be restricted to undoing edits on articles once per day.
Exposure to political operatives and advocates
While Wikipedia policy requires articles to have a neutral point of view, it is not immune from attempts by outsiders (or insiders) with an agenda to place a spin on articles. In January 2006, it was revealed that several staffers of members of the U.S. House of Representatives had embarked on a campaign to cleanse their respective bosses' biographies on Wikipedia, as well as inserting negative remarks on political opponents. References to a campaign promise by Martin Meehan to surrender his seat in 2000 were deleted, and negative comments were inserted into the articles on United States Senator Bill Frist of Tennessee, and Eric Cantor, a congressman from Virginia. Numerous other changes were made from an IP address assigned to the House of Representatives. In an interview, Wikipedia co-founder Jimmy Wales remarked that the changes were "not cool".
Larry Delay and Pablo Bachelet wrote that from their perspective, some articles dealing with Latin American history and groups (such as the Sandinistas and Cuba) lack political neutrality and are written from a sympathetic Marxist perspective which treats socialist dictatorships favorably at the expense of alternative positions.
In 2008, the pro-Israel group Committee for Accuracy in Middle East Reporting in America (CAMERA) organized an e-mail campaign to encourage readers to correct perceived Israel-related biases and inconsistencies in Wikipedia. CAMERA argued the excerpts were unrepresentative and that it had explicitly campaigned merely "toward encouraging people to learn about and edit the online encyclopedia for accuracy". Defenders of CAMERA and the competing group, Electronic Intifada, went into mediation. Israeli diplomat David Saranga said Wikipedia is generally fair in regard to Israel. When it was pointed out that the entry on Israel mentioned the word "occupation" nine times, whereas the entry on the Palestinian people mentioned "terror" only once, he responded, "It means only one thing: Israelis should be more active on Wikipedia. Instead of blaming it, they should go on the site much more, and try and change it."
Israeli political commentator Haviv Rettig Gur, reviewing widespread perceptions in Israel of systemic bias in Wikipedia articles, has argued that there are deeper structural problems creating this bias: anonymous editing favors biased results, especially if the editors organize concerted campaigns of defamation as has been done in articles dealing with Arab-Israeli issues, and current Wikipedia policies, while well-meant, have proven ineffective in handling this.
On August 31, 2008, The New York Times ran an article detailing the edits made to the biography of Alaska governor Sarah Palin in the wake of her nomination as the running mate of Arizona Senator John McCain. During the 24 hours before the McCain campaign announcement, 30 edits, many of them adding flattering details, were made to the article by the user "Young_Trigg". This person later acknowledged working on the McCain campaign, and having several other user accounts.
In November 2007, libelous accusations were made against two politicians from southwestern France, Jean-Pierre Grand and Hélène Mandroux-Colas, on their Wikipedia biographies. Grand asked the president of the French National Assembly and Prime Minister to reinforce the legislation on the penal responsibility of Internet sites and of authors who peddle false information in order to cause harm. Senator Jean Louis Masson then requested the Minister of Justice to tell him whether it would be possible to increase the criminal responsibilities of hosting providers, site operators, and authors of libelous content; the minister declined to do so, recalling the existing rules in the LCEN law (see Internet censorship in France).
On August 25, 2010, the Toronto Star reported that the Canadian "government is now conducting two investigations into federal employees who have taken to Wikipedia to express their opinion on federal policies and bitter political debates."
In 2010, Al Jazeera's Teymoor Nabili suggested that the article Cyrus Cylinder had been edited for political purposes by "an apparent tussle of opinions in the shadowy world of hard drives and 'independent' editors that comprise the Wikipedia industry." He suggested that, after the Iranian presidential election of 2009 and ensuing "anti-Iranian activities", a "strenuous attempt to portray the cylinder as nothing more than the propaganda tool of an aggressive invader" was visible. According to his analysis, the edits made during 2009 and 2010 represented "a complete dismissal of the suggestion that the cylinder, or Cyrus' actions, represent concern for human rights or any kind of enlightened intent," in stark contrast to Cyrus' own reputation as documented in the Old Testament and among the people of Babylon.
Commandeering or sanitizing articles
Articles of particular interest to an editor or group of editors are sometimes modified based on these editors' respective points of views. Some companies and organizations—such as Sony, Diebold, Nintendo, Dell, the CIA, and the Church of Scientology—as well as individuals, such as United States Congressional staffers, were all shown to have modified the Wikipedia pages about themselves in order to present a point of view that describes them positively; these organizations may have editors who revert negative changes as soon as these changes are submitted.
The Chinese Wikipedia article on the Tiananmen Square massacre was rewritten to describe it as necessary to "quell the counterrevolutionary riots" and Taiwan was described as "a province in the People’s Republic of China". According to the BBC, "there are indications that [such edits] are not all necessarily organic, nor random" and were in fact orchestrated by the Chinese Communist Party.
Quality of presentation
Quality of articles on U.S. history
In the essay, "Can History be Open Source?: Wikipedia and the Future of the Past" (2006), the academic historian Roy Rosenzweig criticized the encyclopedic content and writing style used in Wikipedia, for not distinguishing subjects that are important from subjects that are merely sensational; that Wikipedia is "surprisingly accurate in reporting names, dates, and events in U.S. history"; and that most of the factual errors he found "were small and inconsequential", some of which "simply repeat widely held, but inaccurate, beliefs", which are also repeated in the Microsoft Encarta encyclopedia and in the Encyclopædia Britannica. Yet Rosenzweig's major criticism is that:
Good historical writing requires not just factual accuracy but also a command of the scholarly literature, persuasive analysis and interpretations, and clear and engaging prose. By those measures, American National Biography Online easily outdistances Wikipedia.
Rosenzweig also criticized the "waffling encouraged by the [neutral point of view] policy [which] means that it is hard to discern any overall interpretive stance in Wikipedia history [articles]", and quoted the historical conclusion of the biography of William Clarke Quantrill, a Confederate guerrilla in the United States Civil War, as an example of weasel-word waffling:
Some historians... remember [Quantrill] as an opportunistic, bloodthirsty outlaw, while other [historians] continue to view him as a daring soldier and local folk hero.
Rosenzweig contrasted Wikipedia's Abraham Lincoln article with James M. McPherson's article on Lincoln in American National Biography Online. He reports that each entry was essentially accurate in covering the major episodes of President Lincoln's life. McPherson—a Princeton professor and winner of the Pulitzer Prize—showed "richer contextualization", as well as "his artful use of quotations to capture Lincoln’s voice" and "his ability to convey a profound message in a handful of words." By contrast, Wikipedia's prose was "both verbose and dull" and thus difficult to read, because "the skill and confident judgment of a seasoned historian" are absent from the antiquarian writing style of Wikipedia, as opposed to the writing style used by professional historians in the American Heritage magazine. Rosenzweig also noted that while Wikipedia usually provides many references, these are not always the most accurate ones.
Quality of medical articles
In the article "Wikipedia Cancer Information Accurate," a study of medical articles, Yaacov Lawrence of the Kimmel Cancer Center of Thomas Jefferson University found that the cancer entries were mostly accurate. However, Wikipedia's articles were written in college-level prose, as opposed to in the easier-to-understand ninth-grade-level prose found in the Physician Data Query (PDQ) of the National Cancer Institute. According to Lawrence, "Wikipedia’s lack of readability may reflect its varied origins and haphazard editing."
In its 2007 article "Fact or Fiction? Wikipedia’s Variety of Contributors is Not Only a Strength," the magazine The Economist said the quality of the writing in Wikipedia articles usually indicates the quality of the editorial content: "Inelegant or ranting prose usually reflects muddled thoughts and incomplete information."
The Wall Street Journal debate
In the September 12, 2006, edition of The Wall Street Journal, Jimmy Wales debated with Dale Hoiberg, editor-in-chief of Encyclopædia Britannica. Hoiberg focused on a need for expertise and control in an encyclopedia and cited Lewis Mumford that overwhelming information could "bring about a state of intellectual enervation and depletion hardly to be distinguished from massive ignorance." Wales emphasized Wikipedia's differences, and asserted that openness and transparency lead to quality. Hoiberg said he "had neither the time nor space to respond to [criticisms]" and "could corral any number of links to articles alleging errors in Wikipedia", to which Wales responded: "No problem! Wikipedia to the rescue with a fine article", and included a link to the Wikipedia article about criticism of Wikipedia.
Systemic bias in coverage
Wikipedia has been accused of systemic bias, which is to say its general nature leads, without necessarily any conscious intention, to the propagation of various prejudices. Although many articles in newspapers have concentrated on minor factual errors in Wikipedia articles, there are also concerns about large-scale, presumably unintentional effects from the increasing influence and use of Wikipedia as a research tool at all levels. In an article in the Times Higher Education magazine (London), philosopher Martin Cohen describes Wikipedia as having "become a monopoly" with "all the prejudices and ignorance of its creators," which he calls a "youthful cab-driver's" perspective. Cohen concludes that "[t]o control the reference sources that people use is to control the way people comprehend the world. Wikipedia may have a benign, even trivial face, but underneath may lie a more sinister and subtle threat to freedom of thought." That freedom is undermined by what he sees as what matters on Wikipedia, "not your sources but the 'support of the community'."
Researchers from Washington University in St. Louis developed a statistical model to measure systematic bias in the behavior of Wikipedia's users regarding controversial topics. The authors focused on behavioral changes of the encyclopedia's administrators after assuming the post, writing that systematic bias occurred after the fact.
Critics also point to the tendency to cover topics in detail disproportionate to their importance. For example, Stephen Colbert once mockingly praised Wikipedia for having a longer entry on 'lightsabers' than it does on the 'printing press'. Dale Hoiberg, the editor-in-chief of Encyclopædia Britannica, said "People write of things they're interested in, and so many subjects don't get covered; and news events get covered in great detail. In the past, the entry on Hurricane Frances was more than five times the length of that on Chinese art, and the entry on Coronation Street was twice as long as the article on Tony Blair."
This approach of comparing two articles, one about a traditionally encyclopedic subject and the other about one more popular with the crowd, has been called "wikigroaning". A defense of inclusion criteria is that the encyclopedia's longer coverage of pop culture does not deprive the more "worthy" or serious subjects of space.
Notability of article topics
Wikipedia's notability guidelines, which are used by editors to determine if a subject merits its own article, and the application thereof, are the subject of much criticism. A Wikipedia editor rejected a draft article about Donna Strickland before she won the Nobel Prize in Physics in 2018, because no independent sources were given to show that Strickland was sufficiently notable by Wikipedia's standards. Journalists highlighted this as an indicator of the limited visibility of women in science compared to their male colleagues.
The gender bias on Wikipedia is well documented, and has prompted a movement to increase the number of notable women on Wikipedia through the Women in Red WikiProject. In an article entitled "Seeking Disambiguation", Annalisa Merelli interviewed Catalina Cruz, a candidate for office in Queens, New York in the 2018 election who had the notorious SEO disadvantage of having the same name as a porn star with a Wikipedia page. Merelli also interviewed the Wikipedia editor who wrote the candidate's ill-fated article (which was deleted, then restored, after she won the election). She described the Articles for Deletion process, and pointed to other candidates who had pages on the English Wikipedia despite never having held office.
Novelist Nicholson Baker, critical of deletionism, writes: "There are quires, reams, bales of controversy over what constitutes notability in Wikipedia: nobody will ever sort it out."
Journalist Timothy Noah wrote of his treatment: "Wikipedia's notability policy resembles U.S. immigration policy before 9/11: stringent rules, spotty enforcement". In the same article, Noah mentions that the Pulitzer Prize-winning writer Stacy Schiff was not considered notable enough for a Wikipedia entry until she wrote her article "Know it All" about the Wikipedia Essjay controversy.
On a more generic level, a 2014 study found no correlation between characteristics of a given Wikipedia page about an academic and the academic's notability as determined by citation counts. The metrics of each Wikipedia page examined included length, number of links to the page from other articles, and number of edits made to the page. This study also found that Wikipedia did not cover notable ISI highly cited researchers properly.
In 2020, Wikipedia was criticized for the amount of time it took for an article about Theresa Greenfield, a candidate for the 2020 United States Senate election in Iowa, to leave Wikipedia's Articles for Creation process and become published. Particularly, the criteria for notability were criticized, with The Washington Post reporting: "Greenfield is a uniquely tricky case for Wikipedia because she doesn’t have the background that most candidates for major political office typically have (like prior government experience or prominence in business). Even if Wikipedia editors could recognize she was prominent, she had a hard time meeting the official criteria for notability." Jimmy Wales also criticized the long process on his talk page.
Partisanship
U.S. commentators, mostly politically conservative ones, have suggested that a politically liberal viewpoint is predominant in the English Wikipedia. Andrew Schlafly created Conservapedia because of his perception that Wikipedia contained a liberal bias. Conservapedia's editors have compiled a list of alleged examples of liberal bias in Wikipedia. In 2007, an article in The Christian Post criticised Wikipedia's coverage of intelligent design, saying it was biased and hypocritical. Lawrence Solomon of National Review considered the Wikipedia articles on subjects like global warming, intelligent design, and Roe v. Wade all to be slanted in favor of liberal views. In a September 2010 issue of the conservative weekly Human Events, Rowan Scarborough presented a critique of Wikipedia's coverage of American politicians prominent in the approaching U.S. midterm elections as evidence of systemic liberal bias. Scarborough compares the biographical articles of liberal and conservative opponents in Senate races in the Alaska Republican primary and the Delaware and Nevada general election, emphasizing the quantity of negative coverage of Tea Party movement-endorsed candidates. He also cites criticism by Lawrence Solomon and quotes in full the lead section of Wikipedia's article on Conservapedia as evidence of an underlying bias.
In 2006, Wikipedia co-founder Jimmy Wales said: "The Wikipedia community is very diverse, from liberal to conservative to libertarian and beyond. If averages mattered, and due to the nature of the wiki software (no voting) they almost certainly don't, I would say that the Wikipedia community is slightly more liberal than the U.S. population on average, because we are global and the international community of English speakers is slightly more liberal than the U.S. population. There are no data or surveys to back that." Shane Greenstein and Feng Zhu analyzed 2012 era Wikipedia articles on U.S. politics, going back a decade, and wrote a study arguing the more contributors there were to an article, the less biased the article would be, and that based on a study of frequent collocations fewer articles "leaned Democrat" than was the case in Wikipedia's early years. Sorin Adam Matei, a professor at Purdue University, said that "for certain political topics, there's a central-left bias. There's also a slight, when it comes to more political topics, counter-cultural bias. It's not across the board, and it's not for all things."
In November 2021, the English Wikipedia's entry for "Mass killings under communist regimes" was nominated for deletion, with some editors arguing that it has "a biased 'anti-Communist' point of view", that "it should not resort to 'simplistic presuppositions that events are driven by any specific ideology'", and that "by combining different elements of research to create a 'synthesis', this constitutes original research and therefore breaches Wikipedia rules." This was criticized by historian Robert Tombs, who called it "morally indefensible, at least as bad as Holocaust denial, because 'linking ideology and killing' is the very core of why these things are important. I have read the Wikipedia page, and it seems to me careful and balanced. Therefore attempts to remove it can only be ideologically motivated – to whitewash Communism." Other Wikipedia editors and users on social media opposed the deletion of the article. The article's deletion nomination received considerable attention from conservative media. The Heritage Foundation, an American conservative think tank, called the arguments made in favor of deletion "absurd and ahistorical". On December 1, 2021, a panel of four administrators found that the discussion yielded no consensus, meaning that the status quo was retained, and the article was not deleted. The article's deletion discussion was the largest in Wikipedia's history.
National or corporate bias
In 2008, Tim Anderson, a senior lecturer in political economy at the University of Sydney, said Wikipedia administrators display an American-focused bias in their interactions with editors and their determinations of which sources are appropriate for use on the site. Anderson was outraged after several of the sources he used in his edits to the Hugo Chávez article, including Venezuela Analysis and Z Magazine, were disallowed as "unusable". Anderson also described Wikipedia's neutral point of view policy to ZDNet Australia as "a facade" and that Wikipedia "hides behind a reliance on corporate media editorials".
Racial bias
Wikipedia has been charged with having a systemic racial bias in its coverage, due to an underrepresentation of people of colour as editors. The President of Wikimedia D.C., James Hare, noted that "a lot of black history is left out" of Wikipedia, due to articles predominately being written by white editors. Articles that do exist on African topics are, according to some critics, largely edited by editors from Europe and North America and thus reflect their knowledge and consumption of media, which "tend to perpetuate a negative image" of Africa. Maira Liriano, of the Schomburg Center for Research in Black Culture, has argued that the lack of information regarding black history on Wikipedia "makes it seem like it's not important." San Francisco Poet Laureate Alejandro Murguía has stressed how it is important for Latinos to be part of Wikipedia "because it is a major source of where people get their information." In 2010, an analysis of Wikipedia edits revealed that Asia, as the most populous continent, was represented in only 16.67% of edits. Africa (6.35%) and South America (2.58%) were likewise underrepresented.
In 2018, the Southern Poverty Law Center criticized Wikipedia for being "vulnerable to manipulation by neo-Nazis, white nationalists and racist academics seeking a wider audience for extreme views." According to the SPLC, "[c]ivil POV-pushers can disrupt the editing process by engaging other users in tedious and frustrating debates or tie up administrators in endless rounds of mediation. Users who fall into this category include racialist academics and members of the human biodiversity, or HBD, blogging community. ... In recent years, the proliferation of far-right online spaces, such as white nationalist forums, alt-right boards and HBD blogs, has created a readymade pool of users that can be recruited to edit on Wikipedia en masse. ... The presence of white nationalists and other far-right extremists on Wikipedia is an ongoing problem that is unlikely to go away in the near future given the rightward political shift in countries where the majority of the site’s users live." The SPLC cited the article "Race and intelligence" as an example of the alt-right influence on Wikipedia, stating that at that time the article presented a "false balance" between fringe racialist views and the "mainstream perspective in psychology."
Gender bias and sexism
Wikipedia has a longstanding controversy concerning gender bias and sexism. Gender bias on Wikipedia refers to the finding that between 84 and 91 percent of Wikipedia editors are male, which allegedly leads to systemic bias. Wikipedia has been criticized by some journalists and academics for lacking not only women contributors but also extensive and in-depth encyclopedic attention to many topics regarding gender. Sue Gardner, former executive director of the Foundation, said that increasing diversity was about making the encyclopedia "as good as it could be". Factors cited as possibly discouraging women from editing included the "obsessive fact-loving realm", associations with the "hard-driving hacker crowd", and the necessity to be "open to very difficult, high-conflict people, even misogynists."
In 2011, the Wikimedia Foundation set a goal of increasing the proportion of female contributors to 25 percent by 2015. In August 2013, Gardner conceded defeat: "I didn't solve it. We didn't solve it. The Wikimedia Foundation didn't solve it. The solution won't come from the Wikimedia Foundation." In August 2014, Wikipedia co-founder Jimmy Wales acknowledged in a BBC interview the failure of Wikipedia to fix the gender gap and announced the Wikimedia Foundation's plans for "doubling down" on the issue. Wales said the Foundation would be open to more outreach and more software changes.
Writing in the Notices of the American Mathematical Society, Marie Vitulli states that "mathematicians have had a difficult time when writing biographies of women mathematicians," and she describes the aggressiveness of editors and administrators in deleting such articles.
Criticism was presented on this topic in The Signpost (WP:THREATENING2MEN).
Institutional bias
Wikipedia has been criticized for reflecting the bias and influence of media that are seen as reliable due to their dominance, and for being a site of conflict between entrenched or special institutional interests. Public relations firms and interest lobbies, corporate, political and otherwise, have been accused of working systemically to distort Wikipedia's articles in their respective interests.
Firearms-related articles
Wikipedia has been criticized for issues related to bias in firearms-related articles. According to critics, systematic bias arises from the tendency of the editors most active in maintaining firearms-related articles to also be gun enthusiasts, and firearms-related articles are dominated by technical information while issues of the social impact and regulation of firearms are relegated to separate articles. Communications were facilitated by a "WikiProject," called "WikiProject Firearms", an on-wiki group of editors with a common interest. The alleged pro-gun bias drew increased attention after the Stoneman Douglas High School shooting in Parkland, Florida, in February 2018. The Wikimedia Foundation defended itself from allegations of being host to opinion-influencing campaigns of pro-gun groups, saying that the contents are always being updated and improved.
Skeptical bias
In 2014, supporters of holistic healing and energy psychology began a Change.org petition asking for "true scientific discourse" on Wikipedia, complaining that "much of the information [on Wikipedia] related to holistic approaches to healing is biased, misleading, out-of-date, or just plain wrong". In response, Jimmy Wales said Wikipedia covers only works that are published in respectable scientific journals.
Wikipedia has been accused of being biased against views outside of the scientific mainstream due to influence from the skeptical movement. Social scientist Brian Martin examined the influence of skeptics on Wikipedia by looking for parallels between Wikipedia entries and characteristic techniques used by skeptics, finding that the result "does not prove that Skeptics are shaping Wikipedia but is compatible with that possibility."
Sexual content
Wikipedia has been criticized for allowing graphic sexual content such as images and videos of masturbation and ejaculation as well as photos from hardcore pornographic films found on its articles. Child protection campaigners say graphic sexual content appears on many Wikipedia entries, displayed without any warning or age verification.
The Wikipedia article Virgin Killer—a 1976 album from German heavy metal band Scorpions—features a picture of the album's original cover, which depicts a naked prepubescent girl. In December 2008, the Internet Watch Foundation, a nonprofit, nongovernment-affiliated organization, added the article to its blacklist, criticizing the inclusion of the picture as "distasteful". As a result, access to the article was blocked for four days by most Internet service providers in the United Kingdom. Seth Finkelstein writing for The Guardian argues that the debate over the album cover masks a structural lack of accountability on Wikipedia, in particular when it comes to sexual content. For example, the deletion by Wikipedia co-founder Jimmy Wales of images of lolicon versions of the character Wikipe-tan created a minor controversy on the topic. The deletion was taken as endorsement of the non-lolicon images of Wikipe-tan, which Wales later had to explicitly deny: "I don't like Wikipe-tan and never have." Finkelstein sees Wikipedia as composed of fiefdoms, which makes it difficult for the Wikipedia community to deal with such issues, and sometimes necessitates top-down intervention.
Exposure to vandals
As an online encyclopedia which almost anyone can edit, Wikipedia has had problems with vandalism of articles, which range from blanking articles to inserting profanities, hoaxes, or nonsense. Wikipedia has a range of tools available to users and administrators in order to fight against vandalism, including blocking and banning of vandals and automated bots that detect and repair vandalism. Supporters of the project argue that the vast majority of vandalism on Wikipedia is reverted within a short time, and a study by Fernanda Viégas of the MIT Media Lab and Martin Wattenberg and Kushal Dave of IBM Research found that most vandal edits were reverted within around five minutes; however they state that "it is essentially impossible to find a crisp definition of vandalism." While most instances of page blanking or the addition of offensive material are soon reverted, less obvious vandalism, or vandalism to a little viewed article, has remained for longer periods.
A 2007 conference paper estimated that 1 in 271 articles had some "damaged" content. Most of the damage involved nonsense; 20% involved actual misinformation. It reported that 42% of damage gets repaired before any reader clicked on the article, and 80% before 30 people did so.
Privacy concerns
Most privacy concerns refer to cases of government or employer data gathering; or to computer or electronic monitoring; or to trading data between organizations. According to James Donnelly and Jenifer Haeckl, "the Internet has created conflicts between personal privacy, commercial interests and the interests of society at large". Balancing the rights of all concerned as technology alters the social landscape will not be easy. It "is not yet possible to anticipate the path of the common law or governmental regulation" regarding this problem.
The concern in the case of Wikipedia is the right of a private citizen to remain private; to remain a "private citizen" rather than a "public figure" in the eyes of the law. It is somewhat of a battle between the right to be anonymous in cyberspace and the right to be anonymous in real life ("meatspace"). A particular problem occurs in the case of an individual who is relatively unimportant and for whom there exists a Wikipedia page against their wishes.
In 2005, Agence France-Presse quoted Daniel Brandt, the Wikipedia Watch owner, as saying that "the basic problem is that no one, neither the trustees of Wikimedia Foundation, nor the volunteers who are connected with Wikipedia, consider themselves responsible for the content."
In January 2006, a German court ordered the German Wikipedia shut down within Germany because it stated the full name of Boris Floricic, aka "Tron", a deceased hacker who was formerly with the Chaos Computer Club. More specifically, the court ordered that the URL within the German domain may no longer redirect to the encyclopedia's servers in Florida, although German readers were still able to use the US-based URL directly, and there was virtually no loss of access on their part. The court order arose out of a lawsuit filed by Floricic's parents, demanding that their son's surname be removed from Wikipedia. The next month, on February 9, 2006, the injunction against Wikimedia Deutschland was overturned, with the court rejecting the notion that Tron's right to privacy or that of his parents was being violated.
Criticism of the community
Role of Jimmy Wales
The community of Wikipedia editors has been criticized for placing an irrational emphasis on Jimmy Wales as a person. Wales's role in personally determining the content of some articles has also been criticized as contrary to the independent spirit that Wikipedia supposedly has gained. In early 2007, Wales dismissed the criticism of the Wikipedia model: "I am unaware of any problems with the quality of discourse on the site. I don't know of any higher-quality discourse anywhere."
Conflict of interest cases
A Business Insider article wrote about a controversy in September 2012 where two Wikimedia Foundation employees were found to have been "running a PR business on the side and editing Wikipedia on behalf of their clients."
Unfair treatment of women
In 2015, The Atlantic published a story by Emma Paling about a contributor who was able to obtain no relief from the Arbitration Committee for off-site harassment. Paling quotes a then-sitting Arbitrator speaking about bias against women on the Arbitration Committee.
In the online magazine Slate, David Auerbach criticized the Arbitration Committee's decision to block a woman indefinitely without simultaneously blocking her "chief antagonists" in the December 2014 Gender Gap Task Force case. He mentions his own experience with what he calls "the unblockables": abrasive editors who can get away with misconduct despite complaints against them because they have enough supporters, and says that he had observed a "general indifference or even hostility to outside opinion" on the English Wikipedia. Auerbach considers the systematic defense of vulgar language use by insiders as a symptom of the toxicity he describes.
In January 2015, The Guardian reported that the Arbitration Committee had banned five feminist editors from gender-related articles in a case related to the Gamergate controversy, while including quotes from a Wikipedia editor alleging unfair treatment. Other commentators, including from Gawker and ThinkProgress, provided additional analysis while sourcing from The Guardian's story. Reports in The Washington Post, Slate and Social Text described these articles as "flawed" or factually inaccurate, pointing out that the Arbitration case had not concluded at the time of publishing and that no editor had been banned. After the result was published, Gawker wrote that "ArbCom ruled to punish six editors who could be broadly classified as 'anti-Gamergate' and five who are 'pro-Gamergate'." All of the supposed feminist editors were among those punished, with one of them being the sole editor banned as a result of the case. An article called "ArbitrationGate" regarding this situation was created (and quickly deleted) on Wikipedia, while The Guardian later issued a correction to its article. The Committee and the Wikimedia Foundation issued press statements saying that the Gamergate case was in response to the atmosphere of the Gamergate article resembling a "battlefield" due to "various sides of the discussion [having] violated community policies and guidelines on conduct", and that the committee was fulfilling its role to "uphold a civil, constructive atmosphere" on Wikipedia. The committee also wrote that it "does not rule on the content of articles, or make judgements on the personal views of parties to the case". Michael Mandiberg, writing in Social Text, remained unconvinced.
Croatian Wikipedia
On the Croatian Wikipedia, a group of administrators were criticized for blocking Wikipedians in favor of LGBT rights. In an interview given to Index.hr, Robert Kurelić, a professor of history at the Juraj Dobrila University of Pula, has commented that "the Croatian Wikipedia is only a tool used by its administrators to promote their own political agendas, giving false and distorted facts". As two particularly prominent examples he listed the Croatian Wikipedia's coverage of Istrianism (a regionalist movement in Istria, a region mostly located in Croatia), defined as a "movement fabricated to reduce the number of Croats", and antifašizam (anti-fascism), which according to him is defined as the opposite of what it really means. Kurelić further advised "that it would be good if a larger number of people got engaged and started writing on Wikipedia", because "administrators want to exploit high-school and university students, the most common users of Wikipedia, to change their opinions and attitudes, which presents a serious issue".
In 2013, Croatia's Minister of Science, Education and Sports at the time, Željko Jovanović, called for pupils and students in Croatia to avoid using the Croatian Wikipedia. In an interview given to Novi list, Jovanović said that "the idea of openness and relevance as a knowledge source that Wikipedia could and should represent has been completely discredited – which, for certain, has never been the goal of Wikipedia's creators nor the huge number of people around the world who share their knowledge and time using that medium. Croatian pupils and students have been wronged by this, so we have to warn them, unfortunately, that a large part of the content of the Croatian version of Wikipedia is not only dubious but also [contains] obvious forgeries, and therefore we invite them to use more reliable sources of information, which include Wikipedia in English and in other major languages of the world." Jovanović has also commented on the Croatian Wikipedia editors – calling them a "minority group that has usurped the right to edit the Croatian-language Wikipedia".
Lack of verifiable identities
Scandals involving administrators and arbitrators
David Boothroyd, a Wikipedia editor and a member of the UK Labour Party, created controversy in 2009 when Wikipedia Review contributor "Tarantino" discovered that he had engaged in sock puppetry, editing under the accounts "Dbiv", "Fys", and "Sam Blacketer", none of which acknowledged his real identity. After earning administrator status with one account, then losing it for inappropriate use of the administrative tools, Boothroyd regained administrator status with the Sam Blacketer sockpuppet account in April 2007. Later in 2007, Boothroyd's Sam Blacketer account became part of the English Wikipedia's Arbitration Committee. Under the Sam Blacketer account, Boothroyd edited many articles related to United Kingdom politics, including that of rival Conservative Party leader David Cameron. Boothroyd then resigned as an administrator and as an arbitrator.
Essjay controversy
In July 2006, The New Yorker ran a feature by Stacy Schiff about "a highly credentialed Wikipedia editor". The initial version of the article included an interview with a Wikipedia administrator using the pseudonym Essjay, who described himself as a tenured professor of theology. Essjay's Wikipedia user page, now removed, said the following:
I am a tenured professor of theology at a private university in the eastern United States; I teach both undergraduate and graduate theology. I have been asked repeatedly to reveal the name of the institution, however, I decline to do so; I am unsure of the consequences of such an action, and believe it to be in my best interests to remain anonymous.
Essjay also said he held four academic degrees: Bachelor of Arts in Religious Studies (B.A.), Master of Arts in Religion (M.A.R.), Doctorate of Philosophy in Theology (Ph.D.), and Doctorate in Canon Law (JCD). Essjay specialized in editing articles about religion on Wikipedia, including subjects such as "the penitential rite, transubstantiation, the papal tiara"; on one occasion he was called in to give some "expert testimony" on the status of Mary in the Roman Catholic Church. In January 2007, Essjay was hired as a manager with Wikia, a wiki-hosting service founded by Wales and Angela Beesley. In February, Wales appointed Essjay as a member of the Wikipedia Arbitration Committee, a group with powers to issue binding rulings in disputes relating to Wikipedia.
In late February 2007, The New Yorker added an editorial note to its article on Wikipedia stating that it had learned that Essjay was Ryan Jordan, a 24-year-old college dropout from Kentucky with no advanced degrees and no teaching experience. Initially Jimmy Wales commented on the issue of Essjay's identity: "I regard it as a pseudonym and I don't really have a problem with it." Larry Sanger, co-founder of Wikipedia, responded to Wales on his Citizendium blog by calling Wales' initial reaction "utterly breathtaking, and ultimately tragic". Sanger said the controversy "reflects directly on the judgment and values of the management of Wikipedia."
Wales later issued a new statement saying he had not previously understood that "EssJay used his false credentials in content disputes." He added: "I have asked EssJay to resign his positions of trust within the Wikipedia community." Sanger responded the next day: "It seems Jimmy finds nothing wrong, nothing trust-violating, with the act itself of openly and falsely touting many advanced degrees on Wikipedia. But there most obviously is something wrong with it, and it's just as disturbing for Wikipedia's head to fail to see anything wrong with it."
On March 4, Essjay wrote on his user page that he was leaving Wikipedia, and he also resigned his position with Wikia. A subsequent article in The Courier-Journal (Louisville) suggested that the new résumé he had posted at his Wikia page was exaggerated. The March 19, 2007, issue of The New Yorker published a formal apology by Wales to the magazine and Stacy Schiff for Essjay's false statements.
Discussing the incident, the New York Times noted that the Wikipedia community had responded to the affair with "the fury of the crowd", and observed:
The Essjay episode underlines some of the perils of collaborative efforts like Wikipedia that rely on many contributors acting in good faith, often anonymously and through self-designated user names. But it also shows how the transparency of the Wikipedia process—all editing of entries is marked and saved—allows readers to react to suspected fraud.
The Essjay incident received extensive media coverage, including a national United States television broadcast on ABC's World News with Charles Gibson and the March 7, 2007, Associated Press story. The controversy has led to a proposal that users who say they possess academic qualifications should have to provide evidence before citing them in Wikipedia content disputes. The proposal was not accepted.
Anonymity
Wikipedia has been criticised for allowing editors to contribute anonymously (without a registered account and using an auto-generated IP-labeled account) or pseudonymously (using a registered account), with critics saying that this leads to a lack of accountability. This also sometimes leads to uncivil conduct in debates between Wikipedians. For privacy reasons, Wikipedia forbids editors to reveal information about another editor on Wikipedia.
Criticism of process
Level of debate, edit wars and harassment
The standard of debate on Wikipedia has been called into question by people who have noted that contributors can make a long list of salient points and pull in a wide range of empirical observations to back up their arguments, only to have them ignored completely on the site. An academic study of Wikipedia articles found that the level of debate among Wikipedia editors on controversial topics often degenerated into counterproductive squabbling:
For uncontroversial, "stable" topics self-selection also ensures that members of editorial groups are substantially well-aligned with each other in their interests, backgrounds, and overall understanding of the topics... For controversial topics, on the other hand, self-selection may produce a strongly misaligned editorial group. It can lead to conflicts among the editorial group members, continuous edit wars, and may require the use of formal work coordination and control mechanisms. These may include intervention by administrators who enact dispute review and mediation processes, [or] completely disallow or limit and coordinate the types and sources of edits.
In 2008, a team from the Palo Alto Research Center found that for editors who make between two and nine edits a month, the percentage of their edits being reverted had gone from 5% in 2004 to about 15%, and people who make only one edit a month were being reverted at a 25% rate. According to The Economist magazine (2008), "The behaviour of Wikipedia's self-appointed deletionist guardians, who excise anything that does not meet their standards, justifying their actions with a blizzard of acronyms, is now known as 'wiki-lawyering'." Regarding the decline in the number of Wikipedia editors since the 2007 policy changes, another study stated this was partly due to the way "in which newcomers are rudely greeted by automated quality control systems and are overwhelmed by the complexity of the rule system."
Another complaint about Wikipedia focuses on the efforts of contributors with idiosyncratic beliefs, who push their point of view in an effort to dominate articles, especially controversial ones. This sometimes results in revert wars and pages being locked down. In response, an Arbitration Committee has been formed on the English Wikipedia that deals with the worst alleged offenders—though a conflict resolution strategy is actively encouraged before going to this extent. Also, to stop the continuous reverting of pages, Jimmy Wales introduced a "three-revert rule", whereby those users who reverse the effect of others' contributions to one article more than three times in a 24-hour period may be blocked.
In a 2008 article in The Brooklyn Rail, Wikipedia contributor David Shankbone contended that he had been harassed and stalked because of his work on Wikipedia, had received no support from the authorities or the Wikimedia Foundation, and only mixed support from the Wikipedia community. Shankbone wrote, "If you become a target on Wikipedia, do not expect a supportive community."
David Auerbach, writing in Slate magazine, said:
I am not exaggerating when I say it is the closest thing to Kafka’s The Trial I have ever witnessed, with editors and administrators giving conflicting and confusing advice, complaints getting "boomeranged" onto complainants who then face disciplinary action for complaining, and very little consistency in the standards applied. In my short time there, I repeatedly observed editors lawyering an issue with acronyms, only to turn around and declare "Ignore all rules!" when faced with the same rules used against them... The problem instead stems from the fact that administrators and longtime editors have developed a fortress mentality in which they see new editors as dangerous intruders who will wreck their beautiful encyclopedia, and thus antagonize and even persecute them.
Wikipedia has also been criticised at various times for its weak enforcement against perceived toxicity within the editing community. In one case, a longtime editor was nearly driven to suicide following online abuse from editors and a ban from the site, before being rescued during the suicide attempt.
In order to address this problem, Wikipedia planned to institute a new code of conduct aimed at combating 'toxic behavior'. The development of the new code of conduct would take place in two phases. The first would include setting policies for in-person and virtual events as well as policies for technical spaces, including chat rooms and other Wikimedia projects. A second phase outlining enforcement when the rules are broken was planned to be approved by the end of 2020, according to the Wikimedia board's plan.
Consensus and the "hive mind"
Oliver Kamm, in an article for The Times, said Wikipedia's reliance on consensus in forming its content was dubious:
Wikipedia seeks not truth but consensus, and like an interminable political meeting the end result will be dominated by the loudest and most persistent voices.
Wikimedia advisor Benjamin Mako Hill also talked about Wikipedia's disproportional representation of viewpoints, saying:
In Wikipedia, debates can be won by stamina. If you care more and argue longer, you will tend to get your way. The result, very often, is that individuals and organizations with a very strong interest in having Wikipedia say a particular thing tend to win out over other editors who just want the encyclopedia to be solid, neutral, and reliable. These less-committed editors simply have less at stake and their attention is more distributed.
Wikimedia trustee Dariusz Jemielniak says:
Tiring out one's opponent is a common strategy among experienced Wikipedians... I have resorted to it many times.
In his article, "Digital Maoism: The Hazards of the New Online Collectivism" (first published online by Edge: The Third Culture, May 30, 2006), computer scientist and digital theorist Jaron Lanier describes Wikipedia as a "hive mind" that is "for the most part stupid and boring", and asks, rhetorically, "why pay attention to it?" His thesis says:
The problem is in the way the Wikipedia has come to be regarded and used; how it's been elevated to such importance so quickly. And that is part of the larger pattern of the appeal of a new online collectivism that is nothing less than a resurgence of the idea that the collective is all-wise, that it is desirable to have influence concentrated in a bottleneck that can channel the collective with the most verity and force. This is different from representative democracy, or meritocracy. This idea has had dreadful consequences when thrust upon us from the extreme Right or the extreme Left in various historical periods. The fact that it's now being re-introduced today by prominent technologists and futurists, people who in many cases I know and like, doesn't make it any less dangerous.
Lanier also says the current economic trend is to reward entities that aggregate information, rather than those that actually generate content. In the absence of "new business models", the popular demand for content will be sated by mediocrity, thus reducing or even eliminating any monetary incentives for the production of new knowledge.
Lanier's opinions produced some strong disagreement. Internet consultant Clay Shirky noted that Wikipedia has many internal controls in place and is not a mere mass of unintelligent collective effort:
Neither proponents nor detractors of hive mind rhetoric have much interesting to say about Wikipedia itself, because both groups ignore the details... Wikipedia is best viewed as an engaged community that uses a large and growing number of regulatory mechanisms to manage a huge set of proposed edits... To take the specific case of Wikipedia, the Seigenthaler/Kennedy debacle catalyzed both soul-searching and new controls to address the problems exposed, and the controls included, inter alia, a greater focus on individual responsibility, the very factor "Digital Maoism" denies is at work.
Excessive rule-making
Various figures involved with the Wikimedia Foundation have argued that Wikipedia's increasingly complex policies and guidelines are driving away new contributors to the site. Former chair Kat Walsh has criticized the project in recent years, saying, "It was easier when I joined in 2004... Everything was a little less complicated... It's harder and harder for new people to adjust." Wikipedia administrator Oliver Moran views "policy creep" as the major barrier, writing that "the loose collective running the site today, estimated to be 90 percent male, operates a crushing bureaucracy with an often abrasive atmosphere that deters newcomers who might increase participation in Wikipedia and broaden its coverage". According to Jemielniak, the sheer complexity of the rules and laws governing content and editor behavior has become excessive and creates a learning burden for new editors. In 2014 Jemielniak suggested actively rewriting, and abridging, the rules and laws to decrease their complexity and size.
Social stratification
Despite the perception that the Wikipedia process is democratic, "a small number of people are running the show", including administrators, bureaucrats, stewards, checkusers, mediators, arbitrators, and oversighters. In an article on Wikipedia conflicts in 2007, The Guardian discussed "a backlash among some editors, who say that blocking users compromises the supposedly open nature of the project and the imbalance of power between users and administrators may even be a reason some users choose to vandalize in the first place" based on the experiences of one editor who became a vandal after his edits were reverted and he was blocked for edit warring.
See also
Censorship of Wikipedia
Ideological bias on Wikipedia
Deletionism and inclusionism in Wikipedia
History of Wikipedia
List of Wikipedia controversies
Reliability of Wikipedia
Predictions of the end of Wikipedia
References
Further reading
Keen, Andrew. The Cult of the Amateur. Doubleday/Currency, 2007. (substantial criticisms of Wikipedia and other web 2.0 projects).
Rafaeli, Sheizaf & Ariel, Yaron (2008). "Online motivational factors: Incentives for participation and contribution in Wikipedia." In A. Barak (ed.), Psychological aspects of cyberspace: Theory, research, applications (pp. 243–267). Cambridge, UK: Cambridge University Press.
External links
A Compendium of Wikipedia Criticism – Wikipediocracy
The Geographically Uneven Coverage of Wikipedia – Oxford Internet Institute – University of Oxford
Wikipedia
|
2505216
|
https://en.wikipedia.org/wiki/GIMPshop
|
GIMPshop
|
GIMPshop was a modification of the free and open source graphics program GNU Image Manipulation Program (GIMP), with the intent to replicate the feel of Adobe Photoshop. Its primary purpose has been to make users of Photoshop feel comfortable using GIMP. According to the developer, Scott Moschella:
History
GIMPshop was created by Scott Moschella of Next New Networks (formerly Attack of the Show!) as an unofficial fork of GIMP. He encountered resistance from GIMP's lead developers due to the methods he employed to implement his hacks. GIMPshop was originally developed for Mac OS X and is a Universal Binary. It has also been ported to Windows, Linux, and Solaris.
Features
GIMPshop shares GIMP's feature list, customisability, and availability on multiple platforms, while addressing some common criticisms regarding the program's interface: GIMPshop modifies the menu structure to more closely resemble Photoshop and adjusts the program's terminology to match Adobe's. Due to the interface changes, many tutorials for the popular Photoshop can be followed in GIMPshop without modification, and others may be adapted for GIMPshop users with minimal effort. All of GIMP's own plugins (filters, brushes, etc.) remain available.
Being based on GIMP, GIMPshop cannot generate CMYK output files by default. Users who need to generate color separations require additional software, since commercial printing requires CMYK, not RGB color channels. A workaround is available through the Separate+ plugin, which is not included in the base installation.
In the Windows version, GIMPshop uses a plugin called Deweirdifyer to combine the application's numerous windows in a similar manner to the MDI system used by most Windows graphics packages. This essentially adds a unifying background window that fully contains the entire GIMPshop UI. More compatibility with Photoshop can be achieved using a third-party add-on for GIMP that supports Photoshop plugins, called pspi, which runs on Microsoft Windows or Linux.
For Mac OS X, GIMPshop is compatible only with Panther (10.3.x) and Tiger (10.4.x). It requires the X11.app (based on the X Window System display protocol) to render the user interface. Newer versions of X11 are no longer compatible with GIMPshop.
Status
GIMPshop was based on the old GIMP 2.2.11, and is not current with the latest GIMP codebase. In order to maintain usability, some users have taken to manually updating GIMPshop's libraries themselves. Due to pending concerns over rights to the GIMPshop name, and a dispute with the individual who purchased the gimpshop.com domain, plans for an update are on hold. As explained by Moschella in 2010:
In a March 2014 discussion, Moschella states:
See also
GIMP
Adobe Photoshop
Comparison of raster graphics editors
Seashore
GimPhoto
References
External links
Free raster graphics editors
Raster graphics editors for Linux
Technical communication tools
Graphics software that uses GTK
|
9894301
|
https://en.wikipedia.org/wiki/List%20of%20computer%20simulation%20software
|
List of computer simulation software
|
The following is a list of notable computer simulation software.
Free or open-source
Advanced Simulation Library - open-source hardware accelerated multiphysics simulation software.
Algodoo - 2D physics simulator focused on the education market that is popular with younger users.
ASCEND - open-source equation-based modelling environment.
Cantera - chemical kinetics package.
Celestia - a 3D astronomy program.
CP2K - Open-source ab-initio molecular dynamics program.
DWSIM - an open-source CAPE-OPEN compliant chemical process simulator.
Elmer - an open-source multiphysical simulation software for Windows/Mac/Linux.
Facsimile - a free, open-source discrete-event simulation library.
FlightGear - a free, open-source atmospheric and orbital flight simulator with a flight dynamics engine (JSBSim) that is used in a 2015 NASA benchmark to judge new simulation code to space industry standards.
FreeFem++ - Free, open-source, multiphysics Finite Element Analysis (FEA) software.
Freemat - a free environment for rapid engineering, scientific prototyping and data processing using the same language as MATLAB and GNU Octave.
Gekko - simulation software in Python with machine learning and optimization
GNU Octave - an open-source mathematical modeling and simulation software package very similar to MATLAB, using the same language as MATLAB and Freemat.
HASH - open-core multi-agent simulation software and package manager.
iMODELER - free web based qualitative modeling software and expert system for business, education, and public policy applications.
JModelica.org is a free and open source software platform based on the Modelica modeling language.
Mobility Testbed - an open-source multi-agent simulation testbed for transport coordination algorithms.
NEST - open-source software for spiking neural network models.
NetLogo - an open-source multi-agent simulation software.
ns-3 - an open-source network simulator.
OpenFOAM - open-source software used for computational fluid dynamics (or CFD).
OpenModelica - an open source modeling environment based on Modelica, the open standard for modeling software.
Open Source Physics - an open-source Java software project for teaching and studying physics.
OpenSim - an open-source software system for biomechanical modeling.
Physics Abstraction Layer - an open-source physics simulation package.
Project Chrono - an open-source multi-physics simulation framework.
Repast - agent-based modeling and simulation platform with versions for individual workstations and high performance computer clusters.
SageMath - a system for algebra and geometry experimentation via Python.
Scilab - free open-source software for numerical computation and simulation similar to MATLAB/Simulink.
Simantics System Dynamics – used for modelling and simulating large hierarchical models with multidimensional variables created in a traditional way with stock and flow diagrams and causal loop diagrams.
SimPy - an open-source discrete-event simulation package based on Python.
Simulation of Urban MObility - an open-source traffic simulation package.
SOFA - an open-source framework for multi-physics simulation with an emphasis on medical simulation.
SU2 code - an open-source framework for computational fluid dynamics simulation and optimal shape design.
Step - an open-source two-dimensional physics simulation engine (KDE).
Tortuga - an open-source software framework for discrete-event simulation in Java.
UrbanSim – an open-source software to simulate land use, transportation and environmental planning.
Proprietary
Adaptive Simulations - cloud based and fully automated CFD simulations.
AGX Dynamics - realtime oriented multibody and multiphysics simulation engine.
20-sim - bond graph-based multi-domain simulation software.
Actran - finite element-based simulation software to analyze the acoustic behavior of mechanical systems and parts.
ADINA - engineering simulation software for structural, fluid, heat transfer, and multiphysics problems.
ACSL and acslX - an advanced continuous simulation language.
Simcenter Amesim - a platform to analyze multi-domain, intelligent systems and predict and optimize multi-disciplinary performance. Developed by Siemens PLM Software.
ANSYS - engineering simulation.
AnyLogic - a multi-method simulation modeling tool for business and science. Developed by The AnyLogic Company.
APMonitor - a tool for dynamic simulation, validation, and optimization of multi-domain systems with interfaces to Python and MATLAB.
Arena - a flowchart-based discrete event simulation software developed by Rockwell Automation
Automation Studio - a fluid power, electrical and control systems design and simulation software developed by Famic Technologies Inc.
Chemical WorkBench - a chemical kinetics simulation software tool developed by Kintech Lab.
CircuitLogix - an electronics simulation software developed by Logic Design Inc.
COMSOL Multiphysics - a predominantly finite element analysis, solver and simulation software package for various physics and engineering applications, especially coupled phenomena, or multi-physics.
CONSELF - browser based CFD and FEA simulation platform.
DX Studio - a suite of tools for simulation and visualization.
Dymola - modeling and simulation software based on the Modelica language.
DYNAMO - historically important language used for system dynamics modelling.
Ecolego - a simulation software tool for creating dynamic models and performing deterministic and probabilistic simulations.
EcosimPro - continuous and discrete modelling and simulation software.
Enterprise Architect - a tool for simulation of UML behavioral modeling, coupled with Win32 user interface interaction.
Enterprise Dynamics - a simulation software platform developed by INCONTROL Simulation Solutions.
ExtendSim - simulation software for discrete event, continuous, discrete rate and agent-based simulation.
FEATool Multiphysics - finite element physics and PDE simulation toolbox for MATLAB.
Flexsim - discrete event simulation software.
Flood Modeller - hydraulic simulation software, used to model potential flooding risk for engineering purposes.
GoldSim - simulation software for system dynamics and discrete event simulation, embedded in a Monte Carlo framework.
HyperWorks - multi-discipline simulation software
IDA ICE - equation-based (DAE) software for building performance simulation
iMODELER - system dynamics, process simulation, and qualitative modeling software and expert system for business, education, and public policy applications. Web based for collaborative modeling.
IES Virtual Environment (IESVE) - holistic building performance analysis and simulation software
Isaac dynamics - dynamic process simulation software for conventional and renewable power plants.
iThink - system dynamics and discrete event modeling software for business strategy, public policy, and education. Developed by isee systems.
JMAG - simulation software for electric device design and development.
Khimera - a chemical kinetics simulation software tool developed by Kintech Lab.
Lanner WITNESS - a discrete event simulation platform for modelling processes and experimentation.
Lanner L-SIM Server - Java-based simulation engine for simulating BPMN2.0 based process models.
MADYMO – automotive and transport safety software developed by Netherlands Organization for Applied Scientific Research
Maple - a general-purpose computer algebra system developed and sold commercially by Waterloo Maple Inc.
MapleSim - a multi-domain modeling and simulation tool developed by Waterloo Maple Inc.
MATLAB - a programming, modeling and simulation tool developed by MathWorks.
Mathematica - a computational software program based on symbolic mathematics, developed by Wolfram Research.
Micro Saint Sharp - a general purpose discrete event software tool using a graphical flowchart approach and based on the C# language, developed by Alion Science and Technology.
ModelCenter - a framework for integration of third-party modeling and simulation tools/scripts, workflow automation, and multidisciplinary design analysis and optimization from Phoenix Integration.
NEi Nastran - software for engineering simulation of stress, dynamics, and heat transfer in structures.
NI Multisim - an electronic schematic capture and simulation program.
Plant Simulation - plant, line and process simulation and optimization software, developed by Siemens PLM Software.
PLECS - a tool for system-level simulations of electrical circuits. Developed by Plexim.
Project Team Builder - a project management simulator used for training and education.
ProLB - a computational fluid dynamics simulation software based on the Lattice Boltzmann method.
PTV Vissim - a microscopic and mesoscopic traffic flow simulation software.
PSF Lab - calculates the point spread function of an optical microscope under various imaging conditions based on a rigorous vectorial model.
RoboLogix - robotics simulation software developed by Logic Design Inc.
Ship Simulator - a vehicle simulation computer game by VSTEP which simulates maneuvering various ships in different environments.
Simcad Pro - process simulation software supporting on-the-fly model changes while the simulation is running, with lean analysis, VR, and physics capabilities. Developed by CreateASoft, Inc., Chicago, USA.
Simcenter STAR-CCM+ - a computational fluid dynamics based simulation software developed by Siemens Digital Industries Software.
SimEvents - a part of MathWorks which adds discrete event simulation to the MATLAB/Simulink environment.
SimScale - a web-based simulation platform, with CFD, FEA, and thermodynamics capabilities.
SIMUL8 - software for discrete event or process based simulation.
Simulations Plus - modeling and simulation software for pharmaceutical research
SimulationX - modeling and simulation software based on the Modelica language.
Simulink - a tool for block diagrams, electrical mechanical systems and machines from MathWorks.
SRM Engine Suite - engineering tool used for simulating fuels, combustion and exhaust gas emissions in IC engine applications.
STELLA - system dynamics and discrete event modeling software for business strategy, public policy, and education. Developed by isee systems.
TRNSYS - software for dynamic simulation of renewable energy systems, HVAC systems, building energy use and both passive and active solar systems.
UNIGINE - real-time 3D visualization SDK for simulation and training. Supports C++ and C# programming languages.
Unreal Engine - immersive virtual-reality training simulation software.
Vensim - system dynamics and continuous simulation software for business and public policy applications.
VisSim - system simulation and optional C-code generation of electrical, process, control, bio-medical, mechanical and UML State chart systems.
Vortex (software) - a complete simulation platform featuring a realtime physics engine for rigid body dynamics, an image generator, desktop tools (Editor and Player) and more. Also available as Vortex Studio Essentials, a limited free version.
Wolfram SystemModeler – modeling and simulation software based on the Modelica language.
Visual Components - a 3D factory simulation software for manufacturing applications including layout planning, production simulation, off-line programming and PLC verification.
VisualSim Architect – an electronic system-level software for modeling and simulation of electronic systems, embedded software and semiconductors.
VSim - a multiphysics simulation software tool designed to run computationally intensive electromagnetic, electrostatic, and plasma simulations.
zSpace – creates physical science applications
See also
Simulation language
References
Simulation software
Computer simulation software
|
768799
|
https://en.wikipedia.org/wiki/White-box%20testing
|
White-box testing
|
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a method of software testing that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the expected outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).
White-box testing can be applied at the unit, integration and system levels of the software testing process. Although traditional testers tended to think of white-box testing as being done at the unit level, it is used for integration and system testing more frequently today. It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it has the potential to miss unimplemented parts of the specification or missing requirements. Where white-box testing is design-driven, that is, driven exclusively by agreed specifications of how each component of software is required to behave (as in DO-178C and ISO 26262 processes) then white-box test techniques can accomplish assessment for unimplemented or missing requirements.
White-box test design techniques include the following code coverage criteria:
Control flow testing
Data flow testing
Branch testing
Statement coverage
Decision coverage
Modified condition/decision coverage
Prime path testing
Path testing
Overview
White-box testing is a method of testing the application at the level of the source code. Test cases are derived through the use of the design techniques mentioned above: control flow testing, data flow testing, branch testing, path testing, statement coverage and decision coverage, as well as modified condition/decision coverage. These techniques are the building blocks of white-box testing, whose essence is the careful examination of the application at the source code level to reduce hidden errors later on: the techniques exercise every visible path of the source code so as to minimize errors. The central point of white-box testing is knowing which lines of the code are being executed and being able to identify what the correct output should be.
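As a minimal illustrative sketch of how these coverage criteria drive test-case design (the function classify() and the test names below are invented for illustration and not drawn from any particular standard or tool), each test case is chosen by reading the source so that a specific branch is taken:

import unittest

def classify(value, threshold):
    """Return a label for value relative to threshold."""
    if value < 0:
        return "invalid"
    elif value >= threshold:
        return "high"
    else:
        return "low"

class ClassifyWhiteBoxTests(unittest.TestCase):
    # Each test case targets one branch identified by inspecting the code.
    def test_negative_branch(self):
        self.assertEqual(classify(-1, 10), "invalid")   # exercises the `value < 0` branch

    def test_high_branch(self):
        self.assertEqual(classify(10, 10), "high")      # exercises the `value >= threshold` branch

    def test_low_branch(self):
        self.assertEqual(classify(3, 10), "low")        # exercises the final else branch

if __name__ == "__main__":
    unittest.main()

Together the three cases execute every statement and take every branch outcome of classify(), achieving both statement and branch (decision) coverage for this small function; tools such as coverage.py can be used to measure which lines and branches a test suite actually exercises.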
Levels
Unit testing. White-box testing is done during unit testing to ensure that the code is working as intended, before integration happens with previously tested code. White-box testing during unit testing potentially catches many defects early on and aids in addressing defects that happen later on after the code is integrated with the rest of the application and therefore reduces the impacts of errors later in development.
Integration testing. White-box testing at this level is written to test the interactions of interfaces with each other. Unit-level testing made sure that each unit was tested and working in an isolated environment; integration testing examines the correctness of the behaviour in an open environment through the use of white-box testing for any interactions of interfaces that are known to the programmer.
Regression testing. White-box testing during regression testing is the use of recycled white-box test cases at the unit and integration testing levels.
Basic procedure
White-box testing's basic procedures require the tester to have an in-depth knowledge of the source code being tested. The programmer must have a deep understanding of the application to know what kinds of test cases to create so that every visible path is exercised for testing. Once the source code is understood then the source code can be analyzed for test cases to be created. The following are the three basic steps that white-box testing takes in order to create test cases:
Input involves different types of requirements, functional specifications, detailed design documents, proper source code and security specifications. This is the preparation stage of white-box testing to lay out all of the basic information.
Processing involves performing risk analysis to guide the whole testing process, drawing up a proper test plan, executing test cases and communicating results. This is the phase of building test cases to make sure they thoroughly test the application, and the results are recorded accordingly.
Output involves preparing final report that encompasses all of the above preparations and results.
Advantages
Knowledge of the source code is beneficial to thorough testing.
Optimization of code becomes easy as inconspicuous bottlenecks are exposed.
Gives the programmer introspection because developers carefully describe any new implementation.
Provides traceability of tests from the source, thereby allowing future changes to the source to be easily captured in the newly added or modified tests.
Easy to automate.
Provides clear, engineering-based rules for when to stop testing.
Disadvantages
White-box tests are written to verify the details of a specific implementation. This means that the tests will fail when the implementation changes, as each test is tightly coupled to the implementation, and additional work has to be done to update the tests so that they match the implementation again. By contrast, black-box tests are independent of the implementation, and so they will still pass if the implementation changes but its output or side-effects do not.
The code under test could be rewritten to implement the same functionality in a different way that invalidates the assumptions baked into the test. This could result in tests that fail unnecessarily or, in the worst case, tests that now give false positives and mask errors in the code. This happens because the white-box test was never written to verify the intended behavior of the code under test, but only to confirm that the specific implementation does what it does (see the sketch after this list of disadvantages).
White-box testing brings complexity to testing because the tester must have knowledge of the program, or the test team needs to have at least one very good programmer who can understand the program at the code level. White-box testing requires a programmer with a high level of knowledge due to the complexity of the level of testing that needs to be done.
On some occasions, it is not realistic to be able to test every single existing condition of the application and some conditions will be untested.
The tests focus on the software as it exists, and missing functionality may not be discovered.
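The following hypothetical sketch illustrates the coupling problem described above; the Squarer class and the test are invented for illustration and are not taken from any real project:

# A white-box test coupled to an implementation detail.
class Squarer:
    def __init__(self):
        self._cache = {}              # internal memoisation detail

    def square(self, n):
        if n not in self._cache:
            self._cache[n] = n * n
        return self._cache[n]

def test_square_caches_results():
    s = Squarer()
    s.square(4)
    # Asserting on the private cache ties the test to this particular implementation.
    assert s._cache == {4: 16}

# If square() is later rewritten as simply `return n * n` with no cache, the
# observable behaviour is unchanged, yet this test fails; a black-box test such
# as `assert Squarer().square(4) == 16` would keep passing.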
Modern view
A more modern view is that the dichotomy between white-box testing and black-box testing has blurred and is becoming less relevant. Whereas "white-box" originally meant using the source code, and black-box meant using requirements, tests are now derived from many documents at various levels of abstraction. The real point is that tests are usually designed from an abstract structure such as the input space, a graph, or logical predicates, and the question is what level of abstraction we derive that abstract structure from. That can be the source code, requirements, input space descriptions, or one of dozens of types of design models. Therefore, the "white-box / black-box" distinction is less important and the terms are less relevant.
Hacking
In penetration testing, white-box testing refers to a method where a white hat hacker has full knowledge of the system being attacked. The goal of a white-box penetration test is to simulate a malicious insider who has knowledge of and possibly basic credentials for the target system.
See also
Black-box testing
Gray-box testing
White-box cryptography
References
External links
BCS SIGIST (British Computer Society Specialist Interest Group in Software Testing): Standard for Software Component Testing (http://www.testingstandards.co.uk/Component%20Testing.pdf), Working Draft 3.4, 27 April 2001.
http://agile.csc.ncsu.edu/SEMaterials/WhiteBox.pdf has more information on control flow testing and data flow testing.
http://research.microsoft.com/en-us/projects/pex/ Pex – Automated white-box testing for .NET
Software testing
Hardware testing
|
45653634
|
https://en.wikipedia.org/wiki/Micro%20Bit
|
Micro Bit
|
The Micro Bit (also referred to as BBC Micro Bit, stylized as micro:bit) is an open source hardware ARM-based embedded system designed by the BBC for use in computer education in the United Kingdom. It was first announced at the launch of the BBC's Make It Digital campaign on 12 March 2015 with the intent of delivering 1 million devices to pupils in the UK. The final device design and features were unveiled on 6 July 2015, whereas actual delivery of devices, initially planned for September 2015 to schools and October 2015 to the general public, began on 10 February 2016.
The device is described as half the size of a credit card and has an ARM Cortex-M0 processor, accelerometer and magnetometer sensors, Bluetooth and USB connectivity, a display consisting of 25 LEDs, and two programmable buttons; it can be powered by either USB or an external battery pack. The device inputs and outputs are through five ring connectors that form part of a larger 25-pin edge connector. In October 2020, a physically nearly identical v2 board was released that features a Cortex-M4F microcontroller, with more memory and other new features.
Hardware
v1
The physical board included:
Nordic nRF51822 – 32-bit ARM Cortex-M0 microcontroller, flash memory, static RAM, Bluetooth low energy wireless networking. The ARM core can be switched between two clock speeds.
NXP/Freescale KL26Z – ARM Cortex-M0+ core microcontroller, that includes a full-speed USB 2.0 On-The-Go (OTG) controller, used as a communication interface between USB and main Nordic microcontroller. This device also performs the voltage regulation from the USB supply (4.5-5.25 V) down to the nominal 3.3 volts used by the rest of the PCB. When running on batteries this regulator is not used.
NXP/Freescale MMA8652 – 3-axis accelerometer sensor via I²C-bus.
NXP/Freescale MAG3110 – 3-axis magnetometer sensor via I²C-bus (to act as a compass and metal detector).
MicroUSB connector, battery connector, 25-pin edge connector.
Display consisting of 25 LEDs in a 5×5 array.
Three tactile pushbuttons (two for applications, one for reset).
I/O includes three ring connectors (plus one power one ground) which accept crocodile clips or 4 mm banana plugs as well as a 25-pin edge connector with two or three PWM outputs, six to 17 GPIO pins (depending on configuration), six analog inputs, serial I/O, SPI, and I²C. Unlike early prototypes, which had an integral battery, an external battery pack (AAA batteries) can be used to power the device as a standalone or wearable product. Health and safety concerns, as well as cost, were given as reasons for the removal of the button battery from early designs.
The available hardware design documentation consist of only the schematic and BOM distributed under the Creative Commons By Attribution license, no PCB layout is available. The compatible reference design by Micro:bit Educational Foundation, however, is fully documented.
v2
v2, released on 13 October 2020, includes:
Nordic nRF52833 – 32-bit ARM Cortex-M4 microcontroller, flash memory, static RAM, Bluetooth low energy wireless networking provided by Nordic S113 SoftDevice, integrated temperature sensor.
NXP/Freescale KL27Z – ARM Cortex-M0+ core microcontroller, preprogrammed as a full-speed USB 2.0 controller, used as a communication interface between USB and the CPU.
Either ST LSM303 or NXP FXOS8700 – 3-axis combined accelerometer and magnetometer sensor via I²C-bus.
Knowles MEMS microphone with a built-in LED indicator.
Jiangsu Huaneng MLT-8530 magnetic speaker.
MicroUSB connector, JST PH battery connector, 25-pin edge connector.
Display consisting of 25 LEDs in a 5×5 matrix.
Three tactile pushbuttons (two for applications, one for reset) and a touch sensor button.
In micro:bit v2, the reset button can be used to turn the board off by holding it for 3 seconds.
Software
There are three official code editors on the micro:bit foundation web site:
Microsoft MakeCode
MicroPython
Scratch
The Python programming experience on the Micro Bit is provided by MicroPython. Users are able to write Python scripts in the Micro Bit web editor which are then combined with the MicroPython firmware and uploaded to the device. Users can also access the MicroPython REPL running directly on the device via the USB serial connection, which allows them to interact directly with the Micro Bit's peripherals.
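For example, a short MicroPython script of the kind that can be written in the web editor or typed at the REPL might look like the following sketch; it uses the on-device microbit module, so it runs on the board itself rather than in a desktop Python interpreter, and the tilt thresholds chosen here are arbitrary illustration values:

from microbit import display, accelerometer, button_a, Image, sleep

# Scroll a greeting once, then react to tilt and to button A.
display.scroll("hello")

while True:
    x = accelerometer.get_x()        # tilt along the X axis, in milli-g
    print("x =", x)                  # visible over the USB serial / REPL connection
    if x > 200:
        display.show(Image.ARROW_E)  # tilted to the right
    elif x < -200:
        display.show(Image.ARROW_W)  # tilted to the left
    else:
        display.show(Image.HAPPY)
    if button_a.was_pressed():
        display.scroll(str(x))       # scroll the current reading across the LEDs
    sleep(200)                       # wait 200 ms between samples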
The Micro Bit was created using the ARM mbed development kits. The run-time system and programming interface utilize the mbed cloud compiler service to compile the user's code into a .UF2 file. The compiled code is then flashed onto the device using USB or Bluetooth connections. The device appears as a USB drive when connected to a computer, and code can be flashed by dragging and dropping the .UF2 file onto the drive.
Other editors for the BBC micro:bit include:
Mu, a Python editor
Espruino, a JavaScript interpreter
EduBlocks, a block editor for MicroPython
Other programming languages for the BBC micro:bit include:
Free Pascal (instructions)
Simulink in Matlab (Simulink Coder Support Package for BBC micro:bit Board) – supports signal logging, parameter tuning, and code development from the Simulink block editor.
C++ (instructions)
Forth (instructions)
Lisp (instructions)
Rust (instructions)
Ada (instructions)
Swift (instructions)
BASIC (instructions)
Operating systems which can be built for the BBC micro:bit:
Zephyr - the Zephyr lightweight OS comes with the required parameters file to be able to run it on this board.
History
Development
The Micro Bit was designed to encourage children to get actively involved in writing software for computers and building new things, rather than being consumers of media. It was designed to work alongside other systems, such as the Raspberry Pi, building on BBC's legacy with the BBC Micro for computing in education. The BBC planned to give away the computer free to every year 7 (11- and 12-year old) child in Britain starting from October 2015 (around 1 million devices). In advance of the roll-out an online simulator was made available to help educators prepare, and some teachers were to receive the device in September 2015. The device was planned to be on general sale by the end of 2015. However, problems delayed the launch until 22 March 2016.
The BBC had a difficult decision to choose which school year group would be the first to receive the free Micro Bits, and the BBC's head of learning said that "The reason we plumped for year seven [rather than year five] is it had more impact with that age group … they were more interested in using it outside the classroom".
Planning for the project began in 2012 as part of the BBC Computer Literacy Programme, and by the time of the launch in July 2015 the BBC had taken on board 29 partners to help with the manufacturing, design, and distribution of the device. The BBC has said that the majority of the development costs were borne by the project partners.
Partnerships
The development of the Micro Bit is a product of a number of partners working with the BBC:
Microsoft – contributed its software expertise and customised the TouchDevelop platform to work with the device. It hosts the projects and code for users of the device. It has also developed the teacher training materials for the device.
Lancaster University – developing the device runtime.
Farnell element14 – overseeing the manufacture of the device.
Nordic Semiconductor – supplied the CPU for the device.
NXP Semiconductors – supplied the sensors and USB controller.
ARM Holdings – provided mbed hardware, development kits and compiler services.
Technology Will Save Us – designing the physical appearance of the device.
Barclays – supported product delivery and outreach activities.
Samsung – developed an Android app and helped connect the device to phones and tablets.
The Wellcome Trust – provided learning opportunities for teachers and schools.
ScienceScope – developing an iOS app and distributing the device to schools.
Python Software Foundation – worked to bring MicroPython to the device, created native and web-based beginner-friendly Python code editors, produced numerous educational resources and organised developer-led workshops for teachers.
Bluetooth SIG – Developed the custom Bluetooth LE profile.
Creative Digital Solutions – developed teaching materials, workshops and outreach activities.
Cisco – provided staff and resources to STEMNET to aid with the national rollout.
Code Club – Created a series of coding resources aimed at children ages 9 to 11 and delivered via volunteer-run coding clubs.
STEMNET – Provided STEM ambassadors to support schools and teachers and to liaise with third parties such as Bloodhound SSC and Cisco.
Kitronik – Produced and gave away 5,500 e-textile kits for the BBC micro:bit to D&T (Design & Technology) teachers across the UK. Designed hardware such as a Motor Driver board to allow the BBC micro:bit to control devices such as motors and servos.
Tangent Design – Created the brand identity for the BBC micro:bit and developed the website.
A prototype device and software stack created by BBC R&D, demonstrated in the initial announcement, was used to test the proposition in schools, and to provide a reference specification for the partnership to build upon.
Microbit Educational Foundation
After a successful roll-out of the micro:bit across the UK, the BBC handed over the future of the BBC micro:bit, and its adoption in other parts of the world, to the newly formed, not-for-profit Microbit Educational Foundation. The announcement was made on 18 October 2016 to a small group of journalists and educators at Savoy Place in London, and included a review of the past year and plans for the future. The transition from the BBC to the Microbit Educational Foundation moved the official home of the micro:bit from microbit.co.uk to microbit.org.
The BBC licensed the hardware technology as open source and allows it to be manufactured around the world for use in education. The foundation oversees this.
On 2 January 2018, it was announced that Gareth Stockdale from BBC Learning would succeed Zach Shelby as CEO of the Microbit Educational Foundation.
Microbit Reference Design
The foundation also provides a fully documented reference design for a device that differs from the marketed board but is software compatible with it, with the intention of easing the independent development and manufacturing of micro:bit-derived devices and products. The reference design is open-source hardware, but unlike the marketed device, which uses a CC BY 4.0 license, it is distributed under the terms of the Solderpad Hardware Licence, Version 0.51. The available design documentation for the reference design includes both the schematic and the circuit board layout in several EDA suite formats.
micro:bit v2
On 13 October 2020, the Micro:bit Educational Foundation announced a revised version of the micro:bit. Available for the same price as the original micro:bit and sharing its general design, micro:bit v2 includes a Nordic nRF52833 CPU (ARM Cortex-M4, 64 MHz, 128 KB RAM, 512 KB flash) and additionally a microphone, a speaker, a touch sensor, and a power saving mode.
See also
Arduino
List of Arduino boards and compatible systems
Raspberry Pi
BBC Micro
Calliope mini
micro:bit universal hex format
References
Further reading
"Beginning Data Science, IoT, and AI on Single Board Computers: Core Skills and Real-World Application with the BBC micro:bit and XinaBox 1st ed. Edition" Authors: Pradeeka Seneviratne, Philip Meitiner (2020)
"BBC micro:bit Recipes: Learn Programming with Microsoft MakeCode Blocks" Author: Pradeeka Seneviratne (2019)
"Beginning BBC micro:bit:A Practical Introduction to micro:bit Development" Author: Pradeeka Seneviratne (2018); Chinese translation by Jason Liu (2019)
"Robótica Educativa - 50 Proyectos con micro:bit" Author: Ernesto Martínez de Carvajal Hedrich (2018).
"The Official BBC micro:bit User Guide" Author: Gareth Halfacree (2017)
"micro: bit in Wonderland: Coding & Craft with the BBC micro:bit" Authors: Tracy Gardner and Elbrie de Kock (2018).
"Getting Started with the BBC Micro:Bit" Author: Mike Tooley (2017)
"Micro:Bit – A Quick Start Guide for Teachers" Author: Ray Chambers (2015)
External links
BBC micro:bit technical specifications
BBC micro:bit edge pinout
hands-on with BBC's Micro Bit (original prototype)
BBC micro:bit at Microsoft Research
BBC micro:bit repository on GitHub
BBC computer literacy projects
Single-board computers
Products introduced in 2016
|
27791982
|
https://en.wikipedia.org/wiki/PTC%20Integrity
|
PTC Integrity
|
PTC Integrity Lifecycle Manager (formerly MKS Integrity) is a software system lifecycle management (SSLM) and application lifecycle management (ALM) platform developed by MKS Inc. and was first released in 2001. The software is client/server, with both desktop (java/swing) and web client interfaces. It provides software development organizations with a collaborative environment in which they can manage the end-to-end processes of development, from requirements management, engineering change management, revision control, and build management to test management and software deployment as well as associated reports & metrics.
Overview
MKS Integrity has been a PTC product since PTC's acquisition of MKS Inc., which was completed on May 31, 2011.
PTC Integrity Lifecycle Manager (Integrity LM or ILM) allows software development teams to track all aspects of their work, including work items, source control, reporting, and build management, in a single product.
The product consists of two components – Integrity Configuration Management and Integrity Workflow & Documents.
The Configuration Management part of PTC ILM is used to handle source code versions, branches, etc. It is based on a client–server architecture. The Java client does not store any management data on the local system; therefore, any task performed on source files requires a network connection.
This means that, unlike distributed systems, this system requires a reliable network connection, sufficient network bandwidth and sufficient processing power on the server side. The other component (Workflow & Documents) consists of an issue tracking system as well as a requirements and test management solution.
One of the strengths compared to other similar solutions is PTC Integrity's flexibility in terms of workflows, fields, presentation layout, validation and automation capabilities. PTC Integrity Lifecycle Manager is based on Java, and uses a JavaScript extension for the reporting. Any interaction can be performed online, in the command-line interface or utilizing the server or client Java API.
PTC Integrity Lifecycle Manager is built around a single repository. This single-repository solution supports the three pillars of lifecycle management (traceability, process automation, and reporting and analytics), and some companies may see additional value in this approach.
Out of the box, integration of PTC Integrity Lifecycle Manager with IDEs and other development tools is limited to a few products.
Supported IDEs include Eclipse and Visual Studio. Also supported are IBM i and Apache Maven.
PTC Integrity Lifecycle Manager Solutions
Installing PTC Integrity LM provides the full set of Configuration Management functionality. Working with Workflow & Documents requires installing a solution component. The following solutions are available (in release order):
ALM Solution
Medical Solution
Agile Solution
Systems Engineering Solution (SysEng)
ALM and Agile are standalone solutions. The SysEng solution integrates both components and additionally includes Risk Management (three more document types).
A solution provides the following elements:
Item and Document Types
States and Workflows
Reports
Charts
Dashboards
Every solution component can be tailored or enhanced to fulfill individual process requirements.
PTC Integrity as Product Group
In 2015, PTC defined a software product group using the name "PTC Integrity". This group also contains former Atego products, such as
PTC Integrity Lifecycle Manager (former MKS Integrity)
PTC Integrity Modeler (Atego Modeler),
PTC Integrity Process Director (Atego Process Director), and
PTC Integrity Requirements Connector (Atego Requirements Synchronizer™)
History
PTC Integrity Lifecycle Manager was previously known under different brands, including MKS Source, MKS Integrity Manager, Implementer (for IBM i) and others. These were consolidated under a single brand, with the release of MKS Integrity 2007 in July 2007, which was acquired by PTC and finally renamed to PTC Integrity in 2011. PTC retired the Integrity brand and rebranded Integrity to Windchill starting in July 2019.
References
External links
PTC Integrity
Integrity Blog
PTC User Community Portal
Version control systems
Proprietary version control systems
Software project management
Agile software development
|
849531
|
https://en.wikipedia.org/wiki/TCP%20offload%20engine
|
TCP offload engine
|
TCP offload engine (TOE) is a technology used in some network interface cards (NIC) to offload processing of the entire TCP/IP stack to the network controller. It is primarily used with high-speed network interfaces, such as gigabit Ethernet and 10 Gigabit Ethernet, where processing overhead of the network stack becomes significant.
TOEs are often used as a way to reduce the overhead associated with Internet Protocol (IP) storage protocols such as iSCSI and Network File System (NFS).
Purpose
Originally TCP was designed for unreliable low speed networks (such as early dial-up modems) but with the growth of the Internet in terms of backbone transmission speeds (using Optical Carrier, Gigabit Ethernet and 10 Gigabit Ethernet links) and faster and more reliable access mechanisms (such as DSL and cable modems) it is frequently used in data centers and desktop PC environments at speeds of over 1 Gigabit per second. At these speeds the TCP software implementations on host systems require significant computing power. In the early 2000s, full-duplex gigabit TCP communication could consume more than 80% of a 2.4 GHz Pentium 4 processor, resulting in small or no processing resources left for the applications to run on the system.
TCP is a connection-oriented protocol which adds complexity and processing overhead. These aspects include:
Connection establishment using the "3-way handshake" (SYNchronize; SYNchronize-ACKnowledge; ACKnowledge).
Acknowledgment of packets as they are received by the far end, adding to the message flow between the endpoints and thus the protocol load.
Checksum and sequence number calculations - again a burden on a general purpose CPU to perform.
Sliding window calculations for packet acknowledgement and congestion control.
Connection termination.
Moving some or all of these functions to dedicated hardware, a TCP offload engine, frees the system's main CPU for other tasks. As of 2012, very few consumer network interface cards support TOE.
Freed-up CPU cycles
A generally accepted rule of thumb is that 1 hertz of CPU processing is required to send or receive 1 bit/s of TCP/IP. For example, 5 Gbit/s (625 MB/s) of network traffic requires 5 GHz of CPU processing. This implies that 2 entire cores of a 2.5 GHz multi-core processor will be required to handle the TCP/IP processing associated with 5 Gbit/s of TCP/IP traffic. Since Ethernet (10GE in this example) is bidirectional, it is possible to send and receive 10 Gbit/s (for an aggregate throughput of 20 Gbit/s). Using the 1 Hz/(bit/s) rule this equates to eight 2.5 GHz cores.
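As a back-of-the-envelope sketch (not a benchmark), the rule of thumb above can be applied as follows:

```python
# Estimate of CPU cores consumed by TCP/IP processing, using the
# rough "1 Hz of CPU per 1 bit/s of TCP/IP throughput" rule of thumb.
def cores_needed(throughput_bps: float, core_clock_hz: float) -> float:
    return throughput_bps / core_clock_hz

print(cores_needed(5e9, 2.5e9))    # 5 Gbit/s on 2.5 GHz cores -> 2.0 cores
print(cores_needed(20e9, 2.5e9))   # 20 Gbit/s aggregate (10GE both ways) -> 8.0 cores
```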
Many of the CPU cycles used for TCP/IP processing are freed-up by TCP/IP offload and may be used by the CPU (usually a server CPU) to perform other tasks such as file system processing (in a file server) or indexing (in a backup media server). In other words, a server with TCP/IP offload can do more server work than a server without TCP/IP offload NICs.
Reduction of PCI traffic
In addition to the protocol overhead that TOE can address, it can also address some architectural issues that affect a large percentage of host based (server and PC) endpoints.
Many older end point hosts are PCI bus based, which provides a standard interface for the addition of certain peripherals such as Network Interfaces to Servers and PCs.
PCI is inefficient for transferring small bursts of data from main memory, across the PCI bus to the network interface ICs, but its efficiency improves as the data burst size increases. Within the TCP protocol, a large number of small packets are created (e.g. acknowledgements) and as these are typically generated on the host CPU and transmitted across the PCI bus and out the network physical interface, this impacts the host computer IO throughput.
A TOE solution, located on the network interface, is located on the other side of the PCI bus from the CPU host so it can address this I/O efficiency issue, as the data to be sent across the TCP connection can be sent to the TOE from the CPU across the PCI bus using large data burst sizes with none of the smaller TCP packets having to traverse the PCI bus.
History
One of the first patents in this technology, for UDP offload, was issued to Auspex Systems in early 1990. Auspex founder Larry Boucher and a number of Auspex engineers went on to found Alacritech in 1997 with the idea of extending the concept of network stack offload to TCP and implementing it in custom silicon. They introduced the first parallel-stack full offload network card in early 1999; the company's SLIC (Session Layer Interface Card) was the predecessor to its current TOE offerings. Alacritech holds a number of patents in the area of TCP/IP offload.
By 2002, as the emergence of TCP-based storage such as iSCSI spurred interest, it was said that "At least a dozen newcomers, most founded toward the end of the dot-com bubble, are chasing the opportunity for merchant semiconductor accelerators for storage protocols and applications, vying with half a dozen entrenched vendors and in-house ASIC designs."
In 2005 Microsoft licensed Alacritech's patent base and along with Alacritech created the partial TCP offload architecture that has become known as TCP chimney offload. TCP chimney offload centers on the Alacritech "Communication Block Passing Patent". At the same time, Broadcom also obtained a license to build TCP chimney offload chips.
Types of TCP/IP offload
Instead of replacing the TCP stack with a TOE entirely, there are alternative techniques to offload some operations in co-operation with the operating system's TCP stack. TCP checksum offload and large segment offload are supported by the majority of today's Ethernet NICs. Newer techniques like large receive offload and TCP acknowledgment offload are already implemented in some high-end Ethernet hardware, but are effective even when implemented purely in software.
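On Linux, these partial offloads are commonly inspected and toggled with the ethtool utility. The sketch below simply wraps the ethtool -k/-K commands from Python; the interface name eth0 is a placeholder, and the presence of ethtool (and sufficient privileges for the setter) is assumed.

```python
# Illustrative wrapper around ethtool for viewing/toggling NIC offload features.
# Assumes a Linux host with ethtool installed; run the setter with root privileges.
import subprocess

def show_offloads(iface: str = "eth0") -> str:
    """Return the offload feature list printed by `ethtool -k <iface>`."""
    return subprocess.run(["ethtool", "-k", iface],
                          capture_output=True, text=True, check=True).stdout

def set_offload(iface: str, feature: str, enabled: bool) -> None:
    """Toggle one offload feature, e.g. 'tso', 'gso', 'gro' or 'lro'."""
    state = "on" if enabled else "off"
    subprocess.run(["ethtool", "-K", iface, feature, state], check=True)

if __name__ == "__main__":
    print(show_offloads("eth0"))
    # set_offload("eth0", "tso", False)   # e.g. disable TCP segmentation offload
```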
Parallel-stack full offload
Parallel-stack full offload gets its name from the concept of two parallel TCP/IP Stacks. The first is the main host stack which is included with the host OS. The second or "parallel stack" is connected between the Application Layer and the Transport Layer (TCP) using a "vampire tap". The vampire tap intercepts TCP connection requests by applications and is responsible for TCP connection management as well as TCP data transfer. Many of the criticisms in the following section relate to this type of TCP offload.
HBA full offload
HBA (Host Bus Adapter) full offload is found in iSCSI host adapters which present themselves as disk controllers to the host system while connecting (via TCP/IP) to an iSCSI storage device. This type of TCP offload not only offloads TCP/IP processing but it also offloads the iSCSI initiator function. Because the HBA appears to the host as a disk controller, it can only be used with iSCSI devices and is not appropriate for general TCP/IP offload.
TCP chimney partial offload
TCP chimney offload addresses the major security criticism of parallel-stack full offload. In partial offload, the main system stack controls all connections to the host. After a connection has been established between the local host (usually a server) and a foreign host (usually a client) the connection and its state are passed to the TCP offload engine. The heavy lifting of data transmit and receive is handled by the offload device. Almost all TCP offload engines use some type of TCP/IP hardware implementation to perform the data transfer without host CPU intervention. When the connection is closed, the connection state is returned from the offload engine to the main system stack. Maintaining control of TCP connections allows the main system stack to implement and control connection security.
Large receive offload
Large receive offload (LRO) is a technique for increasing inbound throughput of high-bandwidth network connections by reducing central processing unit (CPU) overhead. It works by aggregating multiple incoming packets from a single stream into a larger buffer before they are passed higher up the networking stack, thus reducing the number of packets that have to be processed. Linux implementations generally use LRO in conjunction with the New API (NAPI) to also reduce the number of interrupts.
According to benchmarks, even implementing this technique entirely in software can increase network performance significantly. The Linux kernel supports LRO for TCP in software only. FreeBSD 8 supports LRO in hardware on adapters that support it.
LRO should not operate on machines acting as routers, as it breaks the end-to-end principle and can significantly impact performance.
Generic receive offload
Generic receive offload (GRO) implements a generalised LRO in software that is not restricted to TCP/IPv4 and does not have the issues created by LRO.
Large send offload
In computer networking, large send offload (LSO) is a technique for increasing egress throughput of high-bandwidth network connections by reducing CPU overhead. It works by passing a multipacket buffer to the network interface card (NIC). The NIC then splits this buffer into separate packets. The technique is also called TCP segmentation offload (TSO) or generic segmentation offload (GSO) when applied to TCP. LSO and LRO are independent and use of one does not require the use of the other.
When a system needs to send large chunks of data out over a computer network, the chunks first need breaking down into smaller segments that can pass through all the network elements like routers and switches between the source and destination computers. This process is referred to as segmentation. Often the TCP protocol in the host computer performs this segmentation. Offloading this work to the NIC is called TCP segmentation offload (TSO).
For example, a unit of 64 KiB (65,536 bytes) of data is usually segmented to 45 segments of 1460 bytes each before it is sent through the NIC and over the network. With some intelligence in the NIC, the host CPU can hand over the 64 KB of data to the NIC in a single transmit-request, the NIC can break that data down into smaller segments of 1460 bytes, add the TCP, IP, and data link layer protocol headers (according to a template provided by the host's TCP/IP stack) to each segment, and send the resulting frames over the network. This significantly reduces the work done by the CPU. Many new NICs on the market support TSO.
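The arithmetic in this example can be sketched as below; the calculation is simplified and assumes a fixed 1460-byte maximum segment size with no TCP options.

```python
# Simplified view of the segmentation a TSO-capable NIC performs on behalf of the CPU.
import math

MSS = 1460            # typical TCP payload per segment (1500-byte MTU minus 40 bytes of headers)
payload = 64 * 1024   # 64 KiB buffer handed to the NIC in one transmit request

segments = math.ceil(payload / MSS)
last_segment = payload - (segments - 1) * MSS

print(segments)       # 45 segments
print(last_segment)   # 1296 bytes in the final, partially filled segment
```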
Some network cards implement TSO generically enough that it can be used for offloading fragmentation of other transport layer protocols, or for doing IP fragmentation for protocols that don't support fragmentation by themselves, such as UDP.
Support in Linux
Unlike other operating systems, such as FreeBSD, the Linux kernel does not include support for TOE (not to be confused with other types of network offload). While there are patches from the hardware manufacturers such as Chelsio or Qlogic that add TOE support, the Linux kernel developers are opposed to this technology for several reasons:
Security – because TOE is implemented in hardware, patches must be applied to the TOE firmware, instead of just software, to address any security vulnerabilities found in a particular TOE implementation. This is further compounded by the newness and vendor-specificity of this hardware, as compared to a well tested TCP/IP stack as is found in an operating system that does not use TOE.
Limitations of hardware – because connections are buffered and processed on the TOE chip, resource starvation can more easily occur as compared to the generous CPU and memory available to the operating system.
Complexity – TOE breaks the assumption that kernels make about having access to all resources at all times – details such as memory used by open connections are not available with TOE. TOE also requires very large changes to a networking stack in order to be supported properly, and even when that is done, features like quality of service and packet filtering might not work.
Proprietary – TOE is implemented differently by each hardware vendor. This means more code must be rewritten to deal with the various TOE implementations, at a cost of the aforementioned complexity and, possibly, security. Furthermore, TOE firmware cannot be easily modified since it is closed-source.
Obsolescence – Each TOE NIC has a limited lifetime of usefulness, because system hardware rapidly catches up to TOE performance levels, and eventually exceeds TOE performance levels.
Suppliers
Much of the current work on TOE technology is by manufacturers of 10 Gigabit Ethernet interface cards, such as Broadcom, Chelsio Communications, Emulex, Mellanox Technologies, and QLogic.
See also
Scalable Networking Pack
I/O Acceleration Technology
References
External links
Article: TCP Offload to the Rescue by Andy Currid at ACM Queue
Patent Application 20040042487
Windows Network Task Offload
GSO in Linux
Brief Description on LSO in Linux
Case Studies of Performance issues with LSO and Traffic Shaping (Linux)
FreeBSD 7.0 new features, brief discussion on TSO support
Networking hardware
Network acceleration
Offload Engine
|
3049527
|
https://en.wikipedia.org/wiki/Keychain%20%28software%29
|
Keychain (software)
|
Keychain is the password management system in macOS, developed by Apple. It was introduced with Mac OS 8.6, and has been included in all subsequent versions of the operating system, now known as macOS. A Keychain can contain various types of data: passwords (for websites, FTP servers, SSH accounts, network shares, wireless networks, groupware applications, encrypted disk images), private keys, certificates, and secure notes.
Storage and access
In macOS, keychain files are stored in ~/Library/Keychains/ (and subdirectories), /Library/Keychains/, and /Network/Library/Keychains/, and the Keychain Access GUI application is located in the Utilities folder in the Applications folder. It is free, open source software released under the terms of the APSL-2.0. The command line equivalent of Keychain Access is /usr/bin/security.
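As an illustration of scripted access (a hedged sketch, not official Apple documentation), /usr/bin/security can be driven from a script using its find-generic-password subcommand; the service and account names below are placeholders.

```python
# Read a generic password from the user's login keychain via /usr/bin/security.
# Illustrative only; macOS may prompt the user to allow access to the item.
import subprocess

def keychain_password(service: str, account: str) -> str:
    """Return the password stored for (service, account) in the default keychain."""
    result = subprocess.run(
        ["/usr/bin/security", "find-generic-password",
         "-s", service, "-a", account, "-w"],   # -w prints only the password
        capture_output=True, text=True, check=True)
    return result.stdout.rstrip("\n")

if __name__ == "__main__":
    print(keychain_password("example-service", "alice"))  # hypothetical item
```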
The keychain database is encrypted per-table and per-row with AES-256-GCM. The time at which each credential is decrypted, how long it remains decrypted, and whether the encrypted credential is synced to iCloud vary depending on the type of data stored, and are documented on the Apple support website.
Locking and unlocking
The default keychain file is the login keychain, typically unlocked on login by the user's login password, although the password for this keychain can instead be different from a user's login password, adding security at the expense of some convenience. The Keychain Access application does not permit setting an empty password on a keychain.
The keychain may be set to be automatically "locked" if the computer has been idle for a time, and can be locked manually from the Keychain Access application. When locked, the password has to be re-entered next time the keychain is accessed, to unlock it. Overwriting the file in ~/Library/Keychains/ with a new one (e.g. as part of a restore operation) also causes the keychain to lock and a password is required at next access.
Password synchronization
If the login keychain is protected by the login password, then the keychain's password will be changed whenever the login password is changed from within a logged-in session on macOS. On a shared Mac/non-Mac network, it is possible for the login keychain's password to lose synchronization if the user's login password is changed from a non-Mac system. The same can happen if the password is changed from a directory service such as Active Directory or Open Directory, or from another admin account, e.g. using System Preferences. Some network administrators react to this by deleting the keychain file on logout, so that a new one will be created the next time the user logs in. This means keychain passwords will not be remembered from one session to the next, even if the login password has not been changed. If this happens, the user can restore the keychain file in ~/Library/Keychains/ from a backup, but doing so will lock the keychain, which will then need to be unlocked at next use.
History
Keychains were initially developed for Apple's e-mail system, PowerTalk, in the early 1990s. Among its many features, PowerTalk used plug-ins that allowed mail to be retrieved from a wide variety of mail servers and online services. The keychain concept naturally "fell out" of this code, and was used in PowerTalk to manage all of a user's various login credentials for the various e-mail systems PowerTalk could connect to.
The passwords were not easily retrievable due to the encryption, yet the simplicity of the interface allowed the user to select a different password for every system without fear of forgetting them, as a single password would open the file and return them all. At the time, implementations of this concept were not available on other platforms. Keychain was one of the few parts of PowerTalk that was obviously useful "on its own", which suggested it should be promoted to become a part of the basic Mac OS. But due to internal politics, it was kept inside the PowerTalk system and, therefore, available to very few Mac users.
It was not until the return of Steve Jobs in 1997 that Keychain concept was revived from the now-discontinued PowerTalk. By this point in time the concept was no longer so unusual, but it was still rare to see a keychain system that was not associated with a particular piece of application software, typically a web browser. Keychain was later made a standard part of Mac OS 9, and was included in Mac OS X in the first commercial versions.
Security
Keychain is distributed with both iOS and macOS. The iOS version is simpler because applications that run on mobile devices typically need only very basic Keychain features. For example, features such as ACLs (Access Control Lists) and sharing Keychain items between different apps are not present. Thus, iOS Keychain items are only accessible to the app that created them.
As Mac users’ default storage for sensitive information, Keychain is a prime target for security attacks.
In 2019, 18-year-old German security researcher Linus Henze demonstrated his hack, dubbed KeySteal, that grabs passwords from the Keychain. Initially he withheld details of the hack, demanding Apple set up a bug bounty for macOS. Apple had however not done so when Henze subsequently revealed the hack. It utilized Safari's access to security services, disguised as a utility in macOS that enables IT administrators to manipulate keychains.
See also
List of password managers
Password manager
Cryptography
References
MacOS security technology
PIM-software for MacOS
Free password managers
Software using the Apple Public Source License
|
206586
|
https://en.wikipedia.org/wiki/Technological%20convergence
|
Technological convergence
|
Technological convergence, also known as digital convergence, is the tendency for technologies that were originally unrelated to become more closely integrated and even unified as they develop and advance. For example, watches, telephones, television, computers, and social media platforms began as separate and mostly unrelated technologies, but have converged in many ways into interrelated parts of a telecommunication and media industry, sharing common elements of digital electronics and software.
Definitions
"Convergence is a deep integration of knowledge, tools, and all relevant activities of human activity for a common goal, to allow society to answer new questions to change the respective physical or social ecosystem. Such changes in the respective ecosystem open new trends, pathways, and opportunities in the following divergent phase of the process" (Roco 2002, Bainbridge and Roco 2016).
Siddhartha Menon defines convergence, in his Policy Initiative Dilemmas on Media Convergence: A Cross National Perspective, as integration and digitalization. Integration, here, is defined as "a process of transformation measured by the degree to which diverse media such as phone, data broadcast and information technology infrastructures are combined into a single seamless all purpose network architecture platform". Digitalization is not so much defined by its physical infrastructure as by the content or the medium. Jan van Dijk suggests that "digitalization means breaking down signals into bytes consisting of ones and zeros". Convergence is defined by Blackman, 1998, as a trend in the evolution of technology services and industry structures. Convergence was later defined more specifically as the coming together of telecommunications, computing and broadcasting into a single digital bit-stream. Mueller stands against the statement that convergence is really a takeover of all forms of media by one technology: digital computers.
Media technological convergence is the tendency that, as technology changes, different technological systems sometimes evolve toward performing similar tasks. Digital convergence refers to the convergence of four industries into one conglomerate, ITTCE (Information Technologies, Telecommunication, Consumer Electronics, and Entertainment). Previously separate technologies such as voice (and telephony features), data (and productivity applications), and video can now share resources and interact with each other synergistically. Telecommunications convergence (also called "network convergence") describes emerging telecommunications technologies and network architecture used to migrate multiple communications services into a single network. Specifically, this involves the converging of previously distinct media such as telephony and data communications into common interfaces on single devices; for example, most smartphones can make phone calls and search the web.
Media convergence is the interlinking of computing and other information technologies, media content, media companies and communication networks that have arisen as the result of the evolution and popularization of the Internet as well as the activities, products and services that have emerged in the digital media space. Closely linked to the multilevel process of media convergence are also several developments in different areas of the media and communication sector which are also summarized under the term of media deconvergence. Many experts view this as simply being the tip of the iceberg, as all facets of institutional activity and social life such as business, government, art, journalism, health, and education are increasingly being carried out in these digital media spaces across a growing network of information and communication technology devices. Also included in this topic is the basis of computer networks, wherein many different operating systems are able to communicate via different protocols.
Convergent services, such as VoIP, IPTV, Smart TV, and others, tend to replace the older technologies and thus can disrupt markets. IP-based convergence is inevitable and will result in new service and new demand in the market. When the old technology converges into the public-owned common, IP based services become access-independent or less dependent. The old service is access-dependent.
Elements of technology convergence
There are 5 elements of technological convergence, which are:
Technology: it is common for technologies that are viewed as very different to develop similar features over time that blur their differences. In 1995, a television and a mobile phone were completely different devices. In recent years, they may have similar features such as the ability to connect to wifi, play rich internet-based media and run apps. People may use either their television or phone to play a game or communicate with relatives, using the same software.
Media & content: television and internet services were once viewed as separate but have begun to converge. It is likely that music, movies, video games and informational content will eventually converge to the point that they are no longer distinct formats. For example, future music may always come with an interactive music video that resembles a game.
Services and applications: in the late 1990s, there was a large difference between business and consumer software and services. With time, this line has blurred. Technology tends to move from a large number of highly specific tools towards a small set of flexible tools with broad applications.
Robots & machines: it is increasingly common for machines such as vehicles or appliances to have semi-autonomous features that technically make them robots.
Virtual reality/augmented reality: this can be viewed as the convergence of real life with digital entities such as simulations, games, and information environments.
History of media technological convergence
Communication networks were designed to carry different types of information independently. The older media, such as television and radio, are broadcasting networks with passive audiences. Convergence of telecommunication technology permits the manipulation of all forms of information, voice, data, and video. Telecommunication has changed from a world of scarcity to one of seemingly limitless capacity. Consequently, the possibility of audience interactivity morphs the passive audience into an engaged audience. The historical roots of convergence can be traced back to the emergence of mobile telephony and the Internet, although the term properly applies only from the point in marketing history when fixed and mobile telephony began to be offered by operators as joined products. Fixed and mobile operators were, for most of the 1990s, independent companies. Even when the same organization marketed both products, these were sold and serviced independently.
In the 1990s an implicit and often explicit assumption was that new media was going to replace the old media and Internet was going to replace broadcasting. In Nicholas Negroponte's Being Digital, Negroponte predicts the collapse of broadcast networks in favor of an era of narrow-casting. He also suggests that no government regulation can shatter the media conglomerate. "The monolithic empires of mass media are dissolving into an array of cottage industries... Media barons of today will be grasping to hold onto their centralized empires tomorrow.... The combined forces of technology and human nature will ultimately take a stronger hand in plurality than any laws Congress can invent." The new media companies claimed that the old media would be absorbed fully and completely into the orbit of the emerging technologies. George Gilder dismisses such claims saying, "The computer industry is converging with the television industry in the same sense that the automobile converged with the horse, the TV converged with the nickelodeon, the word-processing program converged with the typewriter, the CAD program converged with the drafting board, and digital desktop publishing converged with the Linotype machine and the letterpress." Gilder believes that computers had come not to transform mass culture but to destroy it.
Media companies put media convergence back on their agenda after the dot-com bubble burst. The erstwhile Knight Ridder promulgated the concept of portable magazines, newspapers, and books in 1994: "Within news corporations it became increasingly obvious that an editorial model based on mere replication in the internet of contents that had previously been written for print newspapers, radio, or television was no longer sufficient." The rise of digital communication in the late 20th century has made it possible for media organizations (or individuals) to deliver text, audio, and video material over the same wired, wireless, or fiber-optic connections. At the same time, it inspired some media organizations to explore multimedia delivery of information. This digital convergence of news media, in particular, was called "Mediamorphosis" by researcher Roger Fidler, in his 1997 book by that name. Today, we are surrounded by a multi-level convergent media world where all modes of communication and information are continually reforming to adapt to the enduring demands of technologies, "changing the way we create, consume, learn and interact with each other".
Converging technological fields
NBIC, an acronym for Nanotechnology, Biotechnology, Information technology and Cognitive science, was, in 2014, the most popular term for converging technologies. It was introduced into public discourse through the publication of Converging Technologies for Improving Human Performance, a report sponsored in part by the U.S. National Science Foundation. Various other acronyms have been offered for the same concept such as GNR (Genetics, Nanotechnology and Robotics) (Bill Joy, 2000, Why the future doesn't need us). Journalist Joel Garreau in Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies — and What It Means to Be Human uses "GRIN", for Genetic, Robotic, Information, and Nano processes, while science journalist Douglas Mulhall in Our Molecular Future: How Nanotechnology, Robotics, Genetics and Artificial Intelligence Will Transform Our World uses "GRAIN", for Genetics, Robotics, Artificial Intelligence, and Nanotechnology. Another acronym coined by the appropriate technology organization ETC Group is "BANG" for "Bits, Atoms, Neurons, Genes".
Converging science and technology fields
A comprehensive term used by Roco, Bainbridge, Tonn and Whitesides is Convergence of Knowledge, Technology and Society (2013). Bainbridge and Roco edited and co-authored the Springer reference Handbook of Science and Technology Convergence (2016) defining the concept of convergence in various science, technology and medical fields. Roco published Principles and Methods that Facilitate Convergence (2015).
Examples of technology implications
Convergent solutions include both fixed-line and mobile technologies. Recent examples of new, convergent services include:
Using the Internet for voice and video telephony
Video on demand
Fixed-mobile convergence
Mobile-to-mobile convergence
Location-based services
Integrated products and bundles
Convergent technologies can integrate the fixed-line with mobile to deliver convergent solutions. Convergent technologies include:
IP Multimedia Subsystem
Session Initiation Protocol
IPTV
Voice over IP
Voice call continuity
Digital video broadcasting – handheld
Convergence in appliances
Some media observers expect that we will eventually access all media content through one device, or "black box". As such, media business practice has been to identify the next "black box" to invest in and provide media for. This has caused a number of problems. Firstly, as "black boxes" are invented and abandoned, the individual is left with numerous devices that can perform the same task, rather than one dedicated to each task. For example, one may own both a computer and a video games console, and subsequently own two DVD players. This is contrary to the streamlined goal of the "black box" theory, and instead creates clutter. Secondly, technological convergence tends to be experimental in nature. This has led to consumers owning technologies with additional functions that are harder, if not impractical, to use rather than one specific device. A combined microwave oven and television is one example: many people would only watch the TV for the duration of the meal's cooking time, or whilst in the kitchen, but would not use the microwave as the household TV. These examples show that in many cases technological convergence is unnecessary or unneeded.
Furthermore, although consumers primarily use a specialized media device for their needs, other "black box" devices that perform the same task can be used to suit their current situation. As a 2002 Cheskin Research report explained: "...Your email needs and expectations are different whether you're at home, work, school, commuting, the airport, etc., and these different devices are designed to suit your needs for accessing content depending on where you are- your situated context." Despite the creation of "black boxes", intended to perform all tasks, the trend is to use devices that can suit the consumer's physical position. Due to the variable utility of portable technology, convergence occurs in high-end mobile devices. They incorporate multimedia services, GPS, Internet access, and mobile telephony into a single device, heralding the rise of what has been termed the "smartphone," a device designed to remove the need to carry multiple devices. Convergence of media occurs when multiple products come together to form one product with the advantages of all of them, also known as the black box. This idea of one technology, concocted by Henry Jenkins, has become known more as a fallacy because of the inability to actually put all technical pieces into one. For example, while people can have e-mail and Internet on their phone, they still want full computers with Internet and e-mail in addition. Mobile phones are a good example, in that they incorporate digital cameras, mp3 players, voice recorders, and other devices. For the consumer, it means more features in less space; for media conglomerates it means remaining competitive.
However, convergence has a downside. Particularly in initial forms, converged devices are frequently less functional and reliable than their component parts (e.g., a mobile phone's web browser may not render some web pages correctly, due to not supporting certain rendering methods, such as the iPhone browser not supporting Flash content). As the number of functions in a single device escalates, the ability of that device to serve its original function decreases. As Rheingold asserts, technological convergence holds immense potential for the "improvement of life and liberty in some ways and (could) degrade it in others". He believes the same technology has the potential to be "used as both a weapon of social control and a means of resistance". Since technology has evolved in the past ten years or so, companies are beginning to converge technologies to create demand for new products. This includes phone companies integrating 3G and 4G on their phones. In the mid 20th century, television converged the technologies of movies and radio, and television is now being converged with the mobile phone industry and the Internet. Phone calls are also being made with the use of personal computers. Converging technologies combine multiple technologies into one. Newer mobile phones feature cameras, and can hold images, videos, music, and other media. Manufacturers now integrate more advanced features, such as video recording, GPS receivers, data storage, and security mechanisms into the traditional cellphone.
Convergence on the internet
The role of the internet has changed from its original use as a communication tool to easier and faster access to information and services, mainly through a broadband connection. The television, radio and newspapers were the world's media for accessing news and entertainment; now, all three media have converged into one, and people all over the world can read and hear news and other information on the internet. The convergence of the internet and conventional TV became popular in the 2010s, through Smart TV, also sometimes referred to as "Connected TV" or "Hybrid TV", (not to be confused with IPTV, Internet TV, or with Web TV). Smart TV is used to describe the current trend of integration of the Internet and Web 2.0 features into modern television sets and set-top boxes, as well as the technological convergence between computers and these television sets or set-top boxes. These new devices most often also have a much higher focus on online interactive media, Internet TV, over-the-top content, as well as on-demand streaming media, and less focus on traditional broadcast media like previous generations of television sets and set-top boxes always have had.
Digital convergence
Digital convergence refers to the tendency for previously unrelated innovations, media sources, and content to become more similar over time. It enables the convergence of access devices and content as well as of industry participants' operations and strategy. This is how this type of technological convergence creates opportunities, particularly in the area of product development and growth strategies for digital product companies. The same can be said in the case of individual content producers such as vloggers on the YouTube video-sharing platform. The convergence in this example is demonstrated in the involvement of the Internet, home devices such as a smart television and camera, the YouTube application, and the digital content. In this setup, there are the so-called "spokes", which are the devices that connect to a central hub, which could be either the smart TV or a PC. Here, the Internet serves as the intermediary, particularly through its interactivity tools and social networking, in order to create unique mixes of products and services via horizontal integration.
The above example highlights how digital convergence encompasses three phenomena:
previously stand-alone devices are being connected by networks and software, significantly enhancing functionalities;
previously stand-alone products are being converged onto the same platform, creating hybrid products in the process; and,
companies are crossing traditional boundaries such as hardware and software to provide new products and new sources of competition.
Another example is the convergence of different types of digital content. According to Harry Strasser, former CTO of Siemens, "[digital convergence will substantially impact people's lifestyle and work style]".
Convergence in the marketplace
Convergence is a global marketplace dynamic in which different companies and sectors are being brought together, both as competitors and collaborators, across traditional boundaries of industry and technology. In a world dominated by convergence, many traditional products, services and types of companies will become less relevant, but a stunning array of new ones is possible. An array of technology developments act as accelerators of convergence, including mobility, analytics, cloud, digital and social networks. As a disruptive force, convergence is a threat to the unprepared, but a tremendous growth opportunity for companies that can out-innovate and out-execute their ever-expanding list of competitors under dramatically new marketplace rules. With convergence, lines are blurred as companies diversify outside of their original markets. For instance, mobile services are increasingly an important part of the automobile; chemicals companies work with agribusiness; device manufacturers sell music, video and books; booksellers become consumer device companies; search and advertising companies become telecommunications companies ("telcos"); media companies act like telcos and vice versa; retailers act like financial services companies and vice versa; cosmetics companies work with pharmaceutical companies; and more. Mobile phone usage has broadened dramatically, enabling users to make payments online, watch videos, or even adjust their home thermostat while away at work.
Media convergence
Generally, media convergence refers to the merging of both old and new media and can be seen as a product, a system or a process. Jenkins states that convergence is, "the flow of content across multiple media platforms, the cooperation between multiple media industries, and the migratory behaviour of media audiences who would go almost anywhere in search of the kinds of entertainment experiences they wanted" According to Jenkins, there are five areas of convergence: technological, economic, social or organic, cultural and global. So media convergence is not just a technological shift or a technological process, it also includes shifts within the industrial, cultural, and social paradigms that encourage the consumer to seek out new information. Convergence, simply put, is how individual consumers interact with others on a social level and use various media platforms to create new experiences, new forms of media and content that connect us socially, and not just to other consumers, but to the corporate producers of media in ways that have not been as readily accessible in the past. However, Lugmayr and Dal Zotto argued, that media convergence takes place on technology, content, consumer, business model, and management level. They argue, that media convergence is a matter of evolution and can be described through the triadic phenomenon of convergence, divergence, and coexistence. Today's digital media ecosystems coexist, as e.g. mobile app stores provide vendor lock-ins into particular eco-systems; some technology platforms are converging under one technology, due to e.g. the usage of common communication protocols as in digital TV; and other media are diverging, as e.g. media content offerings are more and more specializing and provides a space for niche media.
Advances in technology bring the ability for technological convergence that Rheingold believes can alter the "social-side effects," in that "the virtual, social and physical world are colliding, merging and coordinating." It was predicted in the late 1980s, around the time that CD-ROM was becoming commonplace, that a digital revolution would take place, and that old media would be pushed to one side by new media. Broadcasting is increasingly being replaced by the Internet, enabling consumers all over the world the freedom to access their preferred media content more easily and at a more available rate than ever before.
However, when the dot-com bubble of the 1990s suddenly popped, that poured cold water over the talk of such a digital revolution. In today's society, the idea of media convergence has once again emerged as a key point of reference as newer as well as established media companies attempt to visualize the future of the entertainment industry. If this revolutionary digital paradigm shift presumed that old media would be increasingly replaced by new media, the convergence paradigm that is currently emerging suggests that new and old media would interact in more complex ways than previously predicted. The paradigm shift that followed the digital revolution assumed that new media was going to change everything. When the dot com market crashed, there was a tendency to imagine that nothing had changed. The real truth lay somewhere in between as there were so many aspects of the current media environment to take into consideration. Many industry leaders are increasingly reverting to media convergence as a way of making sense in an era of disorientating change. In that respect, media convergence in theory is essentially an old concept taking on a new meaning. Media convergence, in reality, is more than just a shift in technology. It alters relationships between industries, technologies, audiences, genres and markets. Media convergence changes the rationality media industries operate in, and the way that media consumers process news and entertainment. Media convergence is essentially a process and not an outcome, so no single black box controls the flow of media. With proliferation of different media channels and increasing portability of new telecommunications and computing technologies, we have entered into an era where media constantly surrounds us.
Media convergence requires that media companies rethink existing assumptions about media from the consumer's point of view, as these affect marketing and programming decisions. Media producers must respond to newly empowered consumers.
Conversely, it would seem that hardware is instead diverging whilst media content is converging. Media has developed into brands that can offer content in a number of forms. Two examples of this are Star Wars and The Matrix. Both are films, but are also books, video games, cartoons, and action figures. Branding encourages expansion of one concept, rather than the creation of new ideas. In contrast, hardware has diversified to accommodate media convergence. Hardware must be specific to each function. While most scholars argue that the flow of cross-media is accelerating, O'Donnell suggests, especially between films and video games, that the semblance of media convergence is misunderstood by people outside of the media production industry. The conglomerated media industry continues to sell the same story line in different media. For example, Batman appears in comics, films, anime, and games. However, the data used to create the image of Batman in each medium is created individually by different teams of creators. The same characters and visual effects appear repeatedly in different media because of the media industry's drive for synergy, to make them as similar as possible. In addition, convergence does not happen when a game is produced for two different consoles: nothing flows between the two consoles, because it is faster for the industry to create the game from scratch for each platform.
One of the more interesting new media journalism forms is virtual reality. Reuters, a major international news service, has created and staffed a news “island” in the popular online virtual reality environment Second Life. Open to anyone, Second Life has emerged as a compelling 3D virtual reality for millions of citizens around the world who have created avatars (virtual representations of themselves) to populate and live in an altered state where personal flight is a reality, altered egos can flourish, and real money (US$1,296,257 were spent during the 24 hours concluding at 10:19 a.m. eastern time January 7, 2008) can be made without ever setting foot into the real world. The Reuters Island in Second Life is a virtual version of the Reuters real-world news service but covering the domain of Second Life for the citizens of Second Life (numbering 11,807,742 residents as of January 5, 2008).
Media convergence in the digital era means the changes that are taking place with older forms of media and media companies. Media convergence has two roles, the first is the technological merging of different media channels – for example, magazines, radio programs, TV shows, and movies, now are available on the Internet through laptops, iPads, and smartphones. As discussed in Media Culture (by Campbell), convergence of technology is not new. It has been going on since the late 1920s. An example is RCA, the Radio Corporation of America, which purchased Victor Talking Machine Company and introduced machines that could receive radio and play recorded music. Next came the TV, and radio lost some of its appeal as people started watching television, which has both talking and music as well as visuals. As technology advances, convergence of media change to keep up. The second definition of media convergence Campbell discusses is cross-platform by media companies. This usually involves consolidating various media holdings, such as cable, phone, television (over the air, satellite, cable) and Internet access under one corporate umbrella. This is not for the consumer to have more media choices, this is for the benefit of the company to cut down on costs and maximize its profits. As stated in the article, Convergence Culture and Media Work, by Mark Deuze, “the convergence of production and consumption of media across companies, channels, genres, and technologies is an expression of the convergence of all aspects of everyday life: work and play, the local and the global, self and social identity."
Convergence culture
Henry Jenkins defines convergence culture as the flow of content across multiple media platforms, the cooperation between multiple media industries, and the migratory behavior of media audiences who will go almost anywhere in search of the kinds of entertainment experiences they want. Convergence culture is an important factor in transmedia storytelling, introducing stories and arguments from one form of media into many. Transmedia storytelling is defined by Jenkins as a process "where integral elements of a fiction get dispersed systematically across multiple delivery channels for the purpose of creating a unified and coordinated entertainment experience. Ideally, each medium makes its own unique contribution to the unfolding of the story". For instance, The Matrix starts as a film, which is followed by two other instalments, but in a convergence culture it is not constrained to that form. It becomes a story told not only in the movies but in animated shorts, video games, and comic books, three different media platforms. Online, a wiki is created to keep track of the story's expanding canon. Fan films, discussion forums, and social media pages also form, expanding The Matrix across different online platforms. Convergence culture took what started as a film and expanded it across almost every type of media. Another example is the "Bert is Evil" images: pictures placing Bert alongside Bin Laden appeared in CNN coverage of anti-American protests following September 11, and the association traces back to Ignacio's Photoshop project, created for fun.
Convergence culture is a part of participatory culture. Because average people can now access their interests on many types of media they can also have more of a say. Fans and consumers are able to participate in the creation and circulation of new content. Some companies take advantage of this and search for feedback from their customers through social media and sharing sites such as YouTube.
Besides marketing and entertainment, convergence culture has also affected the way we interact with news and information. We can access news on multiple levels of media from the radio, TV, newspapers, and the internet. The internet allows more people to be able to report the news through independent broadcasts and therefore allows a multitude of perspectives to be put forward and accessed by people in many different areas. Convergence allows news to be gathered on a much larger scale. For instance, photographs were taken of torture at Abu Ghraib. These photos were shared and eventually posted on the internet. This led to the breaking of a news story in newspapers, on TV, and the internet.
Media scholar Henry Jenkins has described media convergence as closely intertwined with participatory culture.
Cell phone convergence
The social function of the cell phone changes as the technology converges. Because of technological advancement, cell phones function as more than just phones: they contain an internet connection, video players, MP3 players, games, and cameras, and their areas of use have increased over time, partly substituting for other devices.
A mobile convergence device is one that, if connected to a keyboard, monitor, and mouse, can run applications as a desktop computer would. Convergent operating systems include the GNU/Linux operating systems Ubuntu Touch, Plasma Mobile and PureOS.
Convergence can also refer to being able to run the same app across different devices and being able to develop apps for different devices (such as smartphones, TVs and desktop computers) at once, with the same code base. This can be done via GNU/Linux applications that adapt to the device they're being used on (including native apps designed for such via frameworks like Kirigami) or by the use of multi-platform frameworks like the Quasar framework that use tools such as Apache Cordova, Electron and Capacitor, which can increase the userbase, the pace and ease of development and the number of reached platforms while decreasing development costs.
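As a rough illustration of the "same code base, many devices" idea described above, the short TypeScript sketch below uses Capacitor's platform-detection call to pick a user-interface layout at runtime. The layout names, the width threshold, and the chooseLayout helper are hypothetical illustrations of the concept, not part of Quasar, Kirigami, or any other framework mentioned here.

```typescript
// A minimal sketch (assumption: Capacitor is installed) of one code base
// adapting to the device it runs on. Capacitor.getPlatform() is a real API;
// the layout categories below are hypothetical examples.
import { Capacitor } from '@capacitor/core';

type Layout = 'touch-phone' | 'touch-tablet-or-tv' | 'desktop';

// Choose a UI layout from the same code base depending on where the app runs.
function chooseLayout(): Layout {
  const platform = Capacitor.getPlatform(); // returns 'ios', 'android', or 'web'
  if (platform === 'ios' || platform === 'android') {
    return 'touch-phone';
  }
  // In the web build the same code can serve desktops, TVs, or tablets;
  // a simple width check is one (hypothetical) way to adapt.
  return window.innerWidth >= 1024 ? 'desktop' : 'touch-tablet-or-tv';
}

console.log(`Rendering ${chooseLayout()} layout from a single code base.`);
```

The point of the sketch is only that the branching happens inside one shared code base, which is what lets a single team target several converged device classes at once.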
Social movements
The integration of social movements in cyberspace is one of the potential strategies that social movements can use in the age of media convergence. Because of the neutrality of the internet and its end-to-end design, the power structure of the internet was designed to avoid discrimination between applications. Mexico's Zapatista campaign for land rights was one of the most influential cases of the information age; Manuel Castells defines the Zapatistas as "the first informational guerrilla movement". Although the Zapatista uprising had been marginalized by the popular press, the Zapatistas were able to construct a grassroots, decentralized social movement by using the internet. The Zapatista Effect, observed by Cleaver, continues to organize social movements on a global scale. A sophisticated webmetric analysis, which maps the links between different websites and seeks to identify important nodal points in a network, demonstrates that the Zapatista cause binds together hundreds of global NGOs. The majority of the social movements organized by the Zapatistas target their campaigns especially against global neoliberalism. A successful social movement needs not only online support but also protest in the streets. Papic wrote "Social Media Alone Do Not Instigate Revolutions", which discusses how the use of social media in social movements requires good organization both online and offline. A study, "Journalism in the age of media convergence: a survey of undergraduates' technology-related news habits", found that although several focus group respondents reported they generally did not actively engage with media convergence, such as viewing slide shows or listening to podcasts that accompanied an online story, as part of their Web-based news consumption, a significant number of students nonetheless indicated that the interactive features often associated with online news and media convergence were appealing to them.
Examples in regulation
VoIP
The U.S. Federal Communications Commission (FCC) has not been able to decide how to regulate VoIP (Internet telephony) because the convergent technology is still growing and changing. In addition, the FCC has been tentative about setting regulations on VoIP in order to promote competition in the telecommunications industry. There is no clear line between telecommunication services and information services because of the growth of new convergent media. Historically, telecommunication is subject to state regulation. The state of California is concerned that the increasing popularity of internet telephony will eventually obliterate funding for the Universal Service Fund. Some states attempt to assert their traditional role of common carrier oversight onto this new technology. Meisel and Needles (2005) suggest that decisions by the FCC, federal courts, and state regulatory bodies on access line charges will directly affect the speed at which the Internet telephony market grows. On one hand, the FCC is hesitant to regulate convergent technology because VoIP differs from traditional telecommunications and no fixed model for legislation yet exists. On the other hand, regulation is needed because services delivered over the internet might quickly replace traditional telecommunication services, which would affect the entire economy.
Convergence has also raised several debates about the classification of certain telecommunications services. As the lines between data transmission and voice and media transmission erode, regulators are faced with the task of how best to classify the converging segments of the telecommunication sector. Traditionally, telecommunication regulation has focused on the operation of physical infrastructure, networks, and access to networks. Content is not regulated in telecommunications because it is considered private. In contrast, film and television are regulated by content: a rating system regulates their distribution to audiences. Self-regulation is promoted by the industry; Bogle senior persuaded the entire industry to pay a 0.1 percent levy on all advertising, and the money was used to fund the Advertising Standards Authority, which keeps the government from legislating in the media industry.
The premises for regulating the new media of two-way communication largely concern the change from old media to new media. Each medium has different features and characteristics. First, the internet, the new medium, handles all forms of information: voice, data, and video. Second, regulation of the old media, such as radio and television, emphasized the scarcity of channels; the internet, by contrast, has virtually limitless capacity, due to its end-to-end design. Third, two-way communication encourages interactivity between content producers and audiences.
"...Fundamental basis for classification, therefore, is to consider the need for regulation in terms of either market failure or in the public interests"(Blackman). The Electronic Frontier Foundation (EFF), founded in 1990, is a non profit organization to defend free speech, privacy, innovation and consumer rights. DMCA, Digital Millennium Copyright Act regulates and protect the digital content producers and consumers.
Emerging trends in communications
Network neutrality has emerged as an issue. Wu and Lessig (2004) set out two reasons to adopt a neutral network model for computer networks. First, "a neutral network eliminates the risk of future discrimination, providing more incentive to invest in broadband application development." Second, a neutral network facilitates fair competition among applications, with no bias between them. The two reasons also coincide with the FCC's interest in stimulating investment and enhancing innovation in broadband technology and services.
Despite regulatory efforts at deregulation, privatization, and liberalization, the infrastructure barrier has been a negative factor in achieving effective competition. Kim et al. argue that IP dissociates the telephony application from the infrastructure and that Internet telephony is at the forefront of such dissociation. The neutrality of the network is very important for fair competition. As the former FCC Chairman Michael Powell put it: "From its inception, the Internet was designed, as those present during the course of its creating will tell you, to prevent government or a corporation or anyone else from controlling it. It was designed to defeat discrimination against users, ideas and technologies". For these reasons, Shin concludes that regulators should regulate applications and infrastructure separately.
The layered model was first proposed by Solum and Chung, Sicker, and Nakahata. Sicker, Warbach, and Witt have supported using a layered model to regulate the telecommunications industry given the emergence of convergence services. Many researchers take different layered approaches, but they all agree that emerging convergent technology will create challenges and ambiguities for regulation. The key point of the layered model is that it reflects the reality of network architecture and current business models.
The layered model consists of: (1) the access layer, where the physical infrastructure resides (copper wires, cable, or fiber optics); (2) the transport layer, the provider of the service; (3) the application layer, the interface between the data and the users; and (4) the content layer, the layer that users see. In Convergence Technologies and the Layered Policy Model: Implication for Regulating Future Communications, Shin combines the layered model and network neutrality as the principles for regulating the future convergent media industry.
Messaging
Combination services include those that integrate SMS with voice, such as voice SMS. Providers include Bubble Motion, Jott, Kirusa, and SpinVox. Several operators have launched services that combine SMS with mobile instant messaging (MIM) and presence. Text-to-landline services also exist, where subscribers can send text messages to any landline phone and are charged at standard rates. The text messages are converted into spoken language. This service has been popular in America, where fixed and mobile numbers are similar. Inbound SMS has been converging to enable reception of different formats (SMS, voice, MMS, etc.). In April 2008, O2 UK launched voice-enabled shortcodes, adding voice functionality to the five-digit codes already used for SMS. This type of convergence is helpful for media companies, broadcasters, enterprises, call centres and help desks who need to develop a consistent contact strategy with the consumer. Because SMS is very popular today, it became relevant to include text messaging as a contact possibility for consumers. To avoid having multiple numbers (one for voice calls, another one for SMS), a simple way is to merge the reception of both formats under one number. This means that a consumer can text or call one number and be sure that the message will be received.
Mobile
"Mobile service provisions" refers not only to the ability to purchase mobile phone services, but the ability to wirelessly access everything: voice, Internet, audio, and video. Advancements in WiMAX and other leading edge technologies provide the ability to transfer information over a wireless link at a variety of speeds, distances, and non-line-of-sight conditions.
Multi-play in telecommunications
Multi-play is a marketing term describing the provision of different telecommunication services, such as Internet access, television, telephone, and mobile phone service, by organizations that traditionally only offered one or two of these services. Multi-play is a catch-all phrase; usually, the terms triple play (voice, video and data) or quadruple play (voice, video, data and wireless) are used to describe a more specific meaning. A dual play service is a marketing term for the provisioning of the two services: it can be high-speed Internet (digital subscriber line) and telephone service over a single broadband connection in the case of phone companies, or high-speed Internet (cable modem) and TV service over a single broadband connection in the case of cable TV companies. The convergence can also concern the underlying communication infrastructure. An example of this is a triple play service, where communication services are packaged allowing consumers to purchase TV, Internet, and telephony in one subscription. The broadband cable market is transforming as pay-TV providers move aggressively into what was once considered the telco space. Meanwhile, customer expectations have risen as consumer and business customers alike seek rich content, multi-use devices, networked products and converged services including on-demand video, digital TV, high speed Internet, VoIP, and wireless applications. It is uncharted territory for most broadband companies.
A quadruple play service combines the triple play service of broadband Internet access, television, and telephone with wireless service provisions. This service set is also sometimes humorously referred to as "The Fantastic Four" or "Grand Slam". A fundamental aspect of the quadruple play is not only the long-awaited broadband convergence but also the players involved: many of them, from the largest global service providers to the smallest of startup service providers, are interested. The opportunities are attractive, since the big three telecom services (telephony, cable television, and wireless) could combine their industries. In the UK, the merger of NTL:Telewest and Virgin Mobile resulted in a company offering a quadruple play of cable television, broadband Internet, home telephone, and mobile telephone services.
Home network
Early in the 21st century, home LAN convergence so rapidly integrated home routers, wireless access points, and DSL modems that users were hard put to identify the resulting box they used to connect their computers to their Internet service. A general term for such a combined device is a residential gateway.
See also
Computer multitasking (the software equivalent of a converged device)
Convergence (telecommunications)
Dongle, which can facilitate inclusion of non-converged devices
Digital rhetoric
Generic Access Network
History of science and technology
UMA Today
IP Multimedia Subsystem (IMS)
Mobile VoIP
Next Generation Networks
Next generation network services
Post-convergent
Second screen
Technology
Further reading
Jenkins, H. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006. Print.
Artur Lugmayr, Cinzia Dal Zotto, Media Convergence Handbook, Vol. 1 and Vol. 2, Springer-Verlag, 2016
References
External links
Amdocs MultiPlay Strategy WhitePaper
Technology Convergence Update with Bob Brown – Video
Business software
Crossover devices
Digital technology
Digital television
Digital media
History of telecommunications
Information technology
Mass media technology
Network architecture
Network protocols
Science and technology studies
Technological change
Telecommunications systems
Telephony
Television terminology
Technology systems
|
46574112
|
https://en.wikipedia.org/wiki/Internet%20Research%20Agency
|
Internet Research Agency
|
The Internet Research Agency (IRA; translit: Agentstvo Internet-Issledovaniy), also known as Glavset and known in Russian Internet slang as the Trolls from Olgino, is a Russian company engaged in online influence operations on behalf of Russian business and political interests. It is linked to Russian oligarch Yevgeny Prigozhin and based in Saint Petersburg, Russia.
The January 2017 report issued by the United States Intelligence Community – Assessing Russian Activities and Intentions in Recent US Elections – described the Agency as a troll farm: "The likely financier of the so-called Internet Research Agency of professional trolls located in Saint Petersburg is a close ally of [Vladimir] Putin with ties to Russian intelligence," commenting that "they previously were devoted to supporting Russian actions in Ukraine—[and] started to advocate for President-elect Trump as early as December 2015."
The agency has employed fake accounts registered on major social networking sites, discussion boards, online newspaper sites, and video hosting services to promote the Kremlin's interests in domestic and foreign policy including Ukraine and the Middle East as well as attempting to influence the 2016 United States presidential election. More than 1,000 employees reportedly worked in a single building of the agency in 2015.
The extent to which a Russian agency has tried to influence public opinion using social media became better known after a June 2014 BuzzFeed News article greatly expanded on government documents published by hackers earlier that year. The Internet Research Agency gained more attention by June 2015, when one of its offices was reported as having data from fake accounts used for biased Internet trolling. Subsequently, there were news reports of individuals receiving monetary compensation for performing these tasks.
On 16 February 2018, a United States grand jury indicted 13 Russian nationals and 3 Russian entities, including the Internet Research Agency, on charges of violating criminal laws with the intent to interfere "with U.S. elections and political processes", according to the Justice Department. In July 2020, President Trump revealed that he approved a cyberattack against the organization in 2018 that disrupted or shut down its operations.
Origin
The company was founded in mid-2013. In 2013, Novaya Gazeta newspaper reported that Internet Research Agency Ltd's office was in Olgino, a historic district of Saint Petersburg.
The terms "Trolls from Olgino" and "Olgino's trolls" (, "Ольгинские тролли") have become general terms denoting trolls who spread pro-Russian propaganda, not only necessarily those based at the office in Olgino.
Organizers
Strategic
The Russian newspaper Vedomosti links the strategy, approved by the Russian authorities, of manipulating public consciousness through new media to Vyacheslav Volodin, first deputy head of Vladimir Putin's Presidential Administration of Russia.
Tactical
Journalists have written that Alexey Soskovets, who had participated in the Russian youth political community, was directly connected to the office in Olgino. His company, North-Western Service Agency, won 17 or 18 contracts (according to different sources) for organizing celebrations, forums, and sport competitions for the authorities of Saint Petersburg, and Soskovets' company was the only bidder in half of those tenders. In mid-2013 the agency won a tender to provide freight services for participants of the Seliger camp.
According to Russian media, Internet Research Ltd., founded in March 2014, joined the IRA's activity. The newspaper Novaya Gazeta reported that this company is a successor of Internet Research Agency Ltd. Internet Research Ltd. is considered to be linked to Yevgeny Prigozhin, head of the holding company Concord Management and Consulting, and the "Trolls of Olgino" are considered to be his project. As of October 2014, the company belonged to Mikhail Bystrov, who had been the head of the police station in the Moscow district of Saint Petersburg.
Russian media point out that, according to documents published by hackers from Anonymous International, Concord Management is directly involved in administering the trolling operation through the agency. Researchers cite e-mail correspondence in which Concord Management gives instructions to trolls and receives reports on accomplished work. According to journalists, Concord Management organized banquets in the Kremlin and also cooperated with Voentorg and the Russian Ministry of Defence.
Despite links to Alexei Soskovets, Nadejda Orlova, deputy head of the Committee for Youth Policy in Saint Petersburg, disputed a connection between her institution and the trolling offices.
Finnish journalist Jessikka Aro, who reported extensively on the pro-Russian trolling activities in Finland, was targeted by an organized campaign of hate, disinformation and harassment.
Offices
Saint Petersburg
2013: 131 Primorskoye Shosse, Olgino, Saint Petersburg
As reported by Novaya Gazeta, in the end of August 2013, the following message appeared in social networks: "Internet operators wanted! Job at chic office in Olgino!!! (st. Staraya Derevnia), salary 25960 per month (USD$780 as of 2013). Task: posting comments at profile sites in the Internet, writing thematic posts, blogs, social networks. Reports via screenshots. Individual schedule [...] Payment every week, 1180 per shift (from 8.00 to 16.00, from 10.30 to 18.30, from 14.00 to 22.00). PAYMENTS EVERY WEEK AND FREE MEALS!!! Official job placement or according to contract (at will). Tuition possible."
As reported by media and former employees, the office in Olgino, Primorsky district, St. Petersburg, had existed and had been functioning since September 2013. It was situated in a white cottage, 15 minutes by underground railway from Staraya Derevnia station, opposite Olgino railway station. Workplaces for troll employees were placed in basement rooms.
2014: 55 Ulitsa Savushkina (Street), Saint Petersburg
According to the Russian online newspaper DP.ru, several months before October 2014 the office moved from Olgino to a four-story building at 55 Savushkina Street, Primorsky district, St. Petersburg. As reported by journalists, the building is officially an unfinished construction project and remained so as of March 2015.
A New York Times investigative reporter was told that the Internet Research Agency had shortened its name to "Internet Research," and as of June 2015 had been asked to leave the 55 Savushkina Street location "a couple of months ago" because "it was giving the entire building a bad reputation." A possibly related organization, FAN or Federal News Agency, was located in the building. The New York Times article describes various experiences reported by former employees of the Internet Research Agency at the Savushkina Street location. It also describes several disruptive hoaxes in the US and Europe, such as the Columbian Chemicals Plant explosion hoax, that may be attributable to the Internet Research Agency or similar Russian-based organizations.
1 February 2018: Optikov street, 4, building 3, Lakhta-2 business center, Lakhta, Saint Petersburg
As reported by the Russian online newspaper DP.ru in December 2017, the office moved on 1 February 2018 from the four-story building at 55 Savushkina Street to four floors at 4 Optikov Street, building 3, in the Lakhta-2 business center in Lakhta. Beginning in February 2018, they are also known as the "Lakhta Trolls".
Other cities
Novaya Gazeta reported that, according to Alexey Soskovets, head of the office in Olgino, North-Western Service Agency was hiring employees for similar projects in Moscow and other cities in 2013.
Work organization
More than 1,000 paid bloggers and commenters reportedly worked only in a single building at Savushkina Street in 2015. Many other employees work remotely. According to BuzzFeed News, more than 600 people were generally employed in the trolls' office earlier, in June 2014. Each commentator has a daily quota of 100 comments.
Trolls take shifts writing mainly in blogs on LiveJournal and VKontakte, on subjects along the assigned propaganda lines. Included among the employees are artists who draw political cartoons. They work 12-hour shifts, two days on and two days off. A blogger's quota is ten posts per shift, each post at least 750 characters. A commenter's norm is 126 comments and two posts per account. Each blogger is in charge of three accounts.
Employees at the Olgino office earned 25,000 Russian rubles per month; those at the Savushkina Street office earned approximately 40,000 Russian rubles. In May 2014, Fontanka.ru described schemes for plundering the federal budget, intended to go toward the trolling organization. In 2017 another whistleblower said that with bonuses and long working hours the salary can reach 80,000 rubles.
An employee interviewed by The Washington Post described the work.
According to a 2018 Kommersant article, Yaroslav Ignatovsky (born 1983, Leningrad) heads Politgen and is a political strategist who has coordinated the trolls' efforts for Prigozhin.
Trolling themes
According to the testimonies of the investigative journalists and former employees of the offices, the main topics for posts included:
Criticism of Alexei Navalny, his sponsors, and Russian opposition in general;
Criticism of Ukraine's and the United States' foreign policies, and of the top politicians of these states;
Praise for Vladimir Putin and the policy of the Russian Federation;
Praise for and defense of Bashar al-Assad.
The IRA has also leveraged trolls to erode trust in American political and media institutions and to showcase certain politicians as incompetent. Journalists have written that the themes of the trolling were consistent, in topics and timing, with those of other Russian propaganda outlets. Technical points used by trolls were taken mainly from content disseminated by RT (formerly Russia Today).
A 2015 BBC News investigation identified the Olgino factory as the most likely producer of a September 2015 "Saiga 410K review" video in which an actor posing as a U.S. soldier shoots at a book that turns out to be a Quran, which sparked outrage. BBC News found, among other irregularities, that the soldier's uniform is not used by the U.S. military and is easily purchased in Russia, and that the actor filmed was most likely a bartender from Saint Petersburg related to a troll factory employee.
The citizen-journalism site Bellingcat identified the team from Olgino as the real authors of a video attributed to the Azov Battalion in which masked soldiers threaten the Netherlands for organizing the referendum on the Ukraine–European Union Association Agreement.
Organized anti-Ukrainian campaign
At the beginning of April 2014, an organized online campaign began to shift public opinion in the Western world in a way that would be useful to the Russian authorities regarding the Russian military intervention in Ukraine in 2014. Hacked and leaked documents from that time contain instructions for commenters posting at the websites of Fox News, The Huffington Post, TheBlaze, Politico, and WorldNetDaily. The working-hour requirement for the trolls is also mentioned: 50 comments under news articles per day. Each blogger has to manage six accounts on Facebook, publish at least three posts every day, and participate twice in group discussions. Other employees have to manage 10 accounts on Twitter, publishing 50 tweets every day. Journalists concluded that Igor Osadchiy was a probable leader of the project and that the campaign itself was run by Internet Research Agency Ltd. Osadchiy denied any connection to the agency.
The company is also one of the main sponsors of the anti-Western exhibition Material Evidence.
At the beginning of 2016, Ukraine's state-owned news agency Ukrinform claimed to have exposed a system of bots in social networks that called for violence against the Ukrainian government and for starting "The Third Maidan". It reported that the organizer of this system was Sergiy Zhuk, a former anti-Ukrainian combatant from Donbass, who allegedly conducted his Internet activity from the Vnukovo District of Moscow.
Reactions
Foreign
In March 2014, the Polish edition of Newsweek expressed suspicion that Russia was employing people to "bombard" its website with pro-Russian comments on Ukraine-related articles. Poland's governmental computer emergency response team later confirmed that pro-Russia commentary had flooded Polish Internet portals at the start of the Ukrainian crisis. German-language media websites were also flooded with pro-Russia comments in the spring of 2014.
In late May 2014, the hacker group Anonymous International began publishing documents received from hacked emails of Internet Research Agency managers.
In May–June 2014, Internet trolls invaded news media sites and posted large numbers of pro-Russian comments in broken English.
In March 2015 a service enabling censorship of sources of anti-Ukrainian propaganda in social networks inside Ukraine was launched.
On 16 February 2018, the United States Justice Department announced the indictment of the Internet Research Agency, while also naming more than a dozen individual suspects who allegedly worked there, as part of the special counsel's investigation into criminal interference with the 2016 election.
Assessments
Russian bloggers Anton Nosik, Rustem Adagamov, and Dmitriy Aleshkovskiy have said that paid Internet trolls do not change public opinion; their use is simply a way to steal budget money. The political scientist Thomas Rid has said that the IRA was the least effective of all Russia's interference campaigns in the 2016 U.S. election, despite its outsized press coverage, and that it made no measurable impact on American voters.
Leonid Volkov, a politician working for Alexei Navalny's Anti-Corruption Foundation, suggests that the point of sponsoring paid Internet trolling is to make the Internet so distasteful that ordinary people are not willing to participate. The Columbian Chemicals Plant explosion hoax of 11 September 2014 was the work of the Internet Research Agency.
Additional activities of organizers
Based on the documents published by Anonymous International, Concord Management and Consulting was linked to the funding of several media outlets in Ukraine and Russia, including Kharkiv News Agency, News of Neva, Newspaper About Newspapers, Business Dialog, and Journalist Truth.
The Columbian Chemicals Plant explosion hoax of 11 September 2014, which claimed an explosion had occurred at a chemical plant in Centerville, St. Mary Parish, Louisiana, was described in June 2015 by The New York Times Magazine as "a highly coordinated disinformation campaign", with the "virtual assault" attributed to the Internet Research Agency.
Three months later, the same accounts posted false messages on Twitter about an Ebola outbreak in Atlanta under the hashtag #EbolaInAtlanta, which were quickly relayed and picked up by users living in the city. A video was then posted on YouTube showing a medical team treating an alleged Ebola victim at Atlanta Airport. On the same day, a different group launched a rumor on Twitter under the hashtag #shockingmurderinatlanta, reporting the death of an unarmed black woman shot by police. Again, a blurry, poorly filmed video was circulated to support the rumor.
Between July 2014 and September 2017, the IRA used bots and trolls on Twitter to sow discord about the safety of vaccines. The campaign used sophisticated Twitter bots to amplify highly polarizing pro-vaccine and anti-vaccine messages containing the hashtag #VaccinateUS.
In September 2017 Facebook said that ads had been "geographically targeted". Facebook revealed that during the 2016 United States presidential election, IRA had purchased advertisements on the website for US$100,000, 25% of which were geographically targeted to the U.S. Facebook's chief security officer said that the ads "appeared to focus on amplifying divisive social and political messages across the ideological spectrum".
According to a 17 October 2017 BuzzFeed News report, IRA duped American activists into taking real action via protests and self-defense training in what would seem to be a further attempt to exploit racial grievances.
On 16 February 2018, the Internet Research Agency, along with 13 Russian individuals and two other Russian organizations, was indicted following an investigation by Special Counsel Robert Mueller with charges stemming from "impairing, obstructing, and defeating the lawful functions of government."
On 23 March 2018, The Daily Beast revealed new details about IRA gathered from leaked internal documents, which showed that IRA used Reddit and Tumblr as part of its influence campaign. On the same day, Tumblr announced that they had banned 84 accounts linked to IRA, saying that they had spread misinformation through conventional postings rather than advertisements.
In October 2018 the US Justice Department filed charges against Russian accountant Elena Khusyaynova for working with the IRA to influence not only the 2016 elections but also the upcoming 2018 midterm elections.
Rallies and protests organized by IRA in the United States
On 4 April 2016, a rally in Buffalo, New York, protested the death of India Cummings, a black woman who had recently died in police custody. IRA's "Blacktivist" Facebook account actively promoted the event and reached out directly to local activists on Facebook Messenger, asking them to circulate petitions and print posters. "Blacktivist" supplied the petitions and poster artwork.
On 16 April 2016, a rally protesting the death of Freddie Gray attracted large crowds in Baltimore. IRA's "Blacktivist" Facebook group promoted and organized the event, including reaching out to local activists.
On 23 April 2016, a small group of white-power demonstrators held a rally they called "Rock Stone Mountain" at Stone Mountain Park near Stone Mountain, Georgia. They were confronted by a large group of anti-racist counterprotestors, and some violent clashes ensued. The protest was heavily promoted by IRA accounts on Tumblr, Twitter, and Facebook, and the IRA website blackmatters.com. The IRA used its Blacktivist Facebook account to reach out, to no avail, to activist and academic Barbara Williams Emerson, the daughter of Hosea Williams, to help promote the protests. Afterward, RT blamed anti-racists for violence and promoted two videos shot at the event.
On 2 May 2016, a second rally was held in Buffalo, New York, protesting the death of India Cummings. Like the 4 April rally, the event was heavily promoted by IRA's "Blacktivist" Facebook account, including attempted outreach to local activists.
On 21 May 2016, two competing rallies were held in Houston to alternately protest against and defend the recently opened Library of Islamic Knowledge at the Islamic Da'wah Center. The "Stop Islamization of Texas" rally was organized by the Facebook group "Heart of Texas". The posting for the event encouraged participants to bring guns. A spokesman for the group conversed with the Houston Press via email but declined to give a name. The other rally, "Save Islamic Knowledge", was organized by another Facebook group called "United Muslims of America" for the same time and location. Both Facebook groups were later revealed to be IRA accounts.
On 25 May 2016, the Westboro Baptist Church held its annual protest of Lawrence High School graduation ceremonies in Lawrence, Kansas. The "LGBT United" Facebook group organized a counterprotest to confront the Westboro Baptist Church protest, including by placing an ad on Facebook and contacting local people. About a dozen counterprotesters showed up. Lawrence High School students did not participate in the counterprotest because they were skeptical of its organizers. "LGBT United" was an IRA account that appears to have been created specifically for this event.
"LGBT United" organized a candlelight vigil on 25 June 2016, for the Pulse nightclub shooting victims in Orlando, Florida.
IRA's "Don't Shoot" Facebook group and affiliated "Don't Shoot Us" website tried to organize a protest outside St. Paul, Minnesota police headquarters on 10 July 2016, in response to the 6 July fatal police shooting of Philando Castile. Some local activists became suspicious of the motives behind the event because St. Paul police were not involved in the shooting. Castille had been shot by a St. Anthony police officer in nearby Falcon Heights. Local activists contacted "Don't Shoot." After being pressed on who they were and who supported them, "Don't Shoot" agreed to move the protest to St. Anthony police headquarters. The concerned local activists investigated further and urged not to participate after deciding "Don't Shoot" was a "total troll job." "Don't Shoot" organizers eventually relinquished control of the event to local organizers, who subsequently declined to accept any money offered by "Don't Shoot" to cover expenses.
A Black Lives Matter protest rally was held in Dallas on 10 July 2016. A "Blue Lives Matter" counter protest was held across the street. The "Blue Lives Matter" protest was organized by the "Heart of Texas" Facebook group controlled by IRA.
The Blacktivist Facebook group organized a rally in Chicago to honor Sandra Bland on 16 July 2016, the first anniversary of her death. The rally was held in front of the Chicago Police Department's Homan Square facility. They passed around petitions calling for a Civilian Police Accountability Council ordinance.
17 "Florida Goes Trump" rallies were held across Florida on 25 August 2016. The rallies were organized by IRA using their "Being Patriotic" Facebook group and "march_for_trump" Twitter account.
The "SecuredBorders" Facebook group organized the "Citizens before refugees" protest rally on 27 August 2016, at the City Council Chambers in Twin Falls, Idaho. Only a small number of people showed up for the three hour event, most likely because it was Saturday and the Chambers were closed. "SecureBorders" was an IRA account.
The "Safe Space for Muslim Neighborhood" rally was held outside the White House on 3 September 2016. At least 57 people attended the event organized by the IRA's "United Muslims of America" Facebook group.
"BlackMattersUS", an IRA website, recruited activists to participate in protests on the days immediately following 20 September 2016, police shooting of Keith Lamont Scott in Charlotte, North Carolina. The IRA paid for expenses such as microphones and speakers.
The "Miners for Trump" rallies held in Pennsylvania on 2 October 2016, were organized by IRA's "Being Patriotic" Facebook group.
The IRA ran its most popular ad on Facebook on 19 October 2016. The ad was for the IRA's Back the Badge Facebook group and showed a badge with the words "Back the Badge" in front of police lights under the caption "Community of people who support our brave Police Officers."
A large rally was held in Charlotte, North Carolina, on 22 October 2016, protesting the police shooting of Keith Lamont Scott. BlackMattersUS recruited unwitting local activists to organize the rally. BlackMattersUS provided one activist with a bank card to pay for rally expenses.
Anti-Hillary Clinton "Texit" rallies were held across Texas on 5 November 2016. The "Heart of Texas" Facebook group organized the rallies around the theme of Texas seceding from the United States if Hillary Clinton is elected. The group contacted the Texas Nationalist Movement, a secessionist organization, to help with organizing efforts, but they declined to help. Small rallies were held in Dallas, Fort Worth, Austin, and other cities. No one attended the Lubbock rally.
A Trump protest called "Trump is NOT my President" attracted 5,000 to 10,000 protesters in Manhattan on 12 November 2016, who marched from Union Square to Trump Tower. The protest was organized by BlackMattersUS.
The IRA's "United Muslims of America" Facebook group organized the "Make peace, not war!" protest on 3 June 2017, outside Trump Tower in New York City. It is unclear whether anyone attended this protest or instead attended the "March for Truth" affiliated protest held on the same day.
Lawsuit
In May 2015, Lyudmila Savchuk, an employee of the trolling company in Saint Petersburg, sued her employer for labor violations, seeking to disclose its activities. Ivan Pavlov of the human rights initiative Team 29 represented Savchuk, and the defendant "troll factory" agreed to pay Savchuk her withheld salary and to restore her job.
Savchuk later described extreme psychological pressure in the workplace, with jokes circulating among employees that "one can remain sane in the factory for two months maximum", as a result of the constant switching between different personalities that the workers are expected to design and maintain during work time.
Indictments
On 16 February 2018, 13 individuals were indicted by the Washington, D.C. grand jury for alleged illegal interference in the 2016 presidential elections, during which they strongly supported the candidacy of Donald Trump, according to special counsel Robert Mueller's office. IRA, Concord Management and Concord Catering were also indicted. It was alleged that IRA was controlled by Yevgeny Prigozhin, a wealthy associate of Russian President Vladimir Putin.
The indicted individuals are Dzheykhun Nasimi Ogly Aslanov, Anna Vladislavovna Bogacheva, Maria Anatolyevna Bovda, Robert Sergeyevich Bovda, Mikhail Leonidovich Burchik, Mikhail Ivanovich Bystrov, Irina Viktorovna Kaverzina, Aleksandra Yuryevna Krylova, Vadim Vladimirovich Podkopaev, Sergey Pavlovich Polozov, Yevgeny Viktorovich Prigozhin, Gleb Igorevitch Vasilchenko, and Vladimir Venkov. All of the defendants are charged with conspiracy to defraud the United States, 3 are charged with conspiracy to commit wire fraud and bank fraud, and 5 are charged with aggravated identity theft. None of the defendants are in custody.
On 15 March, President Trump imposed financial sanctions under the Countering America's Adversaries Through Sanctions Act on the 13 Russians and the organizations indicted by Mueller, preventing them from entering the United States to answer the charges should they wish to.
In October 2018, Russian accountant Elena Khusyaynova was charged with interference in the 2016 and 2018 US elections. She is alleged to have been working with the IRA and was said to have managed a $16 million budget.
Timeline of the Internet Research Agency interference in United States elections
2014
April: The IRA creates a department called the "translator project". The department's focus is on interfering in the U.S. election.
May: The IRA begins its election interference campaign of "spread[ing] distrust towards the candidates and the political system in general."
4–26 June: Aleksandra Krylova and Anna Bogacheva, two IRA employees, travel to the U.S. to collect intelligence. Maria Bovda, a third employee, is denied a visa. All three are indicted in February 2018 for their work on election interference.
11 September: The IRA spreads a hoax they created about a fictitious chemical plant fire in Centerville, St. Mary Parish, Louisiana, purportedly started by ISIS. The hoax includes tweets and YouTube videos showing a chemical plant fire. Centerville is home to many chemical plants, but the plant named in the tweets does not exist. Initial tweets are sent directly to politicians, journalists, and Centerville residents.
21 September – 11 October: The Material Evidence art exhibition is displayed at the Art Beam gallery in the Chelsea neighborhood of New York City. It portrays the conflicts in Syria and Ukraine in a pro-Russian light. It is promoted by Twitter accounts that also spread the September 11 chemical plant fire hoax. The exhibition is partly funded by the IRA.
13 December:
The IRA uses Twitter to spread a hoax about an Ebola outbreak in Atlanta. Many of the Twitter accounts used in the September 11 chemical plant fire hoax also spread this hoax. The hoax includes a YouTube video of medical workers wearing hazmat suits.
Using a different set of Twitter accounts, the IRA spreads a hoax about a purported police shooting of an unarmed black woman in Atlanta. The hoax includes a blurry video of the purported event.
2015
July onward: Thousands of fake Twitter accounts run by the IRA begin to praise Trump over his political opponents by a wide margin, according to a later analysis by The Wall Street Journal.
3 November: The IRA Instagram account "Stand For Freedom" attempts to organize a Confederate rally in Houston, Texas, on 14 November. It is unclear if anyone showed up. The Mueller Report identifies this as the IRA's first attempt to organize a U.S. rally.
19 November: The IRA creates the @TEN_GOP Twitter account. Purporting to be the "Unofficial Twitter account of Tennessee Republicans," it peaks at over 100,000 followers.
2016
10 February: IRA instructs workers to "use any opportunity to criticize Hillary and the rest (except Sanders and Trump—we support them)."
April: The IRA starts buying online ads on social media and other sites. The ads support Trump and attack Clinton.
4 April: A rally is held in Buffalo, New York, protesting the death of India Cummings. Cummings was a black woman who had recently died in police custody. The IRA's "Blacktivist" account on Facebook actively promotes the event, reaching out directly to local activists on Facebook Messenger asking them to circulate petitions and print posters for the event. Blacktivist supplies the petitions and poster artwork.
16 April: A rally protesting the death of Freddie Gray attracts large crowds in Baltimore. The IRA's Blacktivist Facebook group promotes and organizes the event, including reaching out to local activists.
19 April: The IRA purchases its first pro-Trump ad through its "Tea Party News" Instagram account. The Instagram ad asks users to upload photos with the hashtag #KIDS4TRU to "make a patriotic team of young Trump supporters."
23 April: A small group of white-power demonstrators hold a rally they call "Rock Stone Mountain" at Stone Mountain Park near Stone Mountain, Georgia. They are confronted by a large group of protesters, and some violent clashes ensue. The counterprotest was heavily promoted by IRA accounts on Tumblr, Twitter, and Facebook, and the IRA website blackmatters.com. The IRA uses its Blacktivist account on Facebook to reach out, to no avail, to activist and academic Barbara Williams Emerson, the daughter of Hosea Williams, to help promote the protests. Afterward, RT blames anti-racist protesters for violence and promotes two videos shot at the event.
2 May: A second rally is held in Buffalo, New York, protesting the death of India Cummings. Like the 4 April rally, the event is heavily promoted by the IRA's Blacktivist Facebook account, including attempted outreach to local activists.
21 May: Two competing rallies are held in Houston to alternately protest against and defend the recently opened Library of Islamic Knowledge at the Islamic Da'wah Center. The "Stop Islamization of Texas" rally is organized by the Facebook group "Heart of Texas". The Facebook posting for the event encourages participants to bring guns. A spokesman for the group converses with the Houston Press via email but declines to give a name. The other rally, "Save Islamic Knowledge", is organized by the Facebook group "United Muslims of America" for the same time and location. Both Facebook groups are later revealed to be IRA accounts.
29 May: The IRA hires an American to pose in front of the White House holding a sign that says, "Happy 55th Birthday, Dear Boss." "Boss" is a reference to Russian oligarch Yevgeny Prigozhin.
1 June: The IRA plans a Manhattan rally called "March for Trump" and buys Facebook ads promoting the event.
4 June: The IRA email account [email protected] sends news releases about the "March for Trump" rally to New York City media outlets.
5 June: The IRA contacts a Trump campaign volunteer to provide signs for the "March for Trump" rally.
23 June: The IRA persona "Matt Skiber" contacts an American to recruit for the "March for Trump" rally.
24 June: The IRA group "United Muslims of America" buys Facebook ads for the "Support Hillary, Save American Muslims" rally.
25 June:
The IRA's "March for Trump" rally occurs.
The IRA Facebook group LGBT United organizes a candlelight vigil for the Pulse nightclub shooting victims in Orlando, Florida.
July: The IRA's translator project grows to over 80 employees.
Summer: IRA employees use the stolen identities of four Americans to open PayPal and bank accounts to act as conduits for funding their activities in the United States.
5 July: "United Muslims of America", an IRA group, orders posters with fake Clinton quotes promoting Sharia Law. The posters are ordered for the "Support Hillary, Save American Muslims" rally they are organizing.
6–10 July: The IRA's "Don't Shoot" Facebook group and affiliated "Don't Shoot Us" website try to organize a protest outside the St. Paul, Minnesota, police headquarters on 10 July in response to the 6 July fatal police shooting of Philando Castile. Some local activists become suspicious of the event because St. Paul police were not involved in the shooting: Castile was shot by a St. Anthony police officer in nearby Falcon Heights. Local activists contact Don't Shoot. After being pressed on who they are and who supports them, Don't Shoot agrees to move the protest to the St. Anthony police headquarters. The concerned local activists investigate further and urge protesters not to participate after deciding Don't Shoot is a "total troll job." Don't Shoot organizers eventually relinquish control of the event to local organizers, who subsequently decline to accept any money from Don't Shoot.
9 July: The "Support Hillary, Save American Muslims" rally occurs in Washington, D.C. The rally is organized by the IRA group "United Muslims of America."
10 July: A Black Lives Matter protest rally is held in Dallas. A "Blue Lives Matter" counterprotest is held across the street. The Blue Lives Matter protest is organized by the "Heart of Texas" Facebook group, controlled by the IRA.
12 July: An IRA group buys ads on Facebook for the "Down with Hillary" rally in New York City.
16 July: The IRA's Blacktivist group organizes a rally in Chicago to honor Sandra Bland on the first anniversary of her death. The rally is held in front of the Chicago Police Department's Homan Square building. Participants pass around petitions calling for a Civilian Police Accountability Council ordinance.
23 July: The IRA-organized "Down with Hillary" rally is held in New York City. The agency sends 30 news releases to media outlets using the email address [email protected].
2–3 August: The IRA's "Matt Skiber" persona contacts the real "Florida for Trump" Facebook account. The "T.W." persona contacts other grassroots groups.
4 August:
The IRA's Facebook account "Stop AI" accuses Clinton of voter fraud during the Iowa Caucuses. They buy ads promoting the post.
IRA groups buy ads for the "Florida Goes Trump" rallies. The 8,300 people who click on the ads are sent to the Agency's "Being Patriotic" Facebook page.
5 August: The IRA Twitter account @March_For_Trump hires an actress to play Hillary Clinton in prison garb and someone to build a cage to hold the actress. The actress and cage are to appear at the "Florida Goes Trump" rally in West Palm Beach, Florida on 20 August.
11 August: The IRA Twitter account @TEN_GOP claims that voter fraud is being investigated in North Carolina.
12–18 August: The IRA's persona "Josh Milton" communicates with Trump Campaign officials via email to request Trump/Pence signs and the phone numbers of campaign affiliates as part of an effort to organize pro-Trump campaign rallies in Florida.
15 August: A Trump campaign county chair contacts the IRA through their phony email accounts to suggest locations for rallies.
16 August: The IRA buys ads on Instagram for the "Florida Goes Trump" rallies.
18 August:
The IRA uses its [email protected] email account to contact a Trump campaign official in Florida. The email requests campaign support at the forthcoming "Florida Goes Trump" rallies. It is unknown whether the campaign official responded.
The IRA pays the person they hired to build a cage for a "Florida Goes Trump" rally in West Palm Beach, Florida.
19 August:
A Trump supporter suggests to the IRA Twitter account "March for Trump" that it contact a Trump campaign official. The official is emailed by the agency's [email protected] account.
The IRA's "Matt Skiber" persona contacts another Trump campaign official on Facebook.
20 August: 17 "Florida Goes Trump" rallies are held across Florida. The rallies are organized by Russian trolls from the IRA.
27 August: The IRA Facebook group "SecuredBorders" organizes a "Citizens before refugees" protest rally at the City Council Chambers in Twin Falls, Idaho. Only a small number of people show up for the three-hour event, most likely because it is Saturday and the Chambers are closed.
31 August:
An American contacts the IRA's "Being Patriotic" account about a possible 11 September event in Miami.
The IRA buys ads for an 11 September rally in New York City.
3 September: The IRA Facebook group "United Muslims of America" organizes a "Safe Space for Muslim Neighborhood" rally outside the White House, attracting at least 57 people.
9 September: The IRA sends money to its American groups to fund the 11 September rally in Miami, and to pay the actress who portrayed Clinton at the West Palm Beach, Florida, rally.
20–26 September: BlackMattersUS, an IRA website, recruits activists to participate in protests over the police shooting of Keith Lamont Scott in Charlotte, North Carolina. The IRA pays for expenses such as microphones and speakers.
22 September: The IRA buys ads on Facebook for "Miners for Trump" rallies in Pennsylvania.
2 October: "Miners for Trump" rallies are held across Pennsylvania. The IRA uses the same techniques to organize the rallies as they used for the "Florida Goes Trump" rallies, including hiring a person to wear a Clinton mask and a prison uniform.
16 October: The IRA's Instagram account "Woke Blacks" makes a post aimed at suppressing black voter turnout.
19 October: The IRA runs its most popular ad on Facebook. The ad is for the IRA's Back the Badge Facebook group and shows a badge with the words "Back the Badge" in front of police lights under the caption "Community of people who support our brave Police Officers."
22 October: A large rally is held in Charlotte, North Carolina, protesting the police shooting of Keith Lamont Scott. The IRA website BlackMattersUS recruits unwitting local activists to organize the rally. BlackMattersUS provides an activist with a bank card to pay for rally expenses.
2 November: The IRA Twitter account @TEN_GOP alleges "#VoterFraud by counting tens of thousands of ineligible mail in Hillary votes being reported in Broward County, Florida." Trump Jr. retweets it.
3 November: The IRA Instagram account "Blacktivist" suggests people vote for Stein instead of Clinton.
5 November: Anti-Clinton "Texit" rallies are held across Texas. The IRA's "Heart of Texas" Facebook group organizes the rallies around the theme of Texas seceding from the United States if Clinton is elected. The group contacts the Texas Nationalist Movement, a secessionist organization, to help with organizing efforts, but they decline to help. Small rallies are held in Dallas, Fort Worth, Austin, and other cities. No one attends the Lubbock rally.
8 November: Hours after the polls close, the hashtag #Calexit is retweeted by thousands of IRA accounts.
11 November: A large banner is hung from the Arlington Memorial Bridge in Washington, D.C., showing a photo of Obama with the words "Goodbye Murderer" at the bottom. The IRA Twitter account @LeroyLovesUSA takes credit and is an early promoter of the banner.
12 November: A Trump protest called "Trump is NOT my President" attracts 5,000–10,000 protestors in Manhattan who march from Union Square to Trump Tower. The protest is organized by the IRA using their BlackMattersUS Facebook account.
19 November: The IRA organizes the "Charlotte Against Trump" rally in Charlotte, North Carolina.
8 December: The IRA runs an ad on Craigslist to hire someone to walk around New York City dressed as Santa Claus while wearing a Trump mask.
2017
9 April: The Internet Research Agency (IRA)'s "United Muslims of America" Facebook group posts a meme complaining about the cost of the 6 April missile strike on Syria by the United States. The strike had been made in retaliation for a chemical weapons attack by the Syrian government. The meme asserts the $93 million cost of the strike "could have founded [sic] Meals on Wheels until 2029."
3 June: The IRA's "United Muslims of America" Facebook group organizes the "Make peace, not war!" protest outside Trump Tower in New York City. It is unclear whether anyone attends this protest or instead attends the "March for Truth" affiliated protest held on the same day.
Thousands of people participate in the "Protest Trump and ideology of hate at Trump Tower!" protest outside Trump Tower in New York City. The protest was organized by the "Resisters" group on Facebook, one of the "bad actor" groups identified by Facebook in July 2018 as possibly belonging to the IRA.
23 August: The Internet Research Agency's @TEN_GOP Twitter account is closed.
6 September: Facebook admits selling advertisements to Russian companies seeking to reach U.S. voters. Hundreds of accounts were reportedly tied to the Internet Research Agency. Facebook pledges full cooperation with Mueller's investigation, and begins to provide details on purchases from Russia, including identities of the people involved.
9 September: Thousands of people participate in the "We Stand with DREAMers! Support DACA!" rally in New York City. The rally was organized by the "Resisters" group on Facebook, one of the "bad actor" groups identified by Facebook in July 2018 as possibly belonging to the IRA.
9 September: Trump responds to a tweet from @10_gop, the "backup" account for the now-closed IRA account @TEN_GOP, saying, "THANK YOU for your support Miami! My team just shared photos from your TRUMP SIGN WAVING DAY, yesterday! I love you- and there is no question - TOGETHER, WE WILL MAKE AMERICA GREAT AGAIN!" The response is to an @10_gop tweet that simply reads "we love you Mr. President."
28 September:
Twitter announces that it identified 201 non-bot accounts tied to the IRA.
Democrats rebuke Twitter for its "frankly inadequate" response to Russian meddling.
Mother Jones writes that "fake news on Twitter flooded swing states that helped Trump win."
23 October: The Daily Beast reports that Greenfloid LLC, a tiny web hosting company registered to Sergey Kashyrin and two others, hosted IRA propaganda websites DoNotShoot.Us, BlackMattersUS.com and others on servers in a Staten Island neighborhood. Greenfloid is listed as the North American subsidiary of ITL, a hosting company based in Kharkiv, Ukraine, registered to Dmitry Deineka. Deineka gave conflicting answers when questioned by The Daily Beast about the IRA websites.
1 November: Twitter tells the Senate Intelligence Committee that it has found 2,752 IRA accounts and 36,746 Russia-linked bot accounts involved in election-related retweets.
2018
16 February: Mueller indicts 13 Russian citizens, IRA/Glavset and two other Russian entities in a 37-page indictment returned by a federal grand jury in the District of Columbia.
15 July: Business Insider reveals a new Russian intelligence-linked "news" site, USAReally, which follows in the footsteps of previous IRA-backed troll farms and appears to be an attempt to "test the waters" ahead of the midterms.
31 July: Facebook announces they have shut down eight pages, 17 profiles, and seven Instagram accounts related to "bad actors" identified recently with activity profiles similar to the IRA. The company says it doesn't have enough information to attribute the accounts, groups, and events to the IRA, but that a known IRA account was briefly an administrator of the "Resisters" group. The "Resisters" group was the first organizer on Facebook of the upcoming "No Unite The Right 2 - DC" protest scheduled in Washington, D.C., for 10 August. Some of the event's other organizers insist they started organizing before "Resisters" created the event's Facebook page.
12 September: The Wall Street Journal reports that nearly 600 IRA Twitter accounts posted nearly 10,000 mostly conservative-targeted messages about health policy and Obamacare from 2014 through May 2018. Pro-Obamacare messages peaked around the spring of 2016, when Senator Bernie Sanders and Hillary Clinton were fighting for the Democratic Party presidential nomination. Anti-Obamacare messages peaked during the debates leading up to the attempted repeal of the Affordable Care Act in the spring of 2017.
25 September: The New York Times reports that the Moscow-based news website "USAReally.com" appears to be a continuation of the IRA's fake news propaganda efforts targeting Americans. The site, launched in May, has been banned from Facebook, Twitter, and Reddit. A new Facebook page created by the site is being monitored by Facebook.
19 October: The U.S. Justice Department charges 44-year-old Russian accountant Elena Alekseevna Khusyaynova of Saint Petersburg with conspiracy to defraud the United States by managing the finances of the social media troll operation, including the IRA, that attempted to interfere in the 2016 and 2018 U.S. elections.
20 November: The Federal Agency of News (FAN) sues Facebook in the U.S. District Court for the Northern District of California for violating its free speech rights by closing its account in April. The FAN is a sister organization to the IRA that operates from the same building in St. Petersburg. The FAN claims in its filing that it has no knowledge of the IRA, even though some current FAN employees were indicted by Mueller for their work with the IRA.
2019
12 April: The Washington Post reports that researchers at Clemson University found the IRA sent thousands of tweets during the 2016 election campaign in an attempt to drive Bernie Sanders supporters away from Hillary Clinton and towards Donald Trump.
2020
12 March: CNN's Clarissa Ward reveals that Russia and the IRA have been running "troll factories" based in Nigeria and Ghana, with the aim of disrupting the 2020 presidential campaign.
Notes
See also
Operation Earnest Voice
50 Cent Party
Active measures
AK Trolls
Astroturfing
CyberBerkut
Fake news website
Internet manipulation
Internet Water Army
Public opinion brigades
Russian interference in the 2016 United States elections
Web brigades
References
Further reading
External links
Russian propaganda organizations
Internet manipulation and propaganda
Companies based in Saint Petersburg
Psychological warfare
Politics of Russia
Internet governance
War in Donbas
Internet trolling
Organizations associated with Russian interference in the 2016 United States elections
Russian–Ukrainian cyberwarfare
Russian entities subject to the U.S. Department of the Treasury sanctions
|
63440093
|
https://en.wikipedia.org/wiki/The%20Hype%20House
|
The Hype House
|
The Hype House is a collective of teenage TikTok personalities based in Los Angeles, California, as well as the name of the mansion where some of the creators live. It is a collaborative content creation house, allowing the different influencers and content creators to make videos together easily. Current members include Thomas Petrou, Mia Hayward, Alex Warren, Kouvr Annon, Jack Wright, Chase Hudson, Vinnie Hacker and more.
The house itself is a Spanish-style mansion perched at the top of a gated street, with a palatial backyard, a pool, and large kitchen and dining quarters.
History
The collective formed in December 2019 and includes around twenty rising or established Gen Z influencers from TikTok. Most of the funding for its creation came from Daisy Keech, Chase Hudson, Thomas Petrou, Charli D'Amelio, Dixie D'Amelio, and Addison Rae. At its peak, the collective had twenty-one members, until founding member Daisy Keech left in March 2020, citing internal disputes with other members as the reason for her departure. In May 2020, the D'Amelios' representative confirmed the sisters had also left the collective when "The Hype House started to become more of a business." Larray, who was already an established YouTuber and TikTok personality, joined in January 2020, but confirmed in a livestream that he left later that year. Vinnie Hacker joined the house in January 2021, which surprised much of the Hype House's fan base. Russian model Renata Valliulina (also known as Renata Ri) joined the house in December of that year.
Reality series
On April 22, 2021, Netflix announced a reality series set at the Hype House, starring Annon, Dragun, Hacker, Hayward, Hudson, Merritt, Petrou, Warren, and Wright. Hype House premiered on Netflix on January 7, 2022.
Controversies
On July 21, 2020, Nikita Dragun held a surprise birthday party for Larray during the COVID-19 pandemic at the Hype House mansion. The party included internet celebrities such as James Charles and others. At the time of the party, California's COVID-19 cases had just surpassed New York's. An estimated 67 people were in attendance, many of whom were seen without face masks despite local health laws. Photos and videos of the event appeared on social media sites such as Instagram. These posts drew criticism from the public, including other influencers like Elijah Daniel and Tyler Oakley. Merritt and some of the other attendees later apologized. Residents of the Hype House later tested negative for COVID-19.
References
External links
2019 establishments in California
American Internet groups
TikTok
|
3534379
|
https://en.wikipedia.org/wiki/NSLU2
|
NSLU2
|
The NSLU2 (Network Storage Link for USB 2.0 Disk Drives) is a network-attached storage (NAS) device made by Linksys, introduced in 2004 and discontinued in 2008. It makes USB flash memory and hard disks accessible over a network using the SMB protocol (also known as Windows file sharing or CIFS). It was superseded mainly by the NAS200 (an enclosure-type storage link) and, in another sense, by the WRT600N and WRT300N/350N, which combine a Wi-Fi router with a storage link.
The device runs a modified version of Linux and by default, formats hard disks with the ext3 filesystem, but a firmware upgrade from Linksys adds the ability to use NTFS and FAT32 formatted drives with the device for better Windows compatibility. The device has a web interface from which the various advanced features can be configured, including user and group permissions and networking options.
Hardware
The device has two USB 2.0 ports for connecting hard disks and uses an ARM-compatible Intel XScale IXP420 CPU. In models manufactured prior to around April 2006, Linksys had underclocked the processor to 133 MHz, though a simple hardware modification to remove this restriction is possible. Later models (circa May 2006) are clocked at the rated speed of 266 MHz. The device includes 32 MB of SDRAM and 8 MB of flash memory. It also has a 100 Mbit/s Ethernet network connection. The NSLU2 is fanless, making it completely silent.
User community
As shipped, the device runs a customised version of Linux. Linksys was required to release its source code under the terms of the GNU General Public License. Due to the availability of source code, the NSLU2's use of well-documented commodity components and its relatively low price, there are several community projects centered around it, including hardware modifications, alternative firmware images, and alternative operating systems with varying degrees of reconfiguration.
Hardware modifications
Unofficial hardware modifications include:
Doubling the clock frequency on underclocked units (as of summer 2006, the NSLU2 was sold without the underclocking)
Addition of a serial port
Addition of a JTAG port
Enabling extra USB ports
Addition of extra memory
NSLU2 units that have had their memory upgraded are commonly referred to as 'FatSlugs'
Devices have been successfully upgraded to 64 MB, but operation with 128 MB or 256 MB of RAM has not been stable
The version with 256 MB RAM and 16 MB flash (twice the standard amount) has been nicknamed 'ObeseSlug'
Forced Power On
Adding an HD44780 controlled dot matrix display
Alternative firmware
There are two main replacement firmware images available for the device. The first is Unslung, which is based on the official Linksys firmware with some improvements and features added; Optware packages are available to expand functionality. The other is SlugOS/BE (formerly OpenSlug), which is based on the OpenEmbedded framework. SlugOS/BE allows users to re-flash the device with a minimal Linux system including an SSH server to allow remote access. Once installed, the operating system must be moved to an attached hard disk due to the lack of space available in the flash memory. Once this has been done, a wide range of additional packages can be installed from an Internet repository.
It is also possible to run OpenWrt, Debian, Gentoo, FreeBSD, NetBSD, OpenBSD, and Ubuntu on the device.
The ability to run an unrestricted operating system on the device opens up a whole new range of uses. Some common uses are a web server, mail server, DAAP server (iTunes), XLink Kai, UPnP AV MediaServer, BitTorrent client, FreeSWITCH, Asterisk PBX and network router (with the attachment of a USB network interface/USB modem). German programmer Boris Pasternak developed the weather server program Meteohub as an inexpensive way to gather weather sensor data from personal weather stations ("PWS") and allow it to be posted to a number of online weather services, including Weather Underground, WeatherBug, the Citizen Weather Observer Program (CWOP), and many others.
An NSLU2 with Unslung firmware can be interfaced with a Topfield TF5800 personal video recorder (PVR) to allow an electronic programme guide (EPG) to be automatically downloaded from the Internet and transferred to the PVR.
Problems
As with most NASs, the device is not immediately compatible with Windows Vista or 7, as it runs an older version of Samba that uses an authentication mechanism that is disabled by default in later versions of Windows. Ways of enabling the older (and less secure) authentication are available.
The device will not power on automatically when it gets power from an external supply. This might be a problem in an environment where power failures are frequent. Automatic-power-on is possible only with one of several external or internal hardware or wiring modifications.
Awards
The NSLU2 won the "Most Innovative in Networking" Reader Award in the Tom's Hardware 2004 Awards.
Similar devices
Buffalo LinkStation
SheevaPlug
Belkin Home Base F5L049 (with GPL firmware)
See also
Buffalo network-attached storage series
References
External links
NSLU2 Product Information at Linksys
Linux-based devices
NSLU2-Linux
Server appliance
Computer-related introductions in 2004
Linksys
|
32488579
|
https://en.wikipedia.org/wiki/Hector%20Monsegur
|
Hector Monsegur
|
Hector Xavier Monsegur (born 1983), known also by the online pseudonym Sabu (pronounced Sə'buː, Sæ'buː), is an American computer hacker and co-founder of the hacking group LulzSec. Facing a sentence of 124 years in prison, Monsegur became an informant for the FBI, working with the agency for over ten months to aid them in identifying the other hackers from LulzSec and related groups. LulzSec intervened in the affairs of organizations such as News Corporation, Stratfor, UK and American law enforcement bodies and Irish political party Fine Gael.
Sabu featured prominently in the group's published IRC chats, and claimed to support the "Free Topiary" campaign. The Economist referred to Sabu as one of LulzSec's six core members and their "most expert" hacker.
Identity
Sabu was identified by Backtrace Security as "Hector Monsegur" on March 11, 2011, in a PDF publication named "Namshub".
On June 25, 2011, an anonymous Pastebin post claimed that Sabu was Hector Xavier Monsegur, a man of Puerto Rican origin.
At the time of his arrest, Monsegur was a 28-year-old unemployed foster parent of his two female cousins, the children of his incarcerated aunt. He attended, but did not graduate from, Washington Irving High School, and had been living in his late grandmother's apartment in the Riis Houses in New York City.
Arrest and activity as an informant for the FBI
On March 6, 2012, Sabu was revealed to be Hector Xavier Monsegur in a series of articles written by Jana Winter and published by FoxNews.com.
Federal agents arrested Monsegur on June 7, 2011. The following day, Monsegur agreed to become an informant for the FBI and to continue his "Sabu" persona. "Since literally the day he was arrested, the defendant has been cooperating with the government proactively," sometimes staying up all night engaging in conversations with co-conspirators to help the government build cases against them, Assistant U.S. Attorney James Pastore said at a secret bail hearing on August 5, 2011. A few days after that bail hearing, Monsegur entered a guilty plea to 12 criminal charges, including multiple counts of conspiracy to engage in computer hacking, computer hacking in furtherance of fraud, conspiracy to commit access device fraud, conspiracy to commit bank fraud and aggravated identity theft. He faced up to 124 years in prison.
As an informant, Monsegur provided the FBI with details enabling the arrest of five other hackers associated with the groups Anonymous, LulzSec and Antisec. The FBI provided its own servers for the hacking to take place. Information Monsegur provided also resulted in the arrest of two UK hackers: James Jeffery and Ryan Cleary.
The FBI attempted to use Monsegur to entrap Nadim Kobeissi, author of the secure communication software Cryptocat, but without success.
Monsegur maintained his pretense until March 6, 2012, even tweeting his "opposition" to the federal government until the very last. The final day's tweets included, "The feds at this moment are scouring our lives without warrants. Without judges approval. This needs to change. Asap" and "The federal government is run by a bunch of fucking cowards. Don't give in to these people. Fight back. Stay strong". On March 6, 2012, the FBI announced the arrests of five male suspects: two from Britain, two from Ireland and one from the U.S.
Sabu has not been explicitly linked to the group Anonymous. The extent of crossover between the members of such hacktivist groups, however, is uncertain. Anonymous reacted to Sabu's unmasking and betrayal of LulzSec on Twitter, "#Anonymous is a hydra, cut off one head and we grow two back".
Steve Fishman of New York magazine said "On the Internet, Monsegur was now a reviled figure. At Jacob Riis, it was a different story. Those who knew him growing up were shocked—he was always 'respectful,' they said. But also, they were a little proud. In their eyes, he was a kid from the projects who'd achieved a certain success. He'd gotten out, finally."
A court filing made by prosecutors in late May 2014 revealed Monsegur had prevented 300 cyber attacks in the three years since 2011, including planned attacks on NASA, the U.S. military and media companies. "Monsegur's consistent and corroborated historical information, coupled with his substantial proactive cooperation and other evidence developed in the case, contributed directly to the identification, prosecution, and conviction of eight of his major co-conspirators, including Jeremy Hammond, who at the time of his arrest was the FBI's number one cyber-criminal target in the world," a sentencing memo among the documents filed said.
Monsegur served seven months in prison after his arrest and had since been free while awaiting sentencing. At his sentencing on May 27, 2014, he was given "time served" for cooperating with the FBI and set free under one year of probation.
References
1983 births
Living people
American computer criminals
Federal Bureau of Investigation informants
Anonymous (hacker group) activists
Washington Irving High School (New York City) alumni
Hacktivists
|
1301846
|
https://en.wikipedia.org/wiki/Gearbox%20Software
|
Gearbox Software
|
Gearbox Software is an American video game development company based in Frisco, Texas. It was established as a limited liability company in February 1999 by five developers formerly of Rebel Boat Rocker. Randy Pitchford, one of the founders, serves as president and chief executive officer. Gearbox initially created expansions for the Valve game Half-Life, then ported that game and others to console platforms. In 2005, Gearbox launched its first independent set of games, Brothers in Arms, on console and mobile devices. It became their flagship franchise and spun off a comic book series, television documentary, books, and action figures. Their second original game series, Borderlands, commenced in 2009, and by 2015 had sold over 26 million copies. The company also owns the intellectual property of Duke Nukem and Homeworld.
Gearbox expanded into publishing with the creation of Gearbox Publishing in 2015. A parent company, The Gearbox Entertainment Company, was established for Gearbox Software and Gearbox Publishing in 2019. Gearbox Entertainment was acquired by the Embracer Group in April 2021, becoming its seventh major label. A third division, Gearbox Studios, to focus on television and film productions, was established in October 2021.
History
Formation and initial growth (1999–2008)
Gearbox Software was founded on February 16, 1999, by Randy Pitchford, Brian Martel, Stephen Bahl, Landon Montgomery and Rob Heironimus, five developers formerly of Rebel Boat Rocker. Before Rebel Boat Rocker, Pitchford and Martel previously worked together at 3D Realms, and Montgomery previously worked at Bethesda Softworks. By 2000, the company employed 15 people.
They started with developing expansions to Valve's Half-Life. Porting Half-Life to console platforms (each with new game content) followed, building the company's experience in console game-making, in addition to enhancing and building upon the successful Counter-Strike branch of the Half-Life franchise. Prior to Half-Life 2, they had developed or helped develop every Half-Life expansion game or port, including Opposing Force, Blue Shift, Counter-Strike: Condition Zero, Half-Life for the Sony PlayStation 2 (including Half-Life: Decay), and Half-Life for the Sega Dreamcast (including Blue Shift). Branching out to other publishers, they pursued additional port work, each game being released with additional content, but this time from console to PC. These projects included their first non-first-person shooter, Tony Hawk's Pro Skater 3, and Halo: Combat Evolved, forging new publisher relationships with Activision and Microsoft Game Studios respectively. Additional new development, in the form of a PC game in the James Bond franchise (James Bond 007: Nightfire) for Electronic Arts, also occurred during the company's initial 5-year period.
In 2005, they launched an original property of their creation, Brothers in Arms, with the release of Brothers in Arms: Road to Hill 30 on the Xbox, PC and PlayStation 2. Later that year a sequel, Brothers in Arms: Earned in Blood, was launched. In 2008, Brothers in Arms: Hell's Highway was released.
2007 brought announcements of new projects based on licensed film intellectual properties, including the crime drama Heat and the science-fiction classic Aliens. In the September 2007 issue of Game Informer, Pitchford stated that development on the Heat game had not yet begun, as the planned development partner for the project had gone under. This was followed by an announcement by Sega that they would be helming a new version of rhythm game Samba de Amigo for the Wii, a departure from their signature first-person shooter titles.
Borderlands and studio expansion (2009–2015)
Work on a new intellectual property, Borderlands, began around 2005 and was first announced in 2007. Pitchford likened the game to a combination of computer role-playing games such as Diablo and NetHack and first-person shooters like Duke Nukem. Defining features of Borderlands were its cel-shaded graphics style and its procedurally generated loot system, capable of generating millions of different guns and other gear items. Borderlands was released in October 2009, published by 2K, a subsidiary of Take-Two Interactive. By August 2011, it had sold over 4.5 million copies, making it a major success for Gearbox and allowing the studio to expand its team and budgets for subsequent games. Gearbox subsequently developed two additional games in the series, Borderlands 2 (2012) and Borderlands 3 (2019), as well as the spin-off title Tiny Tina's Wonderlands (2022), and the series has spawned additional games from other studios under 2K/Take-Two or through license, including Borderlands: The Pre-Sequel by 2K Australia and Tales from the Borderlands from Telltale Games. Gearbox and Take-Two have also partnered with Lionsgate to develop a live-action Borderlands film, which as of June 2021 is still in production.
In July 2013, Gearbox announced plans to rerelease Homeworld and Homeworld 2 in high definition for modern PC platforms, in addition to making it available through digital distributors.
In July 2014, Randy Pitchford formally contested the Aliens: Colonial Marines class-action lawsuit, stating that the game had cost Gearbox millions of dollars of its own money and that the advertising was solely the fault of the publisher.
In December 2015, Gearbox opened a second development studio in Quebec City, Canada. The studio is run by Sebastien Caisse and former Activision art director Pierre-Andre Dery. The team consists of over 100 members and is contributing to the development of original AAA titles.
Restructuring and acquisition by Embracer Group (2015–present)
Gearbox established Gearbox Publishing in 2015, first announced to the public in December 2016, to publish third-party games, starting with the remastered version of Bulletstorm from People Can Fly. Pitchford said that the company wanted to start expanding into areas of capital growth beyond the games Gearbox was traditionally known for, and planned to use Gearbox Publishing as a starting point. Later, in May 2019, Gearbox established The Gearbox Entertainment Company, Inc. (Gearbox Entertainment) as a parent company for both Gearbox Software and Gearbox Publishing.
Co-founder Landon Montgomery, who had left the company around 2007, died on March 25, 2020.
In April 2021, Gearbox Entertainment was wholly acquired by the Embracer Group for approximately , and was added as the company's seventh major publishing group. Pitchford stated that Gearbox had been looking to raise capital since 2016, came to meet with Embracer, and saw that its decentralized studio model would work well for Gearbox. 2K remained on Gearbox's board and continued to publish the Borderlands series.
Gearbox Entertainment opened a second Canadian studio, Gearbox Studio Montreal, in August 2021, to support 250 new staff, bringing the total size of Gearbox to around 850 employees.
On October 6, 2021, Gearbox announced the formation of Gearbox Studios as a third company under The Gearbox Entertainment Company to oversee television and film productions, with Pitchford serving as president of Gearbox Studios while remaining president and CEO of the parent company. Former CTO Steve Jones was named president of Gearbox Software in Pitchford's place. In December 2021, Embracer announced its intent to acquire Perfect World Entertainment, placing the group, including its publishing arm and Cryptic Studios, under the Gearbox Entertainment operating group.
Company structure
As of December 2021, The Gearbox Entertainment Company, as an operating division of Embracer Group, manages three primary divisions: Gearbox Software, which develops video games; Gearbox Entertainment, which oversees other media productions based on Gearbox's properties; and Gearbox Publishing, which handles publishing of Gearbox and third-party software. The Gearbox Entertainment Company will also oversee Perfect World Entertainment and its subsidiaries, including Cryptic Studios, following approval of Embracer's acquisition of that company, planned for February 2022.
Gearbox Software has two studios in addition to its main studio in Frisco, Texas: Gearbox Studio Montreal and Gearbox Studio Québec.
Games
Half-Life
Gearbox has developed a total of six games in the Half-Life series: the expansion packs Opposing Force and Blue Shift; ports of Half-Life for Dreamcast (which included Blue Shift) and Half-Life for PlayStation 2 (which included Half-Life: Decay); they also did a large amount of work on both the retail release of Counter-Strike and the main portion of Counter-Strike: Condition Zero.
Brothers in Arms
During their fourth year, Gearbox began working on their first independently owned game: Brothers in Arms: Road to Hill 30. Developed for PC and Microsoft's Xbox console, and built with the Unreal Engine 2, it was released in March 2005. The sequel, Brothers in Arms: Earned in Blood, followed seven months later. The series was published by Ubisoft, who supported both games with PlayStation 2 versions, and later worked with them to develop Brothers in Arms games for portable systems (mobile phones, PlayStation Portable and Nintendo DS) and the Wii home console.
In 2005, Gearbox licensed the Unreal Engine 3 from Epic Games, to replace the Unreal Engine 2 technology used in previous games, and grew its internal development teams to handle the demands of next-generation technology and content. Brothers in Arms: Hell's Highway was the first new title to be announced, continuing the company's flagship franchise.
Brothers in Arms: Hell's Highway was launched in September 2008. By 2008, the franchise also spun off a comic book series, a two-part television documentary, a line of action figures, and a novelization and non-fiction history book.
Borderlands series
After the completion of Brothers in Arms: Earned in Blood, Gearbox began working on their second original game, Borderlands. Revealed in the September 2007 issue of Game Informer, Borderlands was described as "Mad Max meets Diablo", and its first-person shooter-meets-role-playing gameplay was revealed, along with screenshots of the early art style and the first three playable characters. The gaming press saw the game next at the European GamesCon in 2007, and again at GamesCon and E3 in 2008. In early 2009, it was revealed in PC Gamer magazine that they had changed the graphical style and added the fourth player character. Borderlands was released in 2009.
Following the unexpected success of the first Borderlands, which sold between three and four-and-a-half million copies since release, creative director Mike Neumann stated that there was a chance of a Borderlands 2 being created, adding that the decision "seems like a no-brainer." On August 2, 2011, the game was confirmed and titled as Borderlands 2. The first look at the game was shown at Gamescom 2011, and an extensive preview was included in the September edition of Game Informer magazine, with Borderlands 2 being the cover story. Like the first game, Borderlands 2 was developed by Gearbox Software and published by 2K Games, running on a heavily modified version of Epic Games' Unreal Engine 3. The game was released on September 18, 2012, in North America and on September 21, 2012, internationally.
Duke Nukem series
Duke Nukem Forever had been in troubled development at 3D Realms, the creator of the Duke Nukem series, since sometime prior to 2000. Due to financial difficulties in 2009, 3D Realms was forced to downsize and ultimately lay off most of the development staff. Take-Two Interactive sued 3D Realms for failing to deliver Duke Nukem Forever.
Pitchford, who had prior industry relations with many 3D Realms staff including George Broussard, learned that many of the 3D Realms team were still eager to develop Duke Nukem Forever, working out of their homes on what they could. Pitchford negotiated with Take-Two to bring many of the former 3D Realms staff into a new studio called Triptych Games, housed at Gearbox's headquarters, to continue working on Duke Nukem Forever following 3D Realms' closure in 2009. As a result, 3D Realms sold the rights to Duke Nukem and the existing work on Duke Nukem Forever to Gearbox around February 2010. Take-Two and Gearbox subsequently announced in September 2010 that Gearbox would finish production of Duke Nukem Forever. Duke Nukem Forever was released in June 2011, and received negative critical reception on release, with most of the criticism directed towards the unfinished, rushed state of the game. Despite the criticism, the game topped the charts on release and made a profit.
3D Realms had initially sued Gearbox in June 2013 for unpaid royalties over Duke Nukem Forever, but dropped the suit by September 2013, with 3D Realms' founder Scott Miller stating that it was a misunderstanding on their part.
3D Realms was eventually acquired in part by Interceptor Entertainment, and in 2014, Interceptor announced plans to make a new Duke Nukem game, Duke Nukem: Mass Destruction. Gearbox filed suit against 3D Realms and Interceptor based on the fact that Gearbox now owned the rights to the Duke Nukem franchise. The case was settled out of court in August 2015, with 3D Realms and Interceptor acknowledging that Gearbox has full rights to the Duke Nukem series. Following the settlement, Gearbox announced Duke Nukem 3D: 20th Anniversary Edition World Tour in September 2016. The game included new levels developed in conjunction with some of the original developers, re-recorded lines by original Duke voice actor Jon St. John, and new music from original composer Lee Jackson. It was released on October 11, 2016.
Aliens: Colonial Marines
Aliens: Colonial Marines was a result of Gearbox's exploration into working on licensed film properties in 2007, and was developed under license from 20th Century Fox, who held the film rights, and Sega, who held the game publishing rights to the franchise. Aliens: Colonial Marines was planned as a first-person shooter, both single-player and multiplayer, with players as members of human squads facing the franchise's titular xenomorphs in settings based on the films. Gearbox did initial development on the game, but as the studio started working on Borderlands and Duke Nukem Forever, it pulled developers off Aliens while still collecting full payments from Sega. Sega and 2K discovered the discrepancy in Gearbox's allocation of its staff on its projects, which led to a round of layoffs in 2008.
After Gearbox released Borderlands to critical acclaim in 2009, it began work on its sequel rather than re-allocating developers to Aliens. Instead, the studio outsourced the work to third parties, including Demiurge Studios, Nerve Software, and TimeGate Studios. By 2012, Gearbox took over full development of the game as it neared its planned release in February 2013, but due to the heavily outsourced process, the game's state was haphazard, forcing Gearbox to cancel a planned beta period and rush the game through the final stages of production, certification, and distribution. On release, the game suffered from performance issues even on the target hardware specifications, and shipped with a software bug that hampered the artificial intelligence of the xenomorphs, making the game far less challenging than promised; it was discovered in 2019 that this bug was the result of a typographic error in a configuration file shipped with the game. The game's poor performance led Sega to cancel planned releases for the Wii U.
A class action lawsuit filed in April 2013 by Roger Damion Perrine and John Locke alleged that Gearbox and Sega falsely advertised Aliens: Colonial Marines by showing demos at trade shows that did not accurately represent the final product. Sega and the plaintiffs reached a settlement in late 2014, wherein Sega agreed to pay $1.25 million to the class. The plaintiffs dropped Gearbox from the suit in May 2015.
Battleborn
Released in May 2016, Battleborn was a cooperative first-person shooter with multiplayer online battle arena (MOBA) elements. Battleborn takes place in a space fantasy setting where multiple races contest possession of the universe's last star. Players select one of multiple pre-defined heroes, customized with passive abilities gained through end-of-mission loot, to complete both player-vs-player and player-vs-environment events. During such events, characters are leveled up through their "Helix tree", granting one of two abilities at each level. While Battleborn was well received by critics, it was released within a month of Blizzard Entertainment's Overwatch, a hero shooter with similar concepts, which quickly overshadowed Battleborn. The title went free-to-play in June 2017 and was shut down in January 2021.
Homeworld series
After 10 years without any new releases in the series, Gearbox acquired the rights to the Homeworld series from THQ in 2013. Shortly afterwards, the Homeworld Remastered Collection was released in 2015, containing updated high-definition versions of Homeworld and Homeworld 2 compatible with modern Windows and Mac OS X systems.
In September 2013, Gearbox announced a partnership with Blackbird Interactive, licensing the Homeworld IP for the game then named Hardware: Shipbreakers. This game later became Homeworld: Deserts of Kharak and was released on January 20, 2016, as a prequel to the original Homeworld game of 1999.
On August 30, 2019, Gearbox announced Homeworld 3, which will again be developed by Blackbird Interactive. The game's development is at least partially funded through a crowdfunding campaign on the Fig platform, and the game is currently planned for release in Q4 2022.
Other media
A Borderlands film has been in development with Gearbox and Lionsgate since around 2015, with Eli Roth set to direct. As of June 2021, filming has been completed.
In April 2020, Gearbox announced it was developing a television series based on its Brothers in Arms series.
Technology
In 2006, they partnered with Dell and Intel to provide development computer systems and technology for their studio.
In June 2007, they purchased a Moven motion capture system that uses non-optical inertial technology to augment their existing Vicon optical motion capture system, becoming one of the few independent developers with two in-house motion capture systems.
In February 2008, it was announced that they had licensed NaturalMotion's Morpheme software.
List of video games
Games developed
Games published
References
External links
1999 establishments in Texas
American companies established in 1999
Companies based in Frisco, Texas
Embracer Group
Video game companies based in Texas
Video game companies established in 1999
Video game companies of the United States
Video game development companies
2021 mergers and acquisitions
American subsidiaries of foreign companies
|
31287560
|
https://en.wikipedia.org/wiki/DioneOS
|
DioneOS
|
DioneOS (pronounced /djoneos/) is a multitasking preemptive, real-time operating system (RTOS). The system is designed for microcontrollers, originally released on 2 February 2011 for the Texas Instruments MSP430x, and then on 29 March 2013 for the ARM Cortex-M3. Target microcontroller platforms have limited resources, i.e., system clock frequencies of tens of MHz and memory amounts of tens to a few hundred kilobytes (KB). The RTOS is adapted to such conditions by providing a compact and efficient image. Efficiency here means minimizing the additional central processing unit (CPU) load caused by use of the system itself: the system is more effective when it consumes less CPU time to execute its internal parts, e.g., managing threads.
The DioneOS system is intended for autonomous devices with a limited user interface. The core functionality provided by the system is an environment for building multitasking firmware by means of standard, well-known concepts (e.g. semaphores, timers, etc.). Because of the target domain of application, the system uses a command-line interface and has no graphical user interface.
Memory model
Texas Instruments manufactures a wide range of microcontrollers that use the MSP430 core. Depending on the version, the processor contains different amounts of flash memory and random-access memory (RAM); e.g., the MSP430f2201 has 1 KB/128 B respectively, while the MSP430f5438 has 256 KB/16 KB. When the size of the memory exceeds the 64 KB limit, i.e., when the memory cannot fit in the 0–64 KB range, 16-bit addressing is insufficient. Due to this constraint, chips with larger memory are equipped with an extended core (MSP430x). This version of the processor has wider registers (20-bit) and new instructions for processing them.
At compile time, the programmer selects the type of memory model (near or far) that is used for the code and data memories. This choice determines the accessible memory range; hence, when memory above the 64 KB limit is used, the far model must be selected.
DioneOS supports the far model for code modules, so large firmware that uses the extended memory range can be developed and run under the system's control. The system uses the near memory model for data segments.
Thread management
Firmware run under the DioneOS system consists of threads that are executed in a pseudo-parallel way. Each thread has its own unique priority, used to order the threads from most to least important. The thread priority value defines its precedence to run over the others.
In the DioneOS system the thread can be in one of following states:
Running - the thread is currently being executed by the processor,
Ready - the thread is ready to be run,
Waiting - the thread is blocked and waits on some synchronization object.
Because there is only one core in the processor, only one thread can be in the Running state. This is the thread that has the highest priority among all threads that are not in the Waiting state. A change of a thread's state can be caused by:
triggering of an object that holds the thread,
an unsuccessful attempt to acquire an object that is already locked (e.g. a mutex owned by another thread),
an elapsed timeout,
a state change of another thread, which may lead to preemption.
The system handles up to 16 threads, including the idle one with the lowest priority. The idle thread should always be ready to run and never switched to the Waiting state, so it is not permitted to call any blocking functions from inside this thread. The idle thread can be used to determine total system load.
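The scheduling rule described above can be pictured with a short C sketch. This is not DioneOS source code: the type, field and function names are made up for the illustration, and the convention that a lower number means a higher priority is also an assumption of the sketch.

#define MAX_THREADS 16

typedef enum {
    THREAD_RUNNING,   /* currently executed by the processor   */
    THREAD_READY,     /* ready to be run, waiting for the CPU  */
    THREAD_WAITING    /* blocked on a synchronization object   */
} thread_state_t;

typedef struct {
    thread_state_t state;
    unsigned char  priority;   /* unique; 0 = highest, 15 = idle thread */
} thread_t;

static thread_t threads[MAX_THREADS];

/* Return the thread that should run: the highest-priority thread that is
 * not blocked. The idle thread never blocks, so a candidate always exists. */
static thread_t *pick_next_thread(void)
{
    thread_t *best = 0;
    int i;
    for (i = 0; i < MAX_THREADS; i++) {
        if (threads[i].state == THREAD_WAITING)
            continue;
        if (best == 0 || threads[i].priority < best->priority)
            best = &threads[i];
    }
    return best;
}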
Features of the system
The DioneOS system provides:
items for synchronization: mutexes and counting semaphores, used for thread synchronization, signalling from an ISR to a thread, and guarding shared resources,
methods for time management: timers, thread sleeping, timeouts,
communication items implemented by events and queues, available as circular buffers,
memory management by a memory pool that allocates memory only in fixed-size blocks but is free of the fragmentation issues that may appear when a heap is used (a minimal sketch of such a fixed-block pool is given after this list); regular allocation by malloc/free on the heap is also available, provided by the standard C library,
testing support objects: signalling events on chip pins, critical exceptions, object marking that helps in the detection of errors such as use of a deleted object or double memory deallocation, etc.
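The fixed-size block pool mentioned in the list above can be illustrated with a minimal, self-contained C sketch. The names are hypothetical and the code is not the DioneOS implementation; a real RTOS would additionally protect the free list against concurrent access from threads and ISRs.

#include <stddef.h>

#define BLOCK_SIZE   32   /* payload bytes per block (example value)  */
#define BLOCK_COUNT  16   /* number of blocks in pool (example value) */

typedef union block {
    union block   *next;                 /* link used while the block is free  */
    unsigned char  payload[BLOCK_SIZE];  /* storage used while it is allocated */
} block_t;

static block_t  pool[BLOCK_COUNT];
static block_t *free_list;

void pool_init(void)
{
    size_t i;
    for (i = 0; i < BLOCK_COUNT - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[BLOCK_COUNT - 1].next = NULL;
    free_list = &pool[0];
}

void *pool_alloc(void)          /* O(1); returns NULL when the pool is exhausted */
{
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;    /* pop the first free block */
    return b;
}

void pool_free(void *p)         /* O(1); blocks are interchangeable */
{
    block_t *b = (block_t *)p;
    b->next   = free_list;      /* push the block back onto the free list */
    free_list = b;
}

Because every block has the same size, any freed block can satisfy the next allocation, which is why this scheme cannot fragment the way a general-purpose heap can.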
Context switch
As stated in the 'Thread management' section, the firmware consists of pseudo-parallel threads. Each thread has its own context, which contains the processor's core registers, the last execution address and a private stack. During a switch between threads the system saves the context of the stopped thread and restores the context of the one being resumed. This state saving makes it possible to suspend a thread's execution and continue it later, even if another thread has been executed in between. Note that preemption followed by a context switch may happen at any moment, even if no system function is called in the thread. Although it may happen at an unexpected location in the executed code, the thread's work is not disturbed, thanks to the context saving. From the thread's point of view, the switch happens in the background.
The context switch is a critical operation in the system, and its execution time determines how efficient the system is. Because of that, the context switch in DioneOS was optimized to be short. The most important parts were written in assembler, so the switch can be done in 12–17 μs (for fosc = 25 MHz).
In the DioneOS system the context switch can be initiated from an interrupt handler (interrupt service routine). This property is useful for moving event handling to a thread and is commonly implemented in a two-layer architecture:
the interrupt handler - called after a hardware interrupt has occurred. In this part interrupts are disabled, so execution cannot continue for a long time, otherwise system responsiveness is compromised. Only jobs that require a fast response to the interrupt should be processed in this layer; any others should be passed to the higher layer,
the higher layer - processing in a separate thread without blocking interrupts; this thread can be preempted. Constraints are not as tight here as in the interrupt handler, and the code execution does not block the system.
In the DioneOS system, the context switch measured from the signalling point in the ISR to the recovery of the other thread takes 10 μs (for fosc = 25 MHz).
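The two-layer architecture described above can be sketched in C as follows. All names (the semaphore type and calls, the hardware access and the processing routine) are illustrative placeholders and are not taken from the DioneOS API; the sketch only shows how the work is divided between the two layers.

typedef struct { volatile int count; } sem_t;    /* hypothetical semaphore type        */
void sem_signal(sem_t *s);                       /* provided by the RTOS (placeholder) */
void sem_wait(sem_t *s);                         /* blocks the calling thread          */
unsigned char uart_read_data(void);              /* hypothetical hardware access       */
void process_received_byte(unsigned char b);     /* application-level processing       */

static sem_t rx_sem;                     /* counts received-but-unhandled events */
static volatile unsigned char rx_byte;

/* Lower layer: runs with interrupts disabled, so it only grabs the data,
 * signals the waiting thread and returns as quickly as possible. */
void uart_rx_isr(void)
{
    rx_byte = uart_read_data();
    sem_signal(&rx_sem);         /* may cause a context switch to the thread */
}

/* Higher layer: an ordinary, preemptible thread that performs the
 * time-consuming processing outside the interrupt handler. */
void uart_rx_thread(void)
{
    for (;;) {
        sem_wait(&rx_sem);       /* blocks until the ISR signals an event */
        process_received_byte(rx_byte);
    }
}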
Configuration
DioneOS has multiple configuration options that affect the features included in the compiled image of the system. Many of them are source-code switches, gathered in a configuration file, that can be altered by the firmware developer. By this means it is possible to control additional testing features. If they are enabled, the system is built in a version that provides more detection of unusual conditions and more run-time information to help the debugging process. When the errors are found and eliminated, these extra features can be disabled to obtain the full performance of the system.
Example of a fragment of configuration file:
[...]
#define CFG_CHECK_OVERFLOW /* overflow testing in semaphores/mutexes */
#define CFG_CHECK_LOCK /* lock issue detection caused by preemption conditions during scheduler lock */
#define CFG_LISTDEL_WITH_POISON /* marking deleted items on the list in os_list1_del()*/
#define CFG_MEM_POOL_POISON_FILL 0xDAAB /* pattern for marking de-allocated memory items */
#define CFG_LISTDEL_POISON 0xABBA /* pattern for marking removed list items */
#define CFG_CHECK_EMPTY_SEM_DESTROY /* testing semaphore before destroy in os_sleep()*/
#define CFG_FILL_EMPTY_MEM_POOL /* free memory fill with pattern */
[...]
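A hypothetical illustration (not DioneOS source) of how such a compile-time switch is typically consumed: the extra debugging code exists in the image only when the corresponding switch is defined in the configuration file.

#define CFG_LISTDEL_POISON 0xABBA      /* same value as in the fragment above */

typedef struct list_item {
    struct list_item *next;
} list_item_t;

static void list_del(list_item_t *item, list_item_t *prev)
{
    prev->next = item->next;
#ifdef CFG_LISTDEL_WITH_POISON
    /* mark the removed item so that a later use of it is easy to spot */
    item->next = (list_item_t *)CFG_LISTDEL_POISON;
#endif
}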
References
External links
Real-time operating systems
Embedded operating systems
|
57374558
|
https://en.wikipedia.org/wiki/Net%20neutrality%20by%20country
|
Net neutrality by country
|
Net neutrality is the principle that governments should mandate Internet service providers to treat all data on the Internet the same, and not discriminate or charge differently by user, content, website, platform, application, type of attached equipment, or method of communication. For instance, under these principles, internet service providers are unable to intentionally block, slow down or charge money for specific websites and online content.
Summary
By country
Argentina
Law 27,078 of 2014 establishes, under Article 56, the right of users to access, use, send, receive or offer any content, application, service or protocol through the Internet without any restriction, discrimination, distinction or blocking. Article 57 forbids "ICT service providers" from blocking, interfering with, or restricting any content, application, service, or protocol, and from price discrimination based on content. Article 57 also establishes an exception allowing blocking or restrictions solely under a judicial order or by the user of the service.
Since 2017, mobile telephone carriers like Claro, Movistar and Personal have been offering free traffic for WhatsApp messages, voice recordings, attached videos and pictures.
Belgium
In Belgium, net neutrality was discussed in the parliament in June 2011. Three parties (CD&V, N-VA and PS) jointly proposed a text to introduce the concept of net neutrality in the telecom law.
Brazil
In 2014, the Brazilian government passed a law which expressly upholds net neutrality, "guaranteeing equal access to the Internet and protecting the privacy of its users in the wake of U.S. spying revelations".
The Brazilian Civil Rights Framework for the Internet (in , officially Law No 12.965) became law on 23 April 2014 at the Global Multistakeholder Meeting on the Future of Internet Governance. It governs the use of the Internet in Brazil by setting out principles, guarantees, rights and duties for those who use the network, as well as determining guidelines for state action. The legislation was later used as the basis for blocking the popular WhatsApp application in Brazilian territory, a decision lifted soon afterwards; experts claimed that the block was, in actuality, against the Framework, which had been misinterpreted by the judiciary.
Canada
In a January 25, 2011 decision, the Canadian Radio-television and Telecommunications Commission (CRTC) ruled that usage-based billing could be introduced. Prime Minister Harper signaled that the government may be looking into the ruling: "We're very concerned about CRTC's decision on usage-based billing and its impact on consumers. I've asked for a review of the decision." Some have suggested that the ruling adversely affects net neutrality, since it discriminates against media that is larger in size, such as audio and video.
In 2005, Canada's second-largest telecommunications company, Telus, began blocking access to a server that hosted a website supporting a labour strike against the company.
Chile
On 13 June 2010, the National Congress of Chile amended the country's telecommunications law in order to preserve network neutrality, becoming the first country in the world to do so. This came after an intensive campaign on blogs, Twitter, and other social networks. The law, published on 26 August 2010, added three articles to the General Law of Telecommunications, forbidding ISPs from arbitrarily blocking, interfering with, discriminating, hindering or restricting an Internet user's right to use, send, receive or offer any legal content, application, service or any other type of legal activity or use through the Internet. ISPs must offer Internet access in which content is not arbitrarily treated differently based on its source or ownership.
China
The People's Republic of China's (PRC) approach to internet policy does not account for net neutrality, as the government uses ISPs to inspect and regulate the content that is available to its citizens. ISPs typically block both foreign and domestic sites that the government wishes to censor, using software and hardware that together are known as the "Great Firewall". Many of the sites on the Great Firewall's blacklist are there because they provide information that the government cannot effectively alter permanently, such as large social media platforms or information sites such as Wikipedia.
According to Thomas Lum, a specialist in Asian Affairs: "Since its founding in 1949, the PRC has exerted great effort in manipulating the flow of information and prohibiting the dissemination of viewpoints that criticize the government or stray from the official Communist party view. The introduction of Internet technology in the mid-1990s presented a challenge to government control over news sources, and by extension, over public opinion. While the Internet has developed rapidly, broadened access to news, and facilitated mass communications in China, many forms of expression online, as in other mass media, are still significantly stifled. Empirical studies have found that China has one of the most sophisticated content-filtering Internet regimes in the world. The Chinese government employs increasingly sophisticated methods to limit content online, including a combination of legal regulation, surveillance, and punishment to promote self-censorship, as well as technical controls."
European Union
When the European Commission consulted on the EU's 2002 regulatory framework for electronic communications in November 2007, it examined the possible need for legislation to mandate network neutrality, countering the potential damage, if any, caused by non-neutral broadband access. The European Commission stated that prioritisation "is generally considered to be beneficial for the market so long as users have choice to access the transmission capabilities and the services they want" and "consequently, the current EU rules allow operators to offer different services to different customers groups, but not allow those who are in a dominant position to discriminate in an anti-competitive manner between customers in similar circumstances". However, the European Commission highlighted that Europe's current legal framework cannot effectively prevent network operators from degrading their customers' services. Therefore, the European Commission proposed that it should be empowered to impose minimum quality-of-service requirements. In addition, an obligation of transparency was proposed to limit network operators' ability to set up restrictions on end-users' choice of lawful content and applications.
On 19 December 2009, the so-called "Telecoms Package" came into force and EU member states were required to implement the Directive by May 2011. According to the European Commission, the new transparency requirements in the Telecoms Package would mean that "consumers will be informed—even before signing a contract—about the nature of the service to which they are subscribing, including traffic management techniques and their impact on service quality, as well as any other limitations (such as bandwidth caps or available connection speed)". Regulation (EC) No 1211/2009 of the European Parliament and of the Council of 25 November 2009 established the Body of European Regulators for Electronic Communications (BEREC) and the BEREC Office. BEREC's main purpose is to promote cooperation between national regulatory authorities, ensuring a consistent application of the EU regulatory framework for electronic communications.
The European Parliament voted on the EU Commission's September 2013 proposal at its first reading in April 2014, and the Council adopted a mandate to negotiate in March 2015. Following the adoption of the Digital Single Market Strategy by the Commission on 6 May, Heads of State and Government agreed on the need to strengthen the EU telecoms single market. After 18 months of negotiations, the European Parliament, Council and Commission reached two agreements on 30 June 2015, on the end of roaming charges and on the first EU-wide rules on net neutrality, to be completed by an overhaul of EU telecoms rules in 2016. Specifically, Article 3 of EU Regulation 2015/2120 sets the basic framework for ensuring net neutrality across the entire European Union. However, the regulation's text has been criticized as offering loopholes that can undermine its effectiveness. Some EU member states, such as Slovenia and the Netherlands, have stronger net neutrality laws.
In Germany, mobile ISPs such as Deutsche Telekom and Vodafone offer services that might be seen as undermining net neutrality.
The government agency overseeing the market (the Bundesnetzagentur) stated that, in general, these plans are in alignment with net neutrality, but forced the companies to adopt some changes.
France
In France, on 12 April 2011, the Commission for Economic Affairs of the French Parliament approved the report of MP Laure de La Raudière (UMP). The report contains nine proposals, of which propositions n°1 and n°2 address net neutrality.
Indonesia
Indihome, a subsidiary of Telkom Indonesia, deliberately blocks Netflix, claiming that the block is due to censorship concerns and pornographic content. At the same time, it promotes Iflix, a Malaysian-based company that provides a similar service to Netflix. Ironically, M-17 rated content is also available on Iflix without further censorship from the provider.
India
On 8 February 2016, the Telecom Regulatory Authority of India (TRAI) banned differential pricing of data services. As per TRAI's press release, the regulator had received multiple responses expressing different opinions on its consultation paper. Considering all the responses, the regulator decided on ex ante regulation instead of a case-by-case tariff investigation regime. According to TRAI, this decision was reached in order to give industry participants much-needed certainty, and in view of the high costs of regulation in terms of the time and resources that would be required to investigate each case of tariff discrimination. The ruling prohibits any service provider from offering or charging discriminatory tariffs for data services on the basis of content, and also prohibits any agreement or contract that might have the effect of discriminatory tariffs for data services or that might assist the service provider in evading the regulation. It also specifies financial disincentives for contravention of the regulation. However, the ruling does not prescribe a blanket ban on differential pricing: discriminatory tariffs are allowed in the case of a public emergency or for providing emergency services. Lastly, according to TRAI, this ruling should not be considered the end of the net neutrality debate. The regulator has promised to keep a close watch on developments in the market and may undertake a review after two years, or at an earlier date as it may deem fit.
In March 2015, the TRAI released a formal consultation paper on Regulatory Framework for Over-the-top (OTT) services, seeking comments from the public. The consultation paper was criticised for being one-sided and having confusing statements. It was condemned by various politicians and Internet users. By 24 April 2015, over a million emails had been sent to TRAI demanding net neutrality. The consultation period ended on 7 January 2016.
Ultimately, in the year 2018, the Indian Government unanimously approved new regulations supporting net neutrality. The regulations are considered to be the "world's strongest" net neutrality rules, guaranteeing free and open internet for nearly half a billion people, and are expected to help the culture of startups and innovation. The only exceptions to the rules are new and emerging services like autonomous driving and tele-medicine, which may require prioritised internet lanes and faster than normal speeds.
Violations of net neutrality have been common in India. Examples beyond Facebook's Internet.org include Aircel's Wikipedia Zero along with Aircel's free access to Facebook and WhatsApp, Airtel's free access to Google, and Reliance's free access to Twitter.
Facebook's Free Basics program is seen by activists as a net neutrality violation, based on its provision of free-of-cost access to dozens of sites in collaboration with telecom operators. There were protests online and on the ground against the Free Basics program. The Free Software Movement of India also held protests in Hyderabad and in parts of Telangana and Andhra Pradesh.
Israel
In 2011, Israel's parliament passed a law requiring net neutrality in mobile broadband. These requirements were extended to wireline providers in an amendment to the law passed on 10 February 2014. The law contains an exception for reasonable network management, and is vague on a number of issues such as data caps, tiered pricing, paid prioritization and paid peering.
Italy
Since March 2009, there has been a bill under consideration in Italy:
a legislative proposal by senators Vincenzo Vita (PD) and Luigi Vimercati (PD),
"Neutralità Delle Reti, Free Software E Societa' Dell'informazione". Senator Vimercati in an interview said that he wants "to do something for the network neutrality" and that he was inspired by Lawrence Lessig, Professor at the Stanford Law School. Vimercati said that the topic is very hard, but in the article 3 there is a reference to the concept of neutrality regard the contents. It is also a problem of transparency and for the mobile connections: we need the minimum bandwidth to guarantee the service. We need some principle to defend the consumers. It's important that the consumer has been informed if he could not access all the Internet.
The bill rejects all discrimination, whether related to content, service, or device. It is a general bill about the Internet ("a statute for the Internet") and treats different topics such as network neutrality, free software, and providing Internet access to everyone.
Japan
Net neutrality in the common carrier sense has been instantiated into law in many countries, including Japan. In Japan, the nation's largest phone company, Nippon Telegraph and Telephone, operates a service called Flet's Square over their FTTH high speed Internet connections.
Netherlands
In June 2011, the majority of the Dutch lower house voted for new net neutrality laws, which prohibit the blocking of Internet services, the use of deep packet inspection to track customer behaviour, and other filtering or manipulation of network traffic. The legislation applies to any telecommunications provider and was formally ratified by the Dutch senate on 8 May 2012.
On 4 June 2012, the Netherlands became the first country in Europe and the second in the world, after Chile, to enact a network neutrality law. The main provision of the law requires that "Providers of public electronic communication networks used to provide Internet access services as well as providers of Internet access services will not hinder or slow down services or applications on the Internet".
Portugal
As part of the European Union, Portugal is bound by the net neutrality rules established by the EU. However, the Portuguese government still allows certain kinds of pricing models which are banned under most net neutrality rules: broadband providers may offer special pricing packages in which customers pay for extra data that is designated only for the use of specific websites. For example, one package allows customers to pay extra for more data that can be used for social media websites such as Facebook and Twitter. Many supporters of net neutrality in Portugal have objected to this pricing model on the grounds that it creates another barrier to entry for all internet companies that are not included in the special data packages. These kinds of pricing packages are not specifically addressed in the EU net neutrality rules, so they have been allowed to continue. However, on 28 February 2018, Anacom, the telecommunications regulatory agency in Portugal, accused the country's main broadband providers, MEO, NOS, and Vodafone, of violating the EU rules on net neutrality with their extra data packages, and granted the providers up to forty days to change their pricing packages. However, the law does not specify what sanctions are appropriate, leaving the outcome of this ongoing dispute unclear.
The European Union abolished roaming charges and adopted rules under which companies cannot slow down services. There are exceptions under which services may be slowed down, including a court order, security, or congestion. Because Portugal is a member of the European Union, it must follow the guidelines set by the Body of European Regulators for Electronic Communications (BEREC). Anacom reported that the majority of the complaints it received in the first half of 2018 involving the communications sector were related to billing, service failure, and cancellation of service.
Russia
After almost four years of discussion, in early 2016 the Federal Antimonopoly Service approved a regulation barring ISPs from throttling or otherwise blocking any websites, apart from those blocked at the request of the Federal Service for Supervision of Communications, Information Technology and Mass Media, thus protecting net neutrality in Russia.
In September 2007, the Russian government's Resolution No. 575 introduced rules regulating telematics services. Network operators (ISPs) could legally limit individual actions of a subscriber's network activity if such actions threatened the normal functioning of the network. ISPs were obliged to block access to those information systems, network addresses, or uniform resource locators that the subscriber identified to the operator in the form specified in the contract. The subscriber was obliged to take actions to protect the subscriber terminal from the impact of malicious software and to prevent the spread of spam and malicious software to its subscriber terminal. In reality, most Russian ISPs shaped the traffic of P2P protocols (such as BitTorrent) with lower priority (P2P was about 80% of traffic there). There was also a popular method, called a retracker, for redirecting some BitTorrent traffic to the ISP's cache servers and to other subscribers inside a metropolitan area network (MAN). Access to MANs usually offers greater speed (2x–1000x or more, as specified in the contract) and better quality than the rest of the Internet.
Singapore
In 2014 and 2015, there were efforts to charge over-the-top (OTT) content providers (companies that provide streaming video). The Infocomm Development Authority (IDA) has a policy framework for net neutrality that did not allow such a surcharge. Consumers also argued that they already pay for their service and should not have to pay more to access the sites they want.
Slovenia
At the end of 2012, Slovenia passed an electronic communications law implementing a strong principle of net neutrality, thus becoming the second country in Europe to enact a net neutrality law. The Agency for Communications, Networks and Services (AKOS) enforces the law and carries out inspections. In January 2015 it found zero-rating infringements at the two largest mobile network providers, Telekom Slovenije and Si.mobil (now A1), which were respectively "zero-rating the Deezer music service and the 'Hangar mapa' cloud storage service." In response, AKOS banned zero rating for all services except three owned by the state incumbent; for this, AKOS was sued by Slovenia's telecom operators for violating its own net neutrality rules. A month later the agency found similar infringements at Amis (now Simobil) and Tušmobil (now Telemach). In July 2016 the Administrative Court of the Republic of Slovenia annulled the January 2015 AKOS decisions regarding price discrimination, stating that since zero rating does not "restrict, delay or slow down Internet traffic at the level of individual services or applications", it does not violate net neutrality. The court also said that the Slovenian Electronic Communications Act "does not prohibit zero rating outright." This ruling was in accordance with the view of the Competition Protection Agency (CPA), which felt that the "prohibition of zero-rated services may have been detrimental rather than beneficial for consumers." Four months after the ruling of the Administrative Court, in November 2016, AKOS found Telekom Slovenije and Si.mobil in violation of net neutrality laws for discriminating against non-zero-rated traffic for customers who exceeded their monthly data limits.
South Africa
There is no law on net neutrality in South Africa. A White Paper was to be published by the South African government in March 2015, but it has not been published yet. However, the telecommunications regulator ICASA and the Department of Telecommunications and Postal Services (DTPS) have been engaged in this debate. In March 2014, ICASA invited comments on its "Notice of Public Inquiry into the State of Competition in the Information and Communications Technology Sector", in which net neutrality was raised, and stakeholders were invited to give their views on the enforcement of net neutrality in South Africa.
Simultaneously, DTPS was conducting an integrated ICT policy review to provide recommendations on various issues of ICT policy in South Africa. It published a Green Paper and invited comments on it. The Green Paper did not venture into the net neutrality debate in detail and simply stated that it is an issue that must be taken into consideration. Following the Green Paper, a Discussion Paper was published in November 2014, which also invited comments. Lastly, a Final Report providing DTPS's policy recommendations was published in June 2015. DTPS recommended that the broad tenets of net neutrality be adopted, with principles such as transparency, no blocking of lawful content, and no unreasonable discrimination in mind. It urged the government to set appropriate exceptions to the application of network neutrality principles, such as emergency services and the blocking of unlawful content.
South Korea
In South Korea, VoIP is blocked on high-speed FTTH networks except where the network operator is the service provider.
United Kingdom
Net neutrality legislation applies in the United Kingdom as a result of the European Union's adoption of net neutrality legislation in 2015.
In comparison to the United States, the net neutrality debate has not received much attention in the United Kingdom. Officials generally refer to the concept as the open internet, since net neutrality is a term that originated in American politics. While it seems to be a non-issue in the UK, a defining characteristic of the neutrality debate there is that the arguments are often shaped by regulators. These arguments are also often influenced by the discourse of other European countries, so much of the UK's discussion of the open internet is linked with that of other European countries listed on this page.
In 2007, Plusnet was using deep packet inspection to implement limits and differential charges for peer-to-peer, file transfer protocol, and online game traffic. However, their network management philosophy was made clear for each package they sold, and was consistent between different websites.
In 2021 Ofcom, the country's communications regulator, announced a review of net neutrality. The UK's departure from the European Union in 2020 and issues associated with the COVID-19 pandemic in the United Kingdom have allowed network owners to make a case for change.
United States
Within the United States, regulation of Internet services falls under the Federal Communications Commission (FCC), a five-member panel whose commissioners are appointed by the sitting president. Net neutrality generally falls along political party lines, with Democrats favoring the liberal principles of net neutrality and Republicans generally opposing them; as such, its treatment has varied with the changing political climate of each administration.
A key facet of the FCC's oversight and net neutrality is how Internet service is defined within the scope of the Communications Act of 1934, either under Title I of the Act as "information services" or under Title II as "common carrier services". If treated as a common carrier, Internet service would be subject to regulation by the FCC, allowing the FCC to specify and enforce net neutrality principles; if considered an information service, the FCC would have far less authority over Internet services, working against the principles of net neutrality.
The FCC initially adopted policies favorable to net neutrality in 2005. After finding that some service providers were blocking access to some sites, the FCC issued the FCC Open Internet Order 2010, which specified six principles of net neutrality. Carriers sued the FCC over these rules, and in Verizon Communications Inc. v. FCC (2014), the courts ruled that the FCC could not regulate service providers without classifying them as common carriers. The FCC subsequently issued the 2015 Open Internet Order, which classified Internet service providers as Title II common carriers, thus allowing it to enforce net neutrality principles. The 2015 rule, both its reclassification under Title II and its net neutrality principles, was upheld by the courts in United States Telecom Ass'n v. FCC, heard in 2016.
With the change of administration from the Democratic Barack Obama to the Republican Donald Trump in 2017, Ajit Pai, former Associate General Counsel for Verizon Communications, was appointed chairman of the FCC. Pai, a vocal opponent of net neutrality, sought to roll back the 2015 Open Internet Order, effectively reclassifying Internet services as a Title I information service and relaxing FCC regulation of those services. Despite heavy public protest against this change, the FCC issued the rollback in December 2017. Additionally, the rollback rule stated that neither state nor local governments could override the FCC's ruling. Twenty-three states and several tech companies sued the FCC in Mozilla v. FCC (2018). In October 2019 the court ruled that while the FCC has the right to reclassify Internet service as Title I, it cannot prevent states or local governments from enforcing stricter regulations.
This has caused the concerns of net neutrality in the United States to fall to the states, several of which have passed or have pending legislation to enforce net neutrality. Notably, California passed its own version of net neutrality shortly after the FCC's rollback. Additionally, efforts have been made in the United States Congress to pass legislation that would define Internet services under Title II and/or support the principles of net neutrality, though these bills have tended to fail due to partisan politics.
See also
Digital rights
Net neutrality
Net neutrality law
References
Human rights by country
Law by country
|
5838930
|
https://en.wikipedia.org/wiki/Bruce%20Matthews%20%28American%20football%29
|
Bruce Matthews (American football)
|
Bruce Rankin Matthews (born August 8, 1961) is an American former professional football player who was an offensive lineman in the National Football League (NFL) for 19 seasons, from 1983 to 2001. He spent his entire career playing for the Houston / Tennessee Oilers / Titans franchise. Highly versatile, throughout his NFL career he played every position on the offensive line, starting in 99 games as a left guard, 87 as a center, 67 as a right guard, 22 as a right tackle, 17 as a left tackle, and was the long snapper on field goals, PATs, and punts. Having never missed a game due to injury, his 293 NFL games started is the third most of all time, behind quarterbacks Brett Favre and Tom Brady.
Matthews played college football for the University of Southern California, where he was recognized as a consensus All-American for the USC Trojans football team as a senior. He was selected in the first round of the 1983 NFL Draft by the Oilers. He was a 14-time Pro Bowl selection, tied for the second-most in NFL history, and a nine-time first-team All-Pro. Matthews was inducted into the Pro Football Hall of Fame in 2007, and his number 74 jersey is retired by the Titans.
After retiring as a player, Matthews served as an assistant coach for the Houston Texans and Titans. A member of the Matthews family of football players, he is the brother of linebacker Clay Matthews Jr.; father of center Kevin Matthews and tackle Jake Matthews; and uncle of linebacker Clay Matthews III and linebacker Casey Matthews.
Early years
Bruce Rankin Matthews was born in Raleigh, North Carolina, to Clay Matthews Sr. and Daisy Matthews. His father was a defensive lineman for the San Francisco 49ers in the 1950s. His family moved to Arcadia, California, when he was young. Bruce played football at Arcadia High School in Arcadia, California. He was an immediate football standout on the offensive and defensive line, along with doing well in high school wrestling. As a junior in 1977, he was named to the All-California Interscholastic Federation third team, and as a senior Matthews played in the Shrine All-Star Football Classic alongside John Elway. Arcadia High later retired his No. 72 jersey.
College career
Matthews attended the University of Southern California, where he played all offensive line positions at various times for the USC Trojans football team. As a senior in 1982 he was shifted from weakside to strongside guard to replace departing Roy Foster as the principal blocker in the "Student Body Right" play. He was named to the first-team All-Pacific-10 Conference team after his junior and senior seasons. As a senior, he earned consensus All-America honors and won the Morris Trophy, which is awarded to the best lineman in the conference.
Professional career
Matthews is considered to be one of the most versatile offensive linemen to play in the NFL. He started in 99 games as a left guard, 67 as a right guard, 87 as a center, 22 as a right tackle, 17 as a left tackle, and was the snapper on field goals, PATs, and punts. He was selected to 14 Pro Bowls, which at the time tied a league record set by Merlin Olsen. Matthews was also named a first-team All-Pro nine times and an All-American Football Conference selection 12 times. An extremely durable player, Matthews retired after the 2001 season having played more games (296) than any NFL player, excluding kickers and punters, and played in more seasons (19) than any offensive lineman. He never missed a game due to injury, and started 229 consecutive games. Matthews is the only player who played against the Baltimore Colts in their last game at Memorial Stadium in 1983 and against the Baltimore Ravens in their last game at Memorial Stadium in 1997.
1983–1986: Guard, center, and tackle
The Houston Oilers drafted Matthews with the ninth overall pick in the first round of the 1983 NFL Draft. During his first two seasons, he blocked for future Hall of Fame running back Earl Campbell. As a rookie he played guard and was named to the PFWA All-Rookie Team. Before his second season Matthews was moved from right guard to center, snapping to rookie quarterback Warren Moon, but due to injuries on the offensive line he played multiple positions that season; at one point he played center, guard, and tackle in successive weeks. In 1985 and 1986, Matthews alternated between right and left tackle.
1987–1990: Right guard
Matthews sat out the first eight games of the 1987 season due to a contract dispute. When he returned, he was moved back to right guard. He remained at the right guard position in 1988, 1989, and 1990, and was invited to the Pro Bowl each season. He also earned first-team All-Pro recognition each year from the Associated Press (AP), Pro Football Weekly, and The Sporting News. Matthews thrived in the run and shoot offensive scheme adopted by the Oilers around this time, which required linemen to be exceptionally agile. The holes he opened up helped running back Mike Rozier to consecutive Pro Bowls in 1987 and 1988.
1991–1994: Center
The Oilers placed Matthews at center for the final game of the 1990 season in an effort to bolster the team's running game. Of the move, Matthews said, "I'd like to stay at guard, but forces greater than myself make these adjustments." Behind blocking by Matthews and fellow future Hall of Fame guard Mike Munchak, Oilers quarterback Warren Moon led the league in passing yards in 1990 and 1991, and running back Lorenzo White was a 1992 Pro Bowl selection. Matthews remained the team's center through the 1994 season, being named to the Pro Bowl each year.
1995–2001: Left guard
Prior to the 1995 season, Matthews signed a four-year, $10.3 million contract extension with the Oilers. That year, the Oilers signed free agent center Mark Stepnoski, and as a result Matthews moved to left guard. He spent the majority of the rest of his career at the position, occasionally filling in for injured players along the offensive line. During this time, the Oilers left Houston for Tennessee after the 1996 season. His blocking helped running back Eddie George to four straight Pro Bowl seasons. In 1999, at age 37, Matthews signed another four-year contract to remain with the Oilers. That season, the Oilers rebranded as the Tennessee Titans. The team won 13 games, plus three more in the playoffs before losing to the St. Louis Rams in Super Bowl XXXIV. Matthews retired from football prior to the 2002 season at age 40.
Coaching
Houston Texans
On February 27, 2009, Matthews returned to Houston where he was signed on as an offensive assistant with the Houston Texans after volunteer coaching at his children's high school, Elkins High School.
Tennessee Titans
On February 9, 2011, Matthews was hired as offensive line coach by new Tennessee Titans head coach Mike Munchak. Both were Hall of Fame linemen for the Houston Oilers. Regarding his new job, Matthews stated, "For me this is an opportunity of a lifetime. It is such a unique opportunity to work with Mike because I think he will do a great job. It is just one of those things I couldn't pass up."
After the Titans finished the 2013 season with a 7–9 record, general manager Ruston Webster and Tommy Smith met with Munchak and gave him a choice: fire a large contingent of assistant coaches, including Matthews, in exchange for a contract extension and a raise, or lose his job as head coach. Munchak was not willing to dismiss everyone on the list, so he parted ways with the Titans, along with Matthews and the other assistant coaches in question.
Honors and legacy
In his first year of eligibility, Matthews was elected to the Pro Football Hall of Fame as part of the class of 2007. He was inducted during the Enshrinement Ceremony on August 5, 2007, with the unveiling of his bust, sculpted by Scott Myers. He was the first player from the Tennessee Titans to be given this honor since the relocation from Houston. He was the fifth player from the 1983 NFL draft class to be enshrined, joining Dan Marino, Eric Dickerson, John Elway, and Jim Kelly; Darrell Green and Richard Dent later became the sixth and seventh members. Matthews was selected as a guard on the NFL's All-Decade Team of the 1990s. In 2010, he was ranked 78th on The Top 100: NFL's Greatest Players by the NFL Network. At the 2020 Super Bowl, Matthews was named to the NFL 100 All-Time Team as one of the top 100 players of the first 100 years of the NFL.
Personal life
Matthews comes from a football family. A devout Christian, as evidenced in his Hall of Fame speech, he is the son of Clay Matthews Sr., who played in the NFL in the 1950s. His brother, Clay Jr., also played 19 seasons in the NFL. Bruce is the uncle of linebacker Clay Matthews III, former NFL linebacker Casey Matthews, and Kyle Matthews of USC football. Bruce and his wife, Carrie, have seven children: Steven, Kevin, Marilyn, Jake, Mike, Luke, and Gwen. His son Kevin played center for Texas A&M until the 2009 football season and then played in the NFL for five years as a member of the Titans and Carolina Panthers. Jake Matthews played offensive tackle for Texas A&M and is currently the starting left tackle of the Atlanta Falcons. His son Mike played on the offensive line for Texas A&M, where he was the starting center. His youngest son, Luke, is currently a senior at Texas A&M. Matthews is the uncle of tight end Troy Niklas by way of his wife's sister.
Notes
See also
List of NFL players by games played
List of most consecutive starts and games played by National Football League players
References
External links
1961 births
Living people
American football centers
American football offensive guards
American football offensive tackles
Houston Oilers players
Tennessee Oilers players
Tennessee Titans players
USC Trojans football players
High school football coaches in Texas
All-American college football players
American Conference Pro Bowl players
National Football League players with retired numbers
Pro Football Hall of Fame inductees
Matthews football family
People from Arcadia, California
Players of American football from Raleigh, North Carolina
Players of American football from California
Houston Texans coaches
Tennessee Titans coaches
|
32090081
|
https://en.wikipedia.org/wiki/Infibeam
|
Infibeam
|
Infibeam Avenues Limited is an Indian multinational financial technology company which provides payment services globally under the brand name CCAvenue and e-commerce software services to businesses across various industries and the Government of India.
History
Infibeam Avenues Limited, formerly known as Infibeam Incorporation Limited, was founded in 2007 by Vishal Mehta. It is headquartered in GIFT City, Gujarat, India with offices in Delhi, Mumbai and Bengaluru. The company was started with an initial capital of 10-15 crore. The company started with an online marketplace and expanded into software enabling services in 2011.
In 2008, Infibeam acquired Picsquare.com, a personalised photo printing website. In 2014, it acquired Odigma, a digital marketing company, for .
In 2014, Infibeam Avenues became the first internet registry in India to launch a generic, top-level domain 'dot triple O' (.OOO).
In March 2017, Infibeam Avenues and Avenues (India) Private Limited entered into a binding agreement to merge and take control of CCAvenue through Equity Stake.
Infibeam Avenues Limited became India's first listed e-commerce company, listing on the BSE and the NSE in 2016.
In 2017, after the merger with Avenues India Pvt Limited, Infibeam Avenues Limited entered the fintech space. It partnered and contracted with Jio Platforms and its affiliates to license its enterprise e-commerce software platform and enterprise digital payment platform for their internal businesses.
In 2017, it signed a Memorandum of Understanding (MOU) to acquire DRC Systems, a provider of ERP solutions and customized software for e-commerce applications.
Acquisitions and partnership
In 2017, Infibeam and its other partners received a contract to manage the Government e-Marketplace (GeM) portal.
In March 2017, Infibeam Avenues acquired DRC Systems India Pvt Ltd, a company which provides software services for enterprise E-commerce, ERP and related fields.
In March 2017, Infibeam Avenues and Avenues (India) Private Limited entered into a binding agreement to merge and take control of CCAvenue through an equity stake.
In July 2017, Infibeam Avenues, along with consortium partners, entered into a contract with the Government of India as a managed service provider to design, develop, implement, operate and maintain the Government e-Marketplace (GeM) portal.
In October 2017, Infibeam Avenues entered into a MoU with IL&FS Townships & Urban Assets Limited (ITUAL) for undertaking and implementing projects in the digital space and e-commerce for the Central Government, various state governments and private partners.
In December 2018, Infibeam Avenues invested to take a 48% stake in Mumbai-based digital payments tech firm Instant Global Paytech Pvt. Ltd (IGPL), which operates Go Payments.
In August 2019, Infibeam Avenues partnered with Riyad Bank for digital payments services.
In June 2020, Infibeam Avenues acquired Bengaluru-based Cardpay Technologies Ltd.
In October 2020, Infibeam Avenues partnered with Oman's Bank Dhofar SAOG to provide CCAvenue payment gateway service to process online card transactions of various payment networks for Bank Dhofar SAOG and help the bank authorize online payments for its customers.
On October 14, 2020, Infibeam Avenues entered into a definitive agreement with JPMorgan Chase Bank to use the company's flagship enterprise payment platform CCAvenue for processing transactions for its enterprise clients. The company also signed a deal to license its e-commerce and payment software with Jio Platforms, Reliance Jio's digital platform venture.
In November 2020, Infibeam Avenues collaborated with Bank Muscat, a financial services provider in the Sultanate of Oman.
Businesses
Infibeam's businesses include an e-commerce platform software service through buildabazaar.com, digital payments, domain names and other internet services. The turnover of the company was reported to be 10 billion as of November 2013.
Digital payments
In 2016, Avenues India Pvt Ltd merged with Infibeam to enter the fintech segment. Infibeam Avenues Limited offers enterprise digital payment services under the brand name CCAvenue to banks and enterprise clients in India, the United Arab Emirates, Saudi Arabia, the United States, and internationally. CCAvenue is a PCI DSS 3.2.1 compliant payment gateway platform offering merchants over 240 payment options, including net banking.
Enterprise ecommerce software platform
Infibeam Avenues' enterprise ecommerce software platform BuildaBazaar hosts India's largest online marketplace for government procurement. The company also entered into a definitive agreement with Jio Platforms to offer its enterprise software licence and enterprise digital payments platform to Jio Platforms and its associates for their internal businesses.
On November 20, 2017, Infibeam Avenues launched BillAvenue, an inter-operable digital bill payments platform built over the Bharat Bill Payment System (BBPS) infrastructure to enable service providers to accept bill payments from customers nationwide, through both online and offline channels.
Logistics aggregation platform
ShipDroid, a logistics aggregation platform, launched in January 2015 to provide uniform logistics services to small merchants across all courier partners. The platform covers 20,000 pincodes across 600 cities. It lets merchants choose modes of delivery such as surface, rail, or air, and opt for delivery-commitment SLAs such as express or regular.
Data Centre infrastructure services
The company forayed into the segment of infrastructure, or data-centre-as-a-service, and built a state-of-the-art Tier III data centre in GIFT City, Gandhinagar. It received Tier III design certification from the Uptime Institute, as it is equipped with fully redundant and dual-powered servers, storage, network links and other IT components. Infibeam Avenues has partnered with IBM India to develop, implement and promote blockchain capabilities on the LinuxONE platform, the first of its kind in India.
.OOO registry services
Infibeam Avenues launched the generic top level domain (GTLD) 'dot triple O' or '.OOO' internet registry in India in 2014. This new GTLD is aimed at providing alternate domain registration services to next generation businesses that were unable to locate their brand name or business name on popular domains.
E-book reader
In February 2010, Infibeam launched Pi, an e-book reader that uses E Ink electronic paper technology. Pi has a six-inch, eight-level grayscale, non-backlit display. The device can play music files and read Word documents, and supports 13 Indian languages.
In 2011, Infibeam launched the second version of its e-book reader, Pi2. Pi2 is a touchscreen device and has wireless connectivity. The company claims the battery power lasts up to 8,000 page reads. The company has over 500 thousand e-books on its web store, which readers can purchase via the device.
Infibeam followed the Pi2 with an Android tablet called Phi.
Awards and recognition
Best Digital Payment Processor India Digital Summit 2020, by IAMAI
In February 2018, Infibeam was ranked 418th in The Financial Times and Statista's FT1000 High Growth Companies Asia - Pacific 2018.
Most Innovative Payment Service Provider & Fastest Growing Online Payment Service Provider (U.A.E) International Finance Awards 2019.
Infibeam was conferred with Consumer Durable & E-Retail of the Year Award at E-Retail Award 2018 by Franchise India held on 16–17 April 2018 in JW Marriott Hotel, New Delhi. At the same award function, Infibeam's subsidiary CC Avenue was conferred with "Best Innovation in E-Commerce Payment".
References
Companies based in Ahmedabad
Indian companies established in 2007
Online financial services companies of India
Retail companies established in 2007
Technology companies established in 2007
Companies listed on the National Stock Exchange of India
Companies listed on the Bombay Stock Exchange
|
28531802
|
https://en.wikipedia.org/wiki/Man-Computer%20Symbiosis
|
Man-Computer Symbiosis
|
"Man-Computer Symbiosis" is the title of a work by J.C.R. Licklider, which was published in 1960. The paper represented what we would today consider a fundamental, or key text of the modern computing revolution.
The work describes Licklider's vision of a complementary (symbiotic) relationship between humans and computers at some point in the future. According to Bardini, Licklider envisioned a future time when machine cognition (cerebration) would surpass and become independent of human direction, as a basic stage of development within human evolution. Jacucci describes Licklider's vision as the very tight coupling of human brains and computing machines (cf. brain, the term cohesion, and the general definitions of the term coupling).
As a necessary prerequisite of human-computer symbiosis, Licklider conceived of what he called the "thinking centre". Altogether, these were preconditions for the development of computer networks.
Streeter identifies the time-and-motion analysis, shown in Part 3 of the work, as its main empirical element. In addition, he identified two reasons why Licklider considered a symbiotic human-computer relationship beneficial at all: first, the advantage arising from a mode of computer use whose necessary methodology (trial and error) resembles the methodology of problem-solving through play; and second, the advantage that results from using computers in situations of battle. Foster states Licklider sought to promote computer use in order to "augment human intellect by freeing it from mundane tasks."
Streeter considers Licklider's personal motivating force to have been an escape from the limitations of the dominant mode of computer use of his time, batch processing. Russell thinks Licklider was stimulated by an encounter with the newly developed PDP-1.
Parts of the work
The work shows the following contents:
Part 1
Part 1 is titled Introduction and has 2 sub-headings, Symbiosis (part 1.1) and Between "Mechanically Extended Man" and "Artificial Intelligence" (part 1.2).
Part 1.1 begins by defining the term symbiosis using the illustration of the relationship between two organisms: a fig tree and its pollinator, a type of fig wasp. The article goes on to place the concept of a symbiotic relationship between humans and computers within the larger category of relationships between men and machines generally (man-machine systems), and outlines the author's intentions regarding the future possibility of a relationship that benefits human thinking.
Part 1.2 references J. D. North's "The rational behavior of mechanically extended man" to begin a brief discussion of mechanically extended man, and proceeds to cover developments and anticipated future developments in artificial intelligence.
Part 2
Part 2 is titled Aims of Man-Computer Symbiosis.
Part 3
Part 3 is titled Need for Computer Participation in Formulative and Real-Time Thinking and begins by continuing from a preceding statement on the likelihood of data-processing machines improving human thinking and problem solving. This part proceeds to an outline of an investigation sub-headed A Preliminary and Informal Time-and-Motion Analysis of Technical Thinking, in which Licklider investigated his own activities during the spring and summer of 1957. The discussion includes a statement on the then-current definition of the term computer as a wide class of calculating, data-processing, and information-storage-and-retrieval machines (cf. information storage and retrieval). In the seventh passage of this part, Licklider begins a comparison of the so-called genotypic similarities between humans and computers with a definition of men as:
and ends with the acknowledgement of differences between inherent processing speed and use of language.
Part 4
Part 4 is titled Separable Functions of Men and Computers in the Anticipated Symbiotic Association. In the first passage of this part, Licklider makes reference to the SAGE System. The text goes on to identify ways in which theoretically active computers would function, including interpolating, extrapolating, and converting static equations or logical statements into dynamic models (see also conceptual models). The part concludes with a statement that a potential computer would perform diagnosis, pattern-matching, and relevance-recognizing.
Part 5
Part 5 is the final part of the article and is titled Prerequisites for Realization of Man-Computer Symbiosis. It has five sub-headings, Speed Mismatch Between Men and Computers, Memory Hardware Requirements, Memory Organization Requirements, The Language Problem, and Input and Output Equipment.
Part 5.3. mentions the concept of trie memory (E. Fredkin, "Trie memory," Communications of the ACM, Sept. 1960).
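The trie memory Fredkin describes stores each key character by character along a path of nodes, so retrieval time grows with the length of the key rather than with the number of stored items. The following is a minimal Python sketch of that idea; the class and method names (TrieNode, Trie.insert, Trie.contains) are illustrative assumptions and are not drawn from Fredkin's paper or Licklider's article.

# Minimal trie sketch: each key is stored character by character
# along a path of nodes; a terminal flag marks the end of a key.

class TrieNode:
    def __init__(self):
        self.children = {}      # maps a character to the next node
        self.is_terminal = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, key: str) -> None:
        node = self.root
        for ch in key:
            node = node.children.setdefault(ch, TrieNode())
        node.is_terminal = True

    def contains(self, key: str) -> bool:
        node = self.root
        for ch in key:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_terminal

if __name__ == "__main__":
    t = Trie()
    t.insert("symbiosis")
    print(t.contains("symbiosis"), t.contains("symbol"))  # True False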
Part 5.4 begins by demonstrating the differences between human language and computer language, the latter especially with regard to FORTRAN; the Information Processing Language identified by J. C. Shaw, A. Newell, H. A. Simon, and T. O. Ellis (in A command structure for complex information processing, Proc. WJCC, pp. 119-128; May, 1958); and ALGOL (and related systems). It continues from the second passage with the statement:
The above quote particularly recognises the existence of human goals (see also Goal orientation).
References of Man-Computer Symbiosis
At the time, acoustics was one route by which a number of budding computer scientists entered the field. The work references 26 studies, of which fourteen concern acoustics and related areas of investigation and fifteen concern computing and related studies, including four on the subject of chess.
IRE Transactions
Institute of Radio Engineers (IRE) Transactions ceased publishing during 1962, and is now publishing instead as IEEE Transactions on Systems, Man, and Cybernetics: Systems, IEEE Transactions on Cybernetics, and IEEE Transactions on Human-Machine Systems.
Later developments
In August 1962, Licklider and Welden Clark jointly published On-Line Man-Computer Communication.
MIT published a paper during 1966, written by Warren Teitelman, entitled Pilot: A Step Towards Man-Computer Symbiosis.
At the time of the publication of one paper, in 2004, there were very few computer applications known to the authors that exhibited the qualities of computers identified by Licklider in his 1960 article: being human-like in the sense of collaborating and communicating in human-like ways. As part of their paper, the authors (N. Lesh et al.) mention a discussion of prototypes under development by Mitsubishi Electric Research Laboratories.
See also
Symbiosis
History of the Internet#Inspiration
Darwin among the Machines
Electronics
Douglas Engelbart
GOAL agent programming language
Human factors integration
Intelligence amplification
References
External links
Douglas Engelbart, Augmenting Human Intellect, published by the Doug Engelbart Institute (originally published October 1962)
Cybernetics
History of human–computer interaction
Texts related to the history of the Internet
Transhumanism
1960 documents
|
51528560
|
https://en.wikipedia.org/wiki/Meizu%20PRO%205%20Ubuntu%20Edition
|
Meizu PRO 5 Ubuntu Edition
|
The Meizu PRO 5 Ubuntu Edition is a smartphone designed and produced by the Chinese manufacturer Meizu, which runs on Ubuntu Touch. It is an alternative version of the Meizu PRO 5. It was unveiled on February 17, 2016.
History
Rumors about an Ubuntu-powered edition of the Meizu PRO 5 appeared after photos of a Meizu PRO 5 running Ubuntu Touch 15.04 had been leaked. It was reported that this device would be showcased at the Mobile World Congress in April 2016.
Release
The Meizu PRO 5 Ubuntu was officially released on February 17, 2016. International pre-orders began on February 22, 2016, through online retailer JD.com.
Features
Ubuntu Touch
The PRO 5 Ubuntu Edition runs Ubuntu Touch, a mobile operating system based on the Ubuntu Linux distribution and developed by Canonical. Its goal is to provide a free and open-source mobile operating system and to deliver a different approach to user experience by focusing on so-called "scopes" instead of traditional apps.
Hardware and design
The technical specifications and outer appearance of the PRO 5 Ubuntu Edition are identical to those of the Meizu PRO 5.
The Meizu PRO 5 features a Samsung Exynos 7420 Octa with an array of eight ARM Cortex CPU cores, an ARM Mali-T760 MP8 GPU and 3 GB of RAM. Meizu Global Brand Manager Ard Boudeling explained in November 2015 that Meizu decided to use the Samsung Exynos SoC because it is “currently [..] the only option if you want to build a genuine premium device”.
The Meizu PRO 5 Ubuntu Edition has a full-metal body, which measures x x and weighs . It has a slate form factor, being rectangular with rounded corners and has only one central physical button at the front.
The PRO 5 Ubuntu Edition is only available with 32 GB of internal storage and a champagne gold body.
The PRO 5 Ubuntu Edition features a 5.7-inch AMOLED multi-touch capacitive touchscreen display with an FHD resolution of 1080 by 1920 pixels. The pixel density of the display is 387 ppi.
In addition to the touchscreen input and the front key, the device has a volume/zoom control and the power/lock button on the right side, and a 3.5 mm TRS audio jack powered by a dedicated Hi-Fi amplifier supporting 32-bit audio at sample rates of up to 192 kHz.
The PRO 5 Ubuntu Edition uses a USB-C connector for both data connectivity and charging.
The Meizu PRO 5 Ubuntu Edition has two cameras. The rear camera has a resolution of 21.16 MP, a ƒ/2.2 aperture and a 6-element lens. Furthermore, the phase-detection autofocus of the rear camera is laser-supported.
The front camera has a resolution of 5 MP, a ƒ/2.0 aperture and a 5-element lens.
Reception
While the device itself had been praised for its specifications and build quality, critics have criticized the Ubuntu Touch operating system.
GSMArena stated that "overall, all the bits and pieces that made the Meizu Pro 5 great are still present, but even on paper, we can already see quite a lot of compromises brought about by the new OS in the Ubuntu Edition at hand".
Softpedia gave the device a rating of 4 out of 5 stars, concluding that they "would recommend [the] Meizu PRO 5 Ubuntu Edition to those who like smartphones with big screens, Ubuntu Phone fans [..], as well as very curious people who get bored easily and always enjoy trying something new".
See also
Meizu
Meizu PRO 5
Comparison of smartphones
List of open-source mobile phones
References
External links
Official product page Meizu
Ubuntu Touch devices
Mobile phones introduced in 2015
Meizu smartphones
Discontinued smartphones
|
57037415
|
https://en.wikipedia.org/wiki/Unix%20%28disambiguation%29
|
Unix (disambiguation)
|
Unix may refer to:
Unix, a family of operating systems, the first originally developed by Bell Labs at AT&T
Single UNIX Specification, the trademark UNIX and the terms of its use, now owned by The Open Group
POSIX (IEEE 1003 or ISO/IEC 9945), the basic Unix interface standard
OpenServer (formerly SCO Unix), a System V-based operating system now owned by Xinuos
UNIX System V, the orthodox family of licensed Unices, whose canonical form, SCO Unix, became OpenServer
BSD Unix, the widespread family of licensed Unices originating from UC Berkeley
BSD licenses, a family of software licenses, sometimes called "Unix license"
Unix System Laboratories, the division that developed Unix, especially the System V family
SCOsource, the owner of Unix intellectual property, especially as it relates to the canonical form SCO Unix
List of Unix systems, for a Unix operating system
Unix-like, the extended family of operating systems inspired by and having a general similarity with Unix
Unix Magazine, a defunct magazine covering the Unix and Unix-like software sector
UNIX Review, a defunct magazine covering the Unix software sector
See also
History of Unix
Unix wars, the battle between BSD Unix and System V for supremacy
UnixWorld, a defunct magazine covering the Unix software sector
UNICE (disambiguation)
Eunice (disambiguation)
Eunuch (disambiguation)
|
27750422
|
https://en.wikipedia.org/wiki/Harry%20L.%20Nelson
|
Harry L. Nelson
|
Harry Lewis Nelson (born January 8, 1932) is an American mathematician and computer programmer. He was a member of the team that won the World Computer Chess Championship in 1983 and 1986, and was a co-discoverer of the 27th Mersenne prime in 1979 (at the time, the largest known prime number). He also served as editor of the Journal of Recreational Mathematics for five years. Most of his professional career was spent at Lawrence Livermore National Laboratory where he worked with some of the earliest supercomputers. He was particularly noted as one of the world's foremost experts in writing optimized assembly language routines for the Cray-1 and Cray X-MP computers. Nelson has had a lifelong interest in puzzles of all types, and since his retirement in 1991 he has devoted his time to his own MiniMax Game Company, a small venture that helps puzzle inventors to develop and market their products.
In 1994, Nelson donated his correspondence from his days as editor of the Journal of Recreational Mathematics to the University of Calgary Library as part of the Eugène Strens Recreational Mathematics Special Collection.
Biography
Early years
Nelson was born on January 8, 1932, in Topeka, Kansas, the third of four children. He attended local schools and was active in the Boy Scouts, earning the rank of Eagle Scout. Nelson attended Harvard University as a freshman, but then had to drop out for financial reasons. He attended the University of Kansas as a sophomore, but was able to return to Harvard for his junior and senior years, receiving a bachelor's degree in mathematics from Harvard in 1953. In 1952, just before the start of his senior year, he married his high school sweetheart, Claire (née Rachael Claire Ensign). After graduating, he was inducted into the U.S. Army, but was never deployed overseas. He was honorably discharged in 1955, having attained the rank of sergeant. He enrolled in graduate studies at the University of Kansas, earning a master's degree in mathematics in 1957. It was during this period that he became fascinated by the then-new programmable digital computer. Nelson worked towards a Ph.D. until 1959, but the combination of his GI Bill educational benefits running out, needing to support a wife and three children, and the mathematics department rejecting his proposal to do his thesis on computers convinced him to leave the university without completing his Ph.D., and to get a job.
Initially, Nelson worked for Autonetics, an aerospace company in southern California. In 1960 he went to work for the Lawrence Radiation Laboratory (later renamed Lawrence Livermore National Laboratory or LLNL), in Livermore, California. He remained working there until his retirement in 1991. Nelson worked on a variety of computers at LLNL, beginning with the IBM 7030 (nicknamed Stretch). In the 1960s, early units of a new computer were typically delivered as "bare metal," i.e. no software of any kind, including no compiler and no operating system. Programs needed to be written in assembly language, and the programmer needed to have intimate and detailed knowledge of the machine. A lifelong puzzle enthusiast, Nelson sought to understand every detail of the hardware, and earned a reputation as an expert on the features and idiosyncrasies of each new machine. Over time he became the principal person at LLNL in charge of doing acceptance testing of new hardware.
The 27th Mersenne Prime Number
During the process of acceptance testing, a new supercomputer would typically run diagnostic programs at night, looking for problems. During the acceptance testing of LLNL's first Cray-1 computer, Nelson teamed up with Cray employee David Slowinski to devise a program that would hunt for the next Mersenne prime, while simultaneously being a legitimate diagnostic program. On April 8, 1979, the team found the 27th Mersenne prime: 2^44497 - 1, the largest prime number known at that time.
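The standard way to test whether a Mersenne number 2^p - 1 is prime is the Lucas-Lehmer test, which underlies searches of this kind. The 1979 search itself ran as hand-optimized Cray code, so the short Python sketch below is only an illustration of the underlying mathematics, not the program Nelson and Slowinski wrote.

# Lucas-Lehmer primality test for Mersenne numbers 2^p - 1
# (p must be an odd prime; this sketch is illustrative only).

def lucas_lehmer(p: int) -> bool:
    """Return True if the Mersenne number 2**p - 1 is prime."""
    m = (1 << p) - 1          # the Mersenne number 2^p - 1
    s = 4
    for _ in range(p - 2):    # iterate s -> s^2 - 2 (mod m), p - 2 times
        s = (s * s - 2) % m
    return s == 0

if __name__ == "__main__":
    for exponent in (3, 5, 7, 11, 13):
        print(exponent, lucas_lehmer(exponent))
    # Expected: 3, 5, 7, 13 -> True; 11 -> False (2^11 - 1 = 2047 = 23 * 89)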
Computer Chess
In 1980, Nelson came across a copy of the chess program Cray Blitz written by Robert Hyatt. Using his detailed knowledge of the Cray-1 architecture, Nelson re-wrote a key routine in assembly language and was able to significantly speed up the program. The two began collaborating along with a third team member, Albert Gower, a strong correspondence chess player. In 1983, Cray Blitz won the World Computer Chess Championship, and successfully defended its title in 1986.
The 1986 Championship was marred by controversy when the HiTech team, led by Hans Berliner, accused the Cray Blitz team of cheating. The charge was investigated for a few months by the tournament director, David Levy, and dismissed. Despite the dismissal, the experience somewhat soured the computer chess scene for Nelson, although he remained active until the ACM discontinued the annual computer chess tournaments in 1994.
Puzzles and problems
He is active with the International Puzzle Party, and is a longtime contributor to the Journal of Recreational Mathematics. He served as Editor of the Journal for 5 years, and continues to sit on its editorial board.
References
Further reading
Robert M. Hyatt and Harry L. Nelson, "Chess and Supercomputers, details on optimizing Cray Blitz", proceedings of Supercomputing '90 in New York (354-363).
External links
Transcript of a talk about Cray Blitz given at a University of California, Davis computer science seminar
Video of an interview with Harry Nelson from the Computer History Museum
Image of Nelson in front of a Cray X-MP
Computer programmers
Computer chess people
Academic journal editors
Recreational mathematicians
Living people
1932 births
Harvard University alumni
University of Kansas alumni
|
35235737
|
https://en.wikipedia.org/wiki/Disk%20Drill%20Basic
|
Disk Drill Basic
|
Disk Drill Basic is a freeware version of Disk Drill, a data recovery utility for Windows and macOS, developed by Cleverfiles. Disk Drill Basic was introduced in 2010 and was primarily designed to recover deleted or lost files from hard disk drives, USB flash drives and SSD drives with the help of Recovery Vault technology. In 2015 CleverFiles released Disk Drill for Windows.
Recovery vault
The core of Disk Drill Basic is its Recovery Vault technology, which allows users to recover data from a medium that was secured by Recovery Vault beforehand.
Recovery Vault runs as a background service and remembers the metadata and properties of deleted data, making it possible to restore deleted files with their original file names and locations.
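As a rough illustration of this kind of metadata journaling, a background process might periodically record each file's path and basic attributes in a catalog, so that files deleted later can still be identified by name and location. The Python sketch below is a generic, assumption-based example (the catalog file name and function names are hypothetical), not CleverFiles' actual implementation:

# Illustrative metadata catalog: snapshot file attributes now,
# report catalog entries whose files have since disappeared.

import os
import json

CATALOG = "recovery_vault_catalog.json"   # hypothetical catalog location

def snapshot_metadata(directory: str) -> None:
    """Walk a directory tree and record basic metadata for every file."""
    catalog = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            try:
                st = os.stat(path)
            except OSError:
                continue                   # skip files that vanish mid-walk
            catalog[path] = {"size": st.st_size, "mtime": st.st_mtime}
    with open(CATALOG, "w") as fh:
        json.dump(catalog, fh)

def report_missing() -> list:
    """Return catalogued paths whose files no longer exist on disk."""
    with open(CATALOG) as fh:
        catalog = json.load(fh)
    return [path for path in catalog if not os.path.exists(path)]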
Supported file systems
The Mac version of Disk Drill Basic provides recovery from HFS/HFS+ and FAT disks and partitions (only the paid Pro version can actually recover files; the free version only allows previewing them).
In August 2016, Disk Drill 3 announced support for macOS Sierra.
Data backup
Disk Drill Basic can also be used as a backup utility for creating copies of a disk or partition in DMG image format.
Version for Windows
In February 2015, CleverFiles launched a Windows version of its data recovery software for macOS. While in beta, Disk Drill for Windows is licensed as freeware and allows users to recover deleted files from storage devices that can be accessed from a Windows PC. Disk Drill for Windows also includes the Recovery Vault technology and works on Windows XP or newer (Windows Vista, 7, 8, 10). The software is compatible with FAT and NTFS, as well as HFS+ and EXT2/3/4 file systems.
In September 2016, CleverFiles announced the immediate availability of Disk Drill 2 for Windows, a new version of its data recovery application.
Release history
See also
Data recovery
Data remanence
File deletion
List of data recovery software
Undeletion
References
External links
Data recovery software
Utilities for macOS
Freeware
|
21108277
|
https://en.wikipedia.org/wiki/Troy%20Trojans%20baseball
|
Troy Trojans baseball
|
The Troy Trojans baseball team is the varsity intercollegiate baseball team of Troy University, located in Troy, Alabama, United States. It competes in the NCAA Division I Sun Belt Conference. The program began play in 1911. In 1986 and 1987, Troy won Division II national championships under head coach Chase Riddle. As a Division II program, the team won 10 conference titles and appeared in 14 NCAA Regionals and 7 College World Series.
As of the end of the 2020 season, the program's overall record is 1,774–1,114–3. Troy is the 36th-winningest baseball program of all time among Division I programs.
History
Early history
Few schools in the South, especially in the state of Alabama, possess as rich a history as that of the Troy baseball program. In the past 40 years alone, Trojan baseball squads have claimed more than 1,300 victories, 14 conference championships, 7 NCAA Regional crowns, and back-to-back Division II NCAA National Championships in 1986 and 1987.
Old photographs show evidence that Troy fielded an intercollegiate baseball team as early as the turn of the 20th century, but school records only date back to 1931. In the team's early years, the players were known as the "Teachers", since the college was primarily an educational institution for teachers. In 1931, the Troy Normal School moved all home games to what is now known as Riddle-Pace Field; the team had previously played on the quad in front of Shackelford Hall, and the relocation occurred because baseballs kept breaking the building's windows.
Chase Riddle era
In 1979, Troy State hired Chase Riddle, who had been a manager and scout for the St. Louis Cardinals of Major League Baseball. In his first year as head coach, he led the Trojans to a then-school-record 33 wins and a second-place finish in the Gulf South Conference Eastern Division. That same season, Troy swept a two-game series from the Alabama Crimson Tide, a notable accomplishment for an in-state Division II school.
In Riddle's second season at the helm, he accomplished even more. His 1980 team finished the season 30–12, including a significant 5–3 win over the then-#7-ranked Florida State Seminoles. The Trojans won the Gulf South Conference championship and the NCAA Central Regional, advancing all the way to the NCAA College World Series. They were eliminated from the World Series with a 1–2 record, but Troy had established itself as a new powerhouse baseball program.
The 1981 and 1982 seasons were also huge successes for Riddle and his program. Troy won another Gulf South Conference championship in 1981 with another appearance in the College World Series, finishing the season with a 37–10 record. The 1982 season nearly mirrored the previous one, with Troy again winning the conference championship and returning to the College World Series. Troy also garnered big wins against Auburn and Clemson that year.
From 1983 to 1985, the Trojans would go 105–42, making three NCAA Regional appearances and two College World Series appearances.
In 1986, the Trojans defeated Columbus State, 5–0, to win their first NCAA College World Series (Division II). They finished the season as the #1-ranked team in the country, with a 46–8 overall record, and a 12–0 conference record to win another Gulf South Conference title. In 1987, they followed up with yet another national championship by defeating Tampa, 7–5. For his successes, head coach Chase Riddle was named National Head Coach of the Year by the American Baseball Coaches Association in 1986 and 1987.
Chase Riddle retired in 1990, finishing his career with a record of 434–149–2. Less than one year after his retirement, his #25 jersey was retired. Riddle was inducted into the Alabama Sports Hall of Fame in 2000, the Wiregrass Sports Hall of Fame in 2005, and the Troy Sports Hall of Fame in 2012.
John Mayotte era
Upon transitioning from Division II to Division I, the Trojans finished their Division II tenure with a 38–25 overall record in NCAA postseason play.
In the Trojans' last season of Division II play, coach John Mayotte continued the program's success, leading the team to another College World Series appearance, where it was eliminated with a 2–2 record. The Trojans finished that season ranked #3 in the nation.
Mayotte then guided Troy into its new era of Division I baseball, leading the team to its first Division I NCAA Regional in 1995, only the program's second season at that level. Troy was eliminated from that Regional by Florida State and Ole Miss. In 1997, Mayotte again led Troy to an NCAA Regional, where the team was eliminated by Alabama and Southern California.
Bobby Pierce era
In 2003, Troy hired Bobby Pierce as head coach. In his fourth season (2006), he led Troy to its third appearance in an NCAA Regional. In what became one of Pierce's best seasons, the Trojans went 47–16 and won their first Sun Belt Conference championship. They entered the Tuscaloosa Regional as the #2 seed while ranked #27 nationally by the National Collegiate Baseball Writers Association. Troy opened against #3 seed Southern Miss and won 10–8, its first Regional victory since joining Division I, then fell to #1 seed Alabama before beating Southern Miss a second time. Troy was finally eliminated from the Regional by Alabama in the final game. Three players from the 2006 team were taken in that year's MLB Draft: Tom King, Mike Felix, and Jarred Keel.
The Trojans continued to have success in the years after 2006, appearing in the Top 25 at some point in nearly every season, yet never finishing with a national ranking. That trend changed in 2013.
The 2013 season saw Troy field one of the strongest batting lineups in program history. The Trojans ranked in the NCAA's Top 10 in total home runs, hitting 54 that season. After big wins over Texas Tech and Auburn, the Trojans won another Sun Belt Conference championship, going 40–18 in the regular season. Troy entered the Tallahassee Regional ranked #26 by Collegiate Baseball, earning the #3 seed. The Trojans defeated Alabama 5–2 in the first round, then lost to Florida State. Sent to the elimination bracket, Troy faced Alabama again and won a 9–8 thriller. Troy met Florida State in the Regional final but was eliminated by the Seminoles, 4–11.
The Trojans finished the 2013 season with their first-ever Top 25 rankings, placing #23 in Collegiate Baseball and #25 in Baseball America.
Bobby Pierce finished his career at Troy with a 450–313 record, leading the program to its first Top 25 finish and four NCAA Regional appearances. For his accomplishments, Pierce was inducted into the Alabama Baseball Coaches Hall of Fame in 2010, the Wiregrass Hall of Fame in 2017, and the Troy Sports Hall of Fame in 2018.
Mark Smartt era
After the 2016 season, long-time assistant coach Mark Smartt was hired as Troy's new head coach. Smartt is a Troy University alumnus and was a member of the Troy State teams that won Division II national championships in 1986 and 1987.
In 2017, Smartt's team went 31–25 and achieved the rare feat of defeating every in-state opponent on its schedule, beating Alabama, Auburn, UAB, South Alabama, Alabama State, Samford, and Jacksonville State that season.
In 2018, Smartt led Troy to a second-place finish in the Sun Belt Conference and a 42–21 record. The team reached the Sun Belt Conference Tournament final against #19 Coastal Carolina, losing 6–11 to the Chanticleers. Troy received an at-large bid to the NCAA Tournament, where it defeated #18 Duke 6–0 in the first round. The Trojans' fortunes faded from there: they lost to #9 Georgia in the next round and were eliminated in a rematch with Duke. Troy finished the season with the sixth-most wins in school history, picking up several wins over Top 25 teams, including #17 Coastal Carolina, #18 Duke, and #22 Auburn.
Coaches
Riddle-Pace Field
Riddle-Pace Field, located on the university's campus, is the program's home venue. It is named for Chase Riddle, former head coach of the program, and Matthew Downer Pace, who served Troy University from 1891 to 1941 as Professor of Mathematics, Dean, and President.
The stadium features a brick concourse, a three-story press box, restrooms, a concession stand, and a merchandise booth. It has a capacity of 2,000 spectators, including 1,700 bleacher seats and 300 chair-back seats. Additional spectator areas are located beyond the left field fence and adjacent to the home-plate dugout. The Lott Baseball Complex, built along the left field fence, houses coaches' offices, a player locker room and lounge, and an indoor batting cage.
The field's natural grass was removed and replaced with artificial turf in 2008; Troy was one of only three college baseball programs at the time to make the switch. A new drainage system installed along with the turf allows the team to resume play shortly after heavy rain.
The field has become known for its "Monster" wall in right field, a 27-foot-tall black wall with a built-in scoreboard and video board. It is currently one of the largest outfield walls in college baseball.
Attendance
Attendance Rankings
Troy has been ranked in the NCAA's Top 50 for annual average home attendance for multiple seasons since the early 2000s.
Attendance Records
Below is a list of Troy's top six single-game attendance figures.
All-Americans
Troy has produced 58 All-American players, as well as 6 Academic All-Americans. Since Troy joined the NCAA's Division I in 1994, the program has had 16 players named as All-Americans.
The following is a list of all First Team All-Americans Troy has had since joining Division I:
Award Winners/Finalists
ABCA National Coach of the Year
Chase Riddle – 1986, 1987
John Mayotte – 1993
Rawlings Gold Glove Award
Brett Henry – 2009
Brandon Lockridge – 2018
Brooks Wallace Award Finalist
Adam Bryant – 2010
Tyler Vaughn – 2013
MLB Draft
Troy has had 59 total players selected in the MLB Draft in its history.
Notable alumni
The following players made their way onto Major League rosters from either being drafted or signed as free agents:
Clint Robinson
Chase Whitley
Fred "Scrap Iron" Hatfield
Mackey Sasser
Danny Cox
TJ Rivera
Mike Rivera
Tom Drake
Danny Breeden
Mike Pérez
Ray Stephens
Tom Gregorio
Grady Wilson
Championships
Since Troy's first year in Division I in 1994, the program has won six regular-season conference titles and three conference tournament titles.
In the team's short stint in the Mid-Continent Conference, they won the regular season title in three straight seasons, from 1995 to 1997, to go along with two tournament titles in 1995 and 1997. In Troy's last season in the Atlantic Sun Conference in 2005, they won the regular season title. Since joining the Sun Belt Conference, the team has won three regular season titles in 2006, 2011, and 2013. Troy also won the SBC tournament title in 2006.
The program has compiled a total of 20 conference championships and two D-II national championships since its inception.
Conference Season Championships
Conference Tournament Championships
Division II National Championships
In 1986, the Trojans defeated Columbus State, 5–0, to win the Division II College World Series. In 1987, they followed up with yet another national championship by defeating Tampa, 7–5.
Then-head coach Chase Riddle was named National Head Coach of the Year in 1986 and 1987 for winning back-to-back national championships.
Top 25 Finishes
Yearly results
Division I
Postseason Results
Division I
NCAA Regionals
Division II
NCAA Regionals
College World Series
See also
List of NCAA Division I baseball programs
References
External links
|
3023105
|
https://en.wikipedia.org/wiki/Trojan%20Odyssey
|
Trojan Odyssey
|
Trojan Odyssey is a Dirk Pitt novel by Clive Cussler, first published in 2003.
Plot summary
The book opens with a fictional historical overview/flashback to events of Homer's Odyssey, but alters the original plot. In the present day, Dirk Pitt, his son Dirk Pitt, Jr., his daughter Summer Pitt, and friend Al Giordino are involved in the search for the source of a brownish contamination in the ocean's waters, which leads to a diabolical plot that they must unravel and ultimately topple. As this is occurring, discoveries relating to the "true" tale of the Odyssey are made.
The villain is the mysterious billionaire businessman Specter, a huge man who disguises his identity by wearing sunglasses, a hat, and a scarf over his face.
The book also features a significant event between Dirk Pitt and Congresswoman Loren Smith. He and Al retire from their life of daredevilry and settle down, and Pitt asks for Loren's hand in marriage; she, too, steps down.
Pitt assumes the role of head of NUMA as Admiral Sandecker accepts the Vice Presidency. This marks a turning point in the Dirk Pitt series, as confirmed by the next novel, which features Dirk Pitt Jr. as the primary protagonist.
As with every Dirk Pitt novel, this one features a classic car, in this case a Marmon V-16 Town Car. A custom-built 1952 Meteor DeSoto hot rod with a modified engine is also briefly mentioned.
Trivia
Cussler dedicated Trojan Odyssey to his wife, who died of cancer in 2003.
The main antagonist in the novel shares his alias ("Specter") with the father of the antagonists in Inca Gold.
Loren's father mysteriously returns to attend the wedding, long after being killed and having his body hidden aboard a sunken aircraft in a remote Colorado lake, as depicted in Vixen 03.
See also
Where Troy Once Stood
References
External links
Paperbackbazaar Review of Trojan Odyssey (about halfway down the page).
2003 American novels
Dirk Pitt novels
G. P. Putnam's Sons books
Novels based on the Odyssey
|